Can the subtraction in the Lucas-Lehmer test ever return a negative value?
I am currently working on this lovely project: [URL]https://github.com/BrainStone/MersennePrime-Benchmark[/URL]
And naturally I'd like to make sure that my LL implementations are correct. I've been worrying about whether the subtraction step in the LL test can ever return -1 or -2. Specifically, as a potential optimization of the calculation, I take the mod after the multiplication but not after the subtraction (since it shouldn't really matter how often you reduce). So essentially like this: [$]S\left(k+1\right) \equiv \left(S\left(k\right)^2 \bmod M_p\right) - 2[/$] Why am I interested in those cases? Simply because I want to test for edge cases, as I will use the [C]remainder[/C] method, which is like mod but can return negative values, and I just want to make sure that those implementations also pass these edge cases. My preliminary search didn't turn up any such cases. So my question is whether that can even happen at all. And if it can happen, are there known cases for both Mersenne primes and Mersenne numbers that aren't prime (I'm worried about false positives, not false negatives)? |
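For anyone unfamiliar with the two methods being contrasted here, the difference is easy to see directly (a minimal sketch using the standard [C]java.math.BigInteger[/C] API; the numbers are only illustrative):

```java
import java.math.BigInteger;

public class ModVsRemainder {
    public static void main(String[] args) {
        BigInteger s = BigInteger.valueOf(-1);
        BigInteger mp = BigInteger.valueOf(127); // M_7 = 2^7 - 1

        // mod() always returns a value in [0, mp-1]
        System.out.println(s.mod(mp));       // 126
        // remainder() keeps the sign of the dividend
        System.out.println(s.remainder(mp)); // -1
    }
}
```

Both results lie in the same residue class mod 127, which is why the choice only affects the representation, not the underlying arithmetic.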
1 is a quadratic residue mod p, so I'd say yes, you might see a value of 1 during the LL test. But it goes into loop 1,1,1,1 etc.
|
[QUOTE=retina;539914]1 is a quadratic residue mod p, so I'd say yes, you might see a value of 1 during the LL test. But it goes into loop 1,1,1,1 etc.[/QUOTE]
I'm performing a LL test. So there's always the subtraction before the next squaring. |
[QUOTE=BrainStone;539915]I'm performing a LL test. So there's always the subtraction before the next squaring.[/QUOTE]Yes, I know.
1 - 2 = -1; (-1)^2 = 1 |
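retina's observation can be checked directly: once S is congruent to -1, the iteration never leaves that class, since (-1)^2 - 2 = -1. A quick sketch (M_13 chosen only for illustration):

```java
import java.math.BigInteger;

public class MinusOneFixedPoint {
    public static void main(String[] args) {
        BigInteger mp = BigInteger.valueOf(8191);   // M_13 = 2^13 - 1
        BigInteger s = mp.subtract(BigInteger.ONE); // 8190, i.e. S == -1 (mod M_13)

        // Apply the LL iteration a few times: S stays at -1 forever.
        for (int i = 0; i < 4; i++) {
            s = s.multiply(s).subtract(BigInteger.valueOf(2)).mod(mp);
            System.out.println(s); // prints 8190 each time
        }
    }
}
```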
I feel stupid now...
In any case I'm wondering if that value of 1 (or -1) could ever be reached in the first place, as a search over primes up to around 7000 didn't turn up any such cases |
[QUOTE=BrainStone;539913]I am currently working on this lovely project: [URL]https://github.com/BrainStone/MersennePrime-Benchmark[/URL]
And naturally I'd like to make sure that my LL implementations are correct. Now I've been worrying about whether the subtraction part in the LL test can ever return -1 or -2. What I'm talking about specifically is that for potential optimizations of the calculation, I take the mod after the multiplication but not after the subtraction (as that shouldn't really matter how many times I mod or not). [/QUOTE] Huh? This last statement makes no sense. One computes the remainder once per iteration. More than that only slows the computation [dramatically!] And if using a DWT one does not explicitly compute a remainder at each iteration. [QUOTE] So essentially like that: [$]S\left(k+1\right) \equiv \left(S\left(k\right)^2 \bmod M_p\right) - 2[/$] Why am I interested in those cases? Simply because I want to test for edge cases as I will use the [C]remainder[/C] method which is like mod but can return negative values. And I just want to make sure that those implementations also pass these edge cases. From my preliminary search it didn't seem like there were any such cases. So my question is if that could even happen at all. And if it can happen, are there known cases for both Mersenne Primes and Mersenne Numbers that aren't primes (don't want false positives, not false negatives)?[/QUOTE] May I suggest that before going any further you need to learn how modular arithmetic works and also get a correct statement of how the LL test works?? You also need to study modular multiplication in detail. No offense, but you should study the underlying arithmetic before writing code. You seem to be confused. (1) The iteration is S_{n+1} = [(S_n)^2 - 2] mod M_p. (2) Do you understand that whether negative numbers exist when doing modular arithmetic depends on the representation that you choose? Arithmetic mod N can use the representation [0, N-1] or it can use the representation [-(N-1)/2, (N-1)/2] (for odd N). A number such as -2 mod N can use the -2 representation or it can use N-2. 
You should also learn that numbers mod N fall into equivalence classes. Most people erroneously believe that the expression (c = a mod N) means c equals the remainder when a is divided by N. But this is not the correct mathematical definition. The correct definition is that (c = a mod N) only means that (c-a) is divisible by N. Numbers mod N fall into [i]equivalence classes[/i]. Thus, (c = a mod N) is not an equation. It is an equivalence relation. (3) When performing an LL test you should also understand that you are actually computing an exponentiation in the twisted subgroup of the finite field GF(q^2) where q = M_p. It is quite similar to doing ordinary modular exponentiation in the unit group of Z/NZ. You are simply working in a different group. The test is really analogous to validating the order of an element in a unit group. Whereas an ordinary PRP test checks that the order is q-1 in the unit group, the LL test checks that the order is q+1 in the twisted subgroup. You should also study Lagrange's theorem. [understanding this last part is necessary only if you want to learn how/why the test works]. |
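The two representations mentioned above are easy to convert between. A hedged illustration (the helper name [C]balanced[/C] is mine, not from any library):

```java
import java.math.BigInteger;

public class BalancedRep {
    // Map a residue from the standard range [0, n-1] into the
    // balanced range [-(n-1)/2, (n-1)/2], assuming n is odd.
    static BigInteger balanced(BigInteger a, BigInteger n) {
        BigInteger r = a.mod(n);
        // Residues in the upper half get shifted down by n.
        return r.compareTo(n.shiftRight(1)) > 0 ? r.subtract(n) : r;
    }

    public static void main(String[] args) {
        BigInteger n = BigInteger.valueOf(127);
        System.out.println(balanced(BigInteger.valueOf(125), n)); // -2 (same class as 125)
        System.out.println(balanced(BigInteger.valueOf(2), n));   // 2
    }
}
```

Both outputs name the same residue classes as 125 and 2; only the chosen range differs.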
BrainStone: It doesn't matter. When you do the subtraction, if you get an overflow then adjust it to within range, if you don't, then do nothing. I don't see the big deal, it might never overflow and then it costs you nothing. Then it will work for ALL values; rather than praying, hoping and guessing that it [i]might[/i] work.
[size=1]Premature optimisation?[/size] |
This interests me too because I implemented the LLR test (for Riesel primes, but it works on Mersenne primes too, just set k=1): [URL]https://github.com/dkull/rpt/[/URL] and thought about this myself. The test is very similar to the LL test. Something is gnawing at me and saying that it can't become one... though I am really interested in someone's educated opinion.
You probably won't find a case by brute force, because the modulus gets really large really fast and thus the chance (assuming basically random chance - and if it's even possible) of landing on 1 after the mod operation would become practically nonexistent. EDIT: I wouldn't be the least surprised if it actually can become one |
R.D. Silverman, I have a fair understanding of modulus and equivalence classes.
And I do know that the iteration is [$]S \left( n + 1 \right) \equiv \left( {S \left( n \right)}^2 - 2 \right) \bmod M_p[/$]. The math here is pretty clear. However, in terms of optimization we are in the realm of programming and computer science. And while they are inherently linked, there are some differences. Now you mentioned that I should only do one modulus per iteration, as anything else will cost a lot of performance. Before my testing I would have agreed. But what you think is fastest is not always the fastest, and that's why I want to benchmark it. Now I'm fully aware that Java's BigInteger class is terribly slow. I'm using it regardless because it is built in and I'm more or less just playing around. Another thing about that class is that all values are immutable. So any arithmetic operation creates a new instance (which makes the 2-mods-vs-1-mod-per-iteration question sound even more clear-cut in favor of modding only once, right?). So among others I tried these 3 iteration steps: [LIST][*][C]S.multiply(S).subtract(2).mod(M_p)[/C][*][C]S.multiply(S).mod(M_p).subtract(2)[/C][*][C]S.multiply(S).mod(M_p).subtract(2).mod(M_p)[/C][/LIST]My current benchmarks indicate that the last is the fastest, the second only marginally slower, and the first noticeably slower than the other two. Which was quite surprising. But I digress. Now onto the real issue and why I even brought up negative numbers. In many C-like languages there is a modulus/remainder operator: [C]%[/C] It works great, but it has one flaw: [C]-1 % 10 -> -1[/C] Or put in words: if the left operand is negative, the result is negative too. Forcing a positive result is computationally more expensive. Now in the above case the [C]mod[/C] function behaves like the mathematical modulus and only ever returns non-negative values. But there's also the [C]remainder[/C] function, which behaves like the [C]%[/C] operator. Since my guess is that that function is faster, I want to test it too. 
However I also want to make sure that the implementation is correct, so I have some test cases set up. And to be sure that the implementation handles negative numbers correctly, I was looking for known cases, or at least trying to find out whether it can actually happen. Does it make more sense now? |
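For what it's worth, all three variants listed above compute the same residue class at every step, so each yields a correct LL test. Here is a self-contained check using the second variant (the [C]subtract(2)[/C] shorthand is spelled out as [C]BigInteger.valueOf(2)[/C], and the exponents are only illustrative):

```java
import java.math.BigInteger;

public class LLVariants {
    // LL test using the second variant: reduce after the square,
    // but not after the subtraction, then normalize once at the end.
    static boolean isMersennePrime(int p) {
        BigInteger mp = BigInteger.ONE.shiftLeft(p).subtract(BigInteger.ONE);
        BigInteger s = BigInteger.valueOf(4);
        for (int i = 0; i < p - 2; i++) {
            s = s.multiply(s).mod(mp).subtract(BigInteger.valueOf(2));
        }
        // Compare residue classes, so a final value of -1 or -2 is handled.
        return s.mod(mp).signum() == 0;
    }

    public static void main(String[] args) {
        System.out.println(isMersennePrime(13)); // true  (M_13 = 8191 is prime)
        System.out.println(isMersennePrime(11)); // false (M_11 = 23 * 89)
    }
}
```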
You are using Java, and you feel the need to optimise it! That is dumb. You don't optimise Java, you rewrite it in some other language, like assembly, and then optimise once you have it working.
|
[QUOTE=retina;539931]You are using Java, and you feel the need to optimise it! That is dumb. You don't optimise Java, you rewrite it in some other language, like assembly, and then optimise once you have it working.[/QUOTE]
I never claimed it was a good choice. My concerns about negative numbers are independent of that, however, because calculating the modulus like I described is fairly common in computer science. And I've seen my fair share of big int libraries that calculate their mod like that. And yes, there is a need: between the best and worst methods I've tested so far, the fastest is roughly twice as fast as the slowest. |
Prime95, LLR64, and even my own implementation use the gwnum library for the core multiplication/subtraction/modulus, not the builtin %, which works only on native ints. Before I wrote my final implementation I created a demo in Python, with GMP doing the heavy lifting. This solution was several times slower than the final compiled+gwnum solution. This is due to relatively slow loops in Python (an issue Java shouldn't have due to probably effective jit-ing of loops), but GMP itself was too slow for this exact problem. gwnum handles the -2 almost for free, and it's geared towards being very good with multiplication/products mod N. Also it transfers some info from one multiplication to the next (with some special calls) which also speeds it up. Naive solutions are going to be dead slow. gwnum also parallelizes the calculation, uses intrinsics and uses handwritten assembly for the FFT.
|
Have you tried implementing S mod Mp as (S & Mp) + (S >> p) ? This ought to be much faster than generic Java biginteger mod. Then you can add additional safety checks to your heart's content.
|
[QUOTE=BrainStone;539934]I never claimed it was a good choice.
My concerns about negative numbers are independent of that hower because calculating the modulus like I described is fairly common in computer science. And I've seen my fair share of big int libraries that calculate their mod like that. And yes there is a need. Because between the best and worst method I've tested so far, the fastest is roughly twice as fast as the slowest.[/QUOTE]But all your tests are with Java. So they mean nothing when you start to use a proper and useful implementation. You are wasting your time. |
[QUOTE=kuratkull;539935]Prime95, LLR64, and even my own implementation use the gwnum library for the core multiplication/subtraction/modulus, not the builtin %, which works only on native ints.[/QUOTE]
Of course. As I said, I've seen libraries that implement their [C]mod[/C] to behave like the [C]%[/C] operator would. [QUOTE=axn;539936]Have you tried implementing S mod Mp as (S & Mp) + (S >> p) ? This ought to be much faster than generic Java biginteger mod. Then you can add additional safety checks to your heart's content.[/QUOTE] No, I haven't. I'll certainly give this a try. [QUOTE=retina;539937]But all your tests are with Java. So they mean nothing when you start to use a proper and useful implementation. You are wasting your time.[/QUOTE] Whether I waste my time or not is still my call, isn't it? I do it because I want to familiarize myself with microbenchmarking in Java and because I enjoy playing around with numbers like that. I never claimed to be productive, however. Though keep in mind that there will always be legitimate reasons to calculate LL in pure Java. |
[QUOTE=axn;539936]Have you tried implementing S mod Mp as (S & Mp) + (S >> p) ? This ought to be much faster than generic Java biginteger mod. Then you can add additional safety checks to your heart's content.[/QUOTE]
That calculation does not seem to work. |
[QUOTE=BrainStone;539940]That calculation does not seem to work.[/QUOTE]
Can you post your code snippet? EDIT:- For some values, you might have to do a second reduction. |
[QUOTE=BrainStone;539930]R.D. Silverman, I have a fair understanding of modulus and equivalence classes.
And I do know that the iteration is [$]S \left( n + 1 \right) \equiv \left( {S \left( n \right)}^2 - 2 \right) mod M_p[/$]. The math here is pretty clear. However in terms of optimization we are in the realms of programming and computer science. And while they are inherently linked there are some differences. [/QUOTE] If your concern is actual computer implementation of a library then I must ask: Why post in the [b][i] math [/i][/b] subforum?? Your question is not math-related. There is a programming subforum. Use it instead. |
[QUOTE=axn;539946]Can you post your code snippet?
EDIT:- For some values, you might have to do a second reduction.[/QUOTE] [CODE] public static final BigInteger fastMod(BigInteger number, BigInteger candidate, int prime) { return number.and(candidate).add(number.shiftRight(prime)); } [/CODE] [C]number[/C] is the [$]S[/$] in the LL test, [C]candidate[/C] is [$]M_p[/$] and [C]prime[/C] is [$]p[/$]. |
[QUOTE=R.D. Silverman;539947]If your concern is actual computer implementation of a library then I must ask:
Why post in the [B][I] math [/I][/B] subforum?? Your question is not math-related. There is a programming subforum. Use it instead.[/QUOTE] Because my question is whether there's ever a case where [$]S(n) \equiv 1 \pmod{M_p}[/$]. Why I am asking is merely providing context. |
[QUOTE=retina;539920]BrainStone: It doesn't matter. When you do the subtraction, if you get an overflow then adjust it to within range, if you don't, then do nothing. I don't see the big deal, it might never overflow and then it costs you nothing. Then it will work for ALL values; rather than praying, hoping and guessing that it [i]might[/i] work.
[size=1]Premature optimisation?[/size][/QUOTE] This is supposed to be the [b]math[/b] subforum. :direction: |
[QUOTE=axn;539936]Have you tried implementing S mod Mp as (S & Mp) + (S >> p) ? This ought to be much faster than generic Java biginteger mod. Then you can add additional safety checks to your heart's content.[/QUOTE]
[C]bitand[/C] and [C]&[/C] behave differently in Pari/GP. Is there a bitwise AND for Java? |
[QUOTE=R.D. Silverman;539950]This is supposed to be the [b]math[/b] subforum.[/quote]
I am moving thread to "Programming". |
I'm perfectly happy to have the programming part of this discussion moved somewhere else.
Though I do believe the question whether [$]\exists n : S \left( n \right) \equiv 1 \pmod{M_p}[/$] belongs here. |
[QUOTE=BrainStone;539953]I'm perfectly happy to have the programming part of this discussion moved somewhere else.
Though I do believe the question whether [$]\exists n : S \left( n \right) \equiv 1 \pmod{M_p}[/$] belongs here.[/QUOTE] It does. And I answered. Look up Lagrange's Theorem. In GF(q^2) 4 is a generator of the full twisted group iff q is prime. Otherwise, 4 generates a subgroup of the full twisted group. |
[QUOTE=R.D. Silverman;539954]It does. And I answered. Look up Lagrange's Theorem. In GF(q^2) 4 is a generator
of the full twisted group iff q is prime. Otherwise, 4 generates a subgroup of the full twisted group.[/QUOTE] I will move it back :raman: and leave it up to another mod to pick apart the Java aspects. |
[QUOTE=paulunderwood;539956]I will move it back :raman: and leave it up to another mod to pick apart the Java aspects.[/QUOTE]
Don't worry. I'll open a new thread myself there soon. |
Continuation of: Can the subtraction in the Lucas-Lehmer test ever return a negative value?
The original [URL="https://www.mersenneforum.org/showthread.php?t=25373"]topic[/URL], which was about certain edge cases in the Lucas-Lehmer test, derailed fairly quickly, and I'd like to continue the programming discussion here.
So @axn, any idea what seems to be wrong with your code? And @paulunderwood: in Java, [C]&[/C] is the bitwise AND operator, and so is the [C]and[/C] method used here. |
[QUOTE=R.D. Silverman;539954]It does. And I answered. Look up Lagrange's Theorem. In GF(q^2) 4 is a generator
of the full twisted group iff q is prime. Otherwise, 4 generates a subgroup of the full twisted group.[/QUOTE] That is a fair bit above my knowledge. Could you elaborate? Also @bearnol, would you put your doubts about his statements into words here too? The programming discussion may continue here: [url]https://www.mersenneforum.org/showthread.php?p=539988[/url] |
[QUOTE=BrainStone;539989]That is a fair bit above my knowledge. Could you elaborate?
[/QUOTE] A full explanation would require quite a bit of prerequisite knowledge: at least a semester of group theory and another semester studying finite fields. Perhaps the best source for the latter is Lidl & Niederreiter's book on finite fields. |
[CODE]p=13;n=2^p-1;S=4;
for(k=3,p,S=S^2-2; S=bitand(S,n)+(S>>p);if(S>=n,S-=n); print([p,S])) [/CODE] Gives the required output: [CODE][13, 14] [13, 194] [13, 4870] [13, 3953] [13, 5970] [13, 1857] [13, 36] [13, 1294] [13, 3470] [13, 128] [13, 0] [/CODE] |
[QUOTE=paulunderwood;539995][CODE]if(S>=n,S-=n);[/CODE][/QUOTE]
That fixed it. Now the only thing left is benchmarking. Thank you |
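Translated back to the Java snippet posted earlier in the thread, paulunderwood's fix corresponds to one conditional subtraction after the fold (a sketch; it is valid as long as [C]number[/C] is below [$]M_p^2[/$], which always holds for LL squarings):

```java
import java.math.BigInteger;

public class FastMod {
    // x mod M_p for M_p = 2^p - 1, using x = (x & M_p) + (x >> p)
    // plus one conditional subtraction. For x < M_p^2 the folded
    // sum is below 2*M_p, so a single subtraction suffices.
    public static BigInteger fastMod(BigInteger number, BigInteger candidate, int prime) {
        BigInteger r = number.and(candidate).add(number.shiftRight(prime));
        return r.compareTo(candidate) >= 0 ? r.subtract(candidate) : r;
    }

    public static void main(String[] args) {
        BigInteger mp = BigInteger.valueOf(8191); // M_13
        // An LL-style input: 8190^2 - 2, i.e. (-1)^2 - 2 == -1 (mod M_13)
        BigInteger x = BigInteger.valueOf(8190).pow(2).subtract(BigInteger.valueOf(2));
        System.out.println(fastMod(x, mp, 13)); // 8190
        System.out.println(x.mod(mp));          // 8190 (agrees)
    }
}
```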
Threads merged and moved to programming.
There was no reason to split threads; the initial thread already gave all the answers, including from RDS, who was actually quite right. The fact that you don't like the answers is not a reason to clutter. You are using Java (terribly slow, no FFT multiplication, which translates to [B]thousands of times slower[/B] than a "normal" GMP or gwnum implementation, and its integers are limited in size to 40k bytes or so, therefore completely futile for testing any reasonable exponent), but at the same time you seem to be quite nitpicky about subtracting 2 before or after taking the modulus of the square. People already gave you a better way to avoid division, taking advantage of the binary form of the modulus, which would double the speed (not in Pari; bitand is terribly slow there), with or without subtracting 2, and you are still fixated on that subtracting 2. C'mon man, it is like racing a tractor against a Ferrari and trying to make your tractor faster by changing the position of the mirror to avoid "catching the wind", or throwing out the knob of the handbrake because "it is too heavy". :razz: Listen to these guys, they know what they are talking about. |
[QUOTE=LaurV;540019]Threads merged and moved to programming.
There was no reason to split threads, the initial thread already gave all the answers, including from RDS, who was actually quite right. The fact you don't like the answers is not a reason to clutter. [/QUOTE] Yes there was. This wasn't a question about programming but rather about certain edge cases. The programming part was merely providing context as to why I was interested in it. [QUOTE=LaurV;540019]You are using java (terrible slow, interpreted, not compiled, no FFT multiplication, which translates to [B]thousands of times slower[/B] than a "normal" gmp or gwnum implementation, and it is limited for integer size to 40k bytes or so, therefore completely futile to test any reasonable exponent)[/QUOTE] I fail to see the relevance. My goal is not to create a fast implementation. My goal is to create the fastest implementation under some restrictions. Which in this case are using the BigInteger library in Java. I never claimed that it would make sense to do so. I am doing it because I [I]enjoy[/I] playing with numbers. And also because I want to practise optimizing things in Java. Because that's part of my job. And all those things are irrelevant to the underlying math question. [QUOTE=LaurV;540019], but in the same time you seem to be quite nitpicking on subtracting 2 before or after taking the modular from the square. People gave you already a better way to avoid division taking advantage of the binary form of the modulus, which would double the speed (not in pari, bitand is terrible slow there), with or without subtracting 2, and you are still fixed on that subtracting 2. C'mon man, it is like racing a tractor against a ferrary and trying to make your tractor faster by changing the position of the mirror or something, to avoid "catching the wind", or throwing out the knob of the handbrake because "it is too heavy". :razz: Listen to these guys, they know what they are talking.[/QUOTE] In summary I wasn't aware of the fast mod. 
So I tried optimizing the small things, as those were the obvious things to try. And my obsession with -2 comes from the fact that I'd like to make sure that the implementations are 100% correct. Hence I'd like to make sure they pass edge cases. And [$]S(n) = 1[/$] would be such an edge case. But in turn let me ask you this: What is your (collective) obsession with essentially telling me I'm not allowed to (or shouldn't) do things I enjoy for the sake of doing them? I mean, think about it. The entire premise of this forum is based on a futile matter. Finding Mersenne primes has no merit to it other than creating utterly obscure knowledge that most likely will never be relevant. And why do people do it? Because they enjoy doing it or because they are curious. The exact same applies to what I am doing. Now you might say things like "But math is math. Every bit of knowledge is valuable. And one day it might be important". And to that I say "Rightfully so. And knowing which way of implementing LL with Java's BigInteger is the fastest is exactly the same." Or "While doing so, many important things have been discovered along the way". And my response would be that the same applies to my case, even if only I learn from doing it. And regarding R.D. Silverman's answer: I asked a yes-no question (does such a case exist). And the reply was a mathematical explanation that I don't understand. So while this may be an answer to you, it isn't to me. I can't tell if the answer means "Yes, such cases are possible" or "No, no such cases are possible". Though if I had to guess, it's a yes, and we haven't found any such cases so far. But I get it. Uneducated people are not welcome here, and neither are people who enjoy doing "futile" things for the sake of it. |
[QUOTE=BrainStone;540036]But I get it. Uneducated people are not welcome here and neither are people that enjoy doing "futile" things for the sake of it.[/QUOTE]
So you got criticized. Doesn't mean you or your ideas are unwelcome. But doesn't mean your ideas get a free pass either. You are free to choose which criticisms you take to heart and which you ignore. Anyway, do you want to know what's funny about your concerns with encountering negative numbers? It doesn't matter! Java's BigInteger uses signed values, and so even if you run into negative numbers, the squaring operation in the next iteration will still give you the correct value! Try it. One thing that bugs me though. You said that S.multiply(S).mod(M_p).subtract(2).mod(M_p) benchmarked a little bit faster than S.multiply(S).mod(M_p).subtract(2). Since the former does strictly more operations than the latter, I have to assume that perhaps your benchmarking is somehow flawed. Maybe they're close enough that you're measuring statistical variation, but I can't believe that the first one can ever be faster than the second. |
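axn's claim is straightforward to verify empirically: running the whole test with [C]remainder[/C] (which can go negative) and with [C]mod[/C] always lands in the same residue class, since the two differ only by a multiple of [$]M_p[/$]. A small sketch (exponents are illustrative):

```java
import java.math.BigInteger;

public class SignedDoesNotMatter {
    static BigInteger ll(int p, boolean useRemainder) {
        BigInteger mp = BigInteger.ONE.shiftLeft(p).subtract(BigInteger.ONE);
        BigInteger s = BigInteger.valueOf(4);
        for (int i = 0; i < p - 2; i++) {
            s = s.multiply(s).subtract(BigInteger.valueOf(2));
            // remainder() may return negative values; mod() never does.
            s = useRemainder ? s.remainder(mp) : s.mod(mp);
        }
        return s.mod(mp); // normalize once for comparison
    }

    public static void main(String[] args) {
        for (int p : new int[] {11, 13, 17, 19, 23}) {
            // The two variants agree on the final residue class every time.
            System.out.println(p + ": " + ll(p, true).equals(ll(p, false)));
        }
    }
}
```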
Doing some back-of-the-napkin math here, which might not be correct - I am not a mathematician, but a software engineer.
We have N=2^n-1. Let's assume n "tries" to get 1 as a result of the mod operation. The result can be 0..N.. So every single try has 1/N chance of being 1. (I am leaving out the fact that the result first has to reach N) So the chance of a candidate becoming 1 is roughly: 1/(2^n)*n This chance becomes minuscule very quickly, e.g. n=64 is circa 10^-18, n=128 is circa 10^-37. So it's practically impossible to hit that condition. Am I right? |
[QUOTE=axn;540038]So you got criticized. Doesn't mean you or your ideas are unwelcome. But doesn't mean your ideas get a free pass either. You are free to choose which criticisms you take to heart and which you ignore.[/QUOTE]
We're well past criticism. We're at the point where I quite frankly feel ridiculed. And most of the thread has been criticism of futile points and things not related to the question. There's a saying that if you don't have anything nice or constructive to say, don't say anything. And criticizing me for using an inefficient (in the grand scheme of things) method of doing things, when absolute efficiency isn't a concern of mine, is not constructive. And said point has been repeated numerous times already. [QUOTE=axn;540038]Anyway, do you want to know what's funny about your concerns with encountering negative numbers? It doesn't matter! Java's BigInteger uses signed values, and so even if you run into negative numbers, the squaring operation in the next iteration will still give you the correct value! Try it.[/QUOTE] At this point I agree. I know that it is a signed type. My initial concern was that hitting a negative value [i]might[/i] result in false negatives. Though since hitting a negative value will always result in a loop, there can't be any false negatives. And while I initially doubted that it could matter at all, I wasn't sure, so I wanted to have test cases that covered it. So I am aware that this discussion only has pure mathematical value left, which makes me disagree even more with the decision to merge the threads and move the result into Programming, where it is now severely misplaced. [QUOTE=axn;540038]One thing that bugs me though. You said that S.multiply(S).mod(M_p).subtract(2).mod(M_p) benchmarked a little bit faster than S.multiply(S).mod(M_p).subtract(2). Since the former does strictly more operations than the latter, I have to assume that perhaps your benchmarking is somehow flawed. Maybe they're close enough that you're measuring statistical variation, but I can't believe that the first one can ever be faster than the second.[/QUOTE] It's very well possible that that's the case. 
Though since it is Java, and Java handles memory in a strange way, it could be that through memory shenanigans it does weird stuff. |
[QUOTE=kuratkull;540039]Doing some back-of-the-napkin math here, which might not be correct - I am not a mathematician, but a software engineer.
We have N=2^n-1. Let's assume n "tries" to get 1 as a result of the mod operation. (I am leaving out the fact that the result first has to reach N) The result can be 0..N.. So every single try has 1/N chance of being 1. [/QUOTE] You are not listening. And despite your claim, it is clear that you do not understand modular arithmetic. In particular you did not listen to my discussion about representations. Arithmetic mod N can never yield a result that is in [0,.....N] because N is not part of the representation. It can be [0,....N-1]. Because, trivially N mod N equals 0. When working mod N the expression [0,....N] is not a set. It is a multi-set. Secondly, your assertion that "The result can be 0..N.. So every single try has 1/N chance of being 1." is also wrong. There are [b] N+1 [/b] elements in 0,....N , not N. Your statement also assumes a uniform pdf which assuredly it is [b]NOT[/b]. You are a software engineer trying to implement an algorithm whose correct implementation requires knowing a lot of math. Multiple people have told you that the "-2" is irrelevant to the run-time. Yet you continue to argue with them. [QUOTE] So the chance of a candidate becoming 1 is roughly: 1/(2^n)*n [/QUOTE] I will assume that by the word "becoming" you mean "becoming before reaching the last iteration". Your assertion is totally wrong. If N = 2^p-1 is prime the probability is zero. If N is composite the probability depends on the factors of N+1 and the structure of the subgroups of the twisted group, i.e. it depends on the number of elements whose order is r for r less than the full order. The values of r depend on the factors of N+1. [QUOTE] This chance becomes miniscule very quickly, eg. n=64 is circa 10^-18. n=128 is circa 10^-37 So it's practically impossible to hit that condition. Am I right?[/QUOTE] You are not. It is totally impossible to hit it if N is prime. If N is composite see above. 
You state that you are not a mathematician, yet you make mathematical pronouncements when you don't know the math. I call that arrogant. Stop making pronouncements and start listening. You will need to learn some modern algebra, some number theory, and some statistical theory. |
[QUOTE=kuratkull;540039]Doing some back-of-the-napkin math here, which might not be correct - I am not a mathematician, but a software engineer.
We have N=2^n-1. Let's assume n "tries" to get 1 as a result of the mod operation. The result can be 0..N.. So every single try has 1/N chance of being 1. (I am leaving out the fact that the result first has to reach N) So the chance of a candidate becoming 1 is roughly: 1/(2^n)*n This chance becomes minuscule very quickly, e.g. n=64 is circa 10^-18, n=128 is circa 10^-37. So it's practically impossible to hit that condition. Am I right?[/QUOTE] I fully agree with your reasoning (turns out that was wrong; that's why I like to ask knowledgeable people ;) ). But practically impossible doesn't mean impossible. I mean, Mersenne primes become increasingly less likely, but there most likely are more, and it's certainly beneficial to consider rare edge cases. Though as pointed out, programmatically speaking it's a non-concern. So now I'm just genuinely curious as to whether or not those cases might exist. |
[QUOTE=R.D. Silverman;540041]If N is
composite the probability depends on the factors of N+1 and the structure of the subgroups of the twisted group, i.e. it depends on the number of elements whose order is r for r less than the full order. The values of r depend on the factors of N+1.[/QUOTE] Do I understand correctly that there are cases that consequently allow it to become 1? |
[QUOTE=BrainStone;540040]We're well past critizism. We're at the point that I quite frankly feel ridiculed.[/QUOTE]Chill. Everyone here who posts more than few times gets ridiculed eventually. And I mean everyone.
Learn from it by not repeating your error / naivety / whatever and any cases which can be easily deduced from the reaction to it. |
[QUOTE=BrainStone;540040]We're well past critizism. We're at the point that I quite frankly feel ridiculed. And most of the thread has been critizism of futile points and things not related to the question.
There's a saying that if you don't have anything nice or constructive to say, don't say anything. And critizing me for using an inefficient (in the grand scheme of things) method of doing things when absolute efficiency isn't a concern of mine is not constructive. And said point has been repeated numerous times already. [/QUOTE] I gave a complete (expressed in elementary terms) mathematical explanation. A mature person who did not understand it, yet still desiring to do this work would take the time to go learn the math. Yes, it takes a [b]lot[/b] of time. One can't correctly implement a mathematical algorithm without understanding the math. The first question you should have asked was "What do I need to learn in order to correctly implement this algorithm". [QUOTE] At this point I agree. I know that it is a signed type. My initial concerns were that hitting a negative value [i]might[/i] result in false negatives. [/QUOTE] Yet, you did not ask that question. And you personally were not being criticized. Your [i]statements[/i] were. If you can't stand to have your [b]discussion[/b] criticized then you need to find another venue. |
R.D. Silverman - I think you mixed me up with the OP. I am not trying to implement anything, I already did: [url]https://github.com/dkull/rpt[/url]
Most of the stuff you commented on was left out/ignored for brevity. It bears no significant weight on the magnitude of the final calculation. I think starting with a disclaimer and ending with a question about correctness leaves little room for arrogance. |
[QUOTE=xilman;540044]Chill. Everyone here who posts more than a few times gets ridiculed eventually. And I mean everyone.
Learn from it by not repeating your error / naivety / whatever and any cases which can be easily deduced from the reaction to it.[/QUOTE] What error did I do? I was upfront about the fact that I know what I'm doing isn't productive after my choice of programming language and library was (correctly) criticized the first time. I then provided my motivation and justification for using it, which prompted unwarranted ridicule. It's like saying I want to race with a tractor (to pick up on a previous analogy), then being criticized because that is a bad choice (which is correct). I proceeded to explain that I enjoy racing with tractors and that I don't want to do anything but race tractors. Which then resulted in people continuing to criticize my choice of vehicle, even though it was clear that I was fully aware of the downsides and was choosing it for very different reasons. That's just being rude and inappropriate. |
[QUOTE=BrainStone;540043]Do I understand correctly that there are cases that consequently allow it to become 1?[/QUOTE]
Please. This is a math discussion. Learn to state exactly what you mean. When you say "consequently", the obvious reply is "consequently to what?". State mathematically what condition you mean. Math is not "casual English". Do you not know what "order of an element [in a group]" means? It is a fundamental concept that you need to learn. And you also need to learn Lagrange's Theorem if you want to continue further. |
[QUOTE=kuratkull;540046]R.D. Silverman - I think you mixed me up with the OP. I am not trying to implement anything, I already did: [url]https://github.com/dkull/rpt[/url]
Most of the stuff you commented on was left out/ignored for brevity. It bears no significant weight on the magnitude of the final calculation. I think starting with a disclaimer and ending with a question about correctness leaves little room for arrogance.[/QUOTE] Mea culpa if I confused you with someone else. My apologies. Yet you made some clearly incorrect mathematical statements. |
In the spirit of this thread I also want to write an HLL version of the LL test. Why not? Following is some awesome Python code. And as a bonus it also does a PRP test.[code]for p in range(3, 1280, 2):
    mp, s = pow(2, p) - 1, 4
    for i in range(2, p):
        s = (s * s - 2) % mp
    if s == 0:
        print("2^{}-1 is prime".format(p))
    if pow(3, mp + 1, mp) == 9:
        print("2^{}-1 is PRP".format(p))[/code]Can someone help me to optimise it? I need it to run really really fast. |
[QUOTE=retina;540056]In the spirit of this thread I also want to write an HLL version of the LL test. Why not? Following is some awesome Python code. And as a bonus it also does a PRP test.[code]for p in range(3, 1280, 2):
    mp, s = pow(2, p) - 1, 4
    for i in range(2, p):
        s = (s * s - 2) % mp
    if s == 0:
        print("2^{}-1 is prime".format(p))
    if pow(3, mp + 1, mp) == 9:
        print("2^{}-1 is PRP".format(p))[/code]Can someone help me to optimise it? I need it to run really really fast.[/QUOTE] :lol: print("3 5 7 13 17 19 31 61 89 107 127 521 607 1279") |
[QUOTE=R.D. Silverman;540050]Please. This is a math discussion. Learn to state exactly what you mean.
When you say "consequently", the obvious reply is "consequently to what?". State mathematically what condition you mean. Math is not "casual English". Do you not know what "order of an element [in a group]" means? It is a fundamental concept that you need to learn. And you also need to learn Lagrange's Theorem if you want to continue further.[/QUOTE] Is it possible that [$]\exists S \left( n \right) \equiv 1 \mod M_p[/$], where [$]S \left( 1 \right) = 4 \\ S \left( n + 1 \right) = {S \left( n \right)}^2 - 2 \\ M_p = 2^p - 1 \\ n < p, n \in \mathbb{N}, p \in \mathbb{P}[/$] |
[QUOTE=BrainStone;540060]Is it possible that [$]\exists S \left( n \right) \equiv 1 \mod M_p[/$], where
[$]S \left( 1 \right) = 4 \\ S \left( n + 1 \right) = {S \left( n \right)}^2 - 2 \\ M_p = 2^p - 1 \\ n < p, n \in \mathbb{N}, p \in \mathbb{P}[/$][/QUOTE] M_2 = 3: S_1 = 4 == 1 mod 3. For p > 2, S_k^2 - 2 == 1 mod M_p would imply S_k^2 == 3 mod M_p, but 3 is a QNR over M_p for all prime p > 2. Proof? |
[QUOTE=BrainStone;540048]What error did I do?[/QUOTE]
One error you have just made is to ignore the second and third options in my list. To jog your memory they were "naivety" and "whatever". Naivety is both inevitable and forgivable[SUP]*[/SUP]; everyone is naive in any specific field until they learn not to be. Your naivety (in my opinion, others may differ) is in assuming that mathematicians invariably tolerate sloppy language when used to describe a problem which can be described precisely. Inherent in that sloppiness is a failure to distinguish clearly between the result of operators in programming languages and the mathematical concept of an equivalence class. "Whatever" is harder to characterize. One thing in your favour is your adherence to one of the laws of optimization: First get it right, then get it fast. Perhaps if you had stated this up-front your post might have had a different outcome. [QUOTE=BrainStone;540048]That's just being rude and inappropriate.[/QUOTE]Very likely so. Not everyone is polite. --- :paul: * I've been getting all sorts of :poop: since I joined Twitter because my Asperger's makes it difficult for me to read sub-texts in tweets from neurotypicals. I have been trying hard to learn and I believe that I'm improving but it still trips me up all too often. |
[QUOTE=BrainStone;540060]Is it possible that [$]\exists S \left( n \right) \equiv 1 \mod M_p[/$], where
[$]S \left( 1 \right) = 4 \\ S \left( n + 1 \right) = {S \left( n \right)}^2 - 2 \\ M_p = 2^p - 1 \\ n < p, n \in \mathbb{N}, p \in \mathbb{P}[/$][/QUOTE] Not if M_p is prime. If M_p is composite then 4 can generate a subgroup. |