Can the subtraction in the Lucas-Lehmer test ever return a negative value?
I am currently working on this lovely project: [URL]https://github.com/BrainStone/MersennePrimeBenchmark[/URL]
And naturally I'd like to make sure that my LL implementations are correct. I've been wondering whether the subtraction step in the LL test can ever return -1 or -2. What I'm talking about specifically is that, as a potential optimization of the calculation, I take the mod after the multiplication but not after the subtraction (since it shouldn't matter how many times I reduce or not). So essentially like this: [$]S\left(k+1\right) \equiv \left(S\left(k\right)^2 \bmod M_p\right) - 2[/$] Why am I interested in those cases? Simply because I want to test for edge cases, as I will use the [C]remainder[/C] method, which is like mod but can return negative values, and I want to make sure that those implementations also pass these edge cases. My preliminary search didn't turn up any such cases. So my question is whether that can even happen at all. And if it can happen, are there known cases for both Mersenne primes and Mersenne numbers that aren't prime (I don't want false positives, not false negatives)? 
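For reference, here is a minimal Java sketch of the iteration as described (class and method names are my own, not from the linked repo): reduce only after the squaring, subtract 2 without reducing again, and fold the value back into range if the subtraction ever goes negative, which would require [$]S^2 \bmod M_p[/$] to be 0 or 1.

```java
import java.math.BigInteger;

public class LLNegativeCheck {
    private static final BigInteger TWO = BigInteger.valueOf(2);

    // Lucas-Lehmer test for M_p = 2^p - 1 (p an odd prime).
    static boolean lucasLehmer(int p) {
        BigInteger mp = BigInteger.ONE.shiftLeft(p).subtract(BigInteger.ONE);
        BigInteger s = BigInteger.valueOf(4);
        for (int i = 0; i < p - 2; i++) {
            // Reduce after the squaring, then subtract without reducing again.
            s = s.multiply(s).mod(mp).subtract(TWO);
            if (s.signum() < 0) {
                // The edge case the thread asks about: s is -1 or -2 here.
                s = s.add(mp); // fold back into [0, M_p - 1]
            }
        }
        return s.signum() == 0;
    }

    public static void main(String[] args) {
        System.out.println(lucasLehmer(7));   // 2^7 - 1 = 127 is prime
        System.out.println(lucasLehmer(11));  // 2^11 - 1 = 2047 = 23 * 89
    }
}
```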
1 is a quadratic residue mod p, so I'd say yes, you might see a value of 1 during the LL test. But it goes into loop 1,1,1,1 etc.

[QUOTE=retina;539914]1 is a quadratic residue mod p, so I'd say yes, you might see a value of 1 during the LL test. But it goes into loop 1,1,1,1 etc.[/QUOTE]
I'm performing a LL test. So there's always the subtraction before the next squaring. 
[QUOTE=BrainStone;539915]I'm performing a LL test. So there's always the subtraction before the next squaring.[/QUOTE]Yes, I know.
1 - 2 = -1. (-1)^2 = 1. 
I feel stupid now...
In any case I'm wondering whether that value of 1 (or -1) can ever be reached in the first place, as a search with prime exponents up to around 7000 didn't turn up any such cases. 
[QUOTE=BrainStone;539913]I am currently working on this lovely project: [URL]https://github.com/BrainStone/MersennePrimeBenchmark[/URL]
And naturally I'd like to make sure that my LL implementations are correct. Now I've been worrying about whether the subtraction part in the LL test can ever return 1 or 2. What I'm talking about specifically is that for potential optimizations of the calculation, I take the mod after the multiplication but not after the subtraction (as that shouldn't really matter how many times I mod or not). [/QUOTE] Huh? This last statement makes no sense. One computes the remainder once per iteration. More than that only slows the computation [dramatically!] And if using a DWT one does not explicitly compute a remainder at each iteration. [QUOTE] So essentially like that: [$]S\left(k+1\right) \equiv \left(S\left(k\right)^2 mod M_p\right)  2[/$] Why am I interested in those cases? Simply because I want to test for edge cases as I will use the [C]remainder[/C] method which is like mod but can return negative values. And I just want to make sure that those implementations also pass these edge cases. From my preliminary search it didn't seem like there were any such cases. So my question is if that could even happen at all. And if it can happen, are there known cases for both Mersenne Primes and Mersenne Numbers that aren't primes (don't want false positives, not false negatives)?[/QUOTE] May I suggest that before going any further you need to learn how modular arithmetic works and also get a correct statement of how the LL test works?? You also need to study modular multiplication in detail. No offense, but you should study the underlying arithmetic before writing code. You seem to be confused. (1) The iteration is S_{n+1} = [(S_n)^2  2] mod M_p. (2) Do you understand that whether negative numbers exist when doing modular arithmetic depends on the representation that you choose? Arithmetic mod N can use a representation: [0, N1] or it can use a representation [(N1)/2, (N1)/2]] (for odd N). A number such as 2 mod N can use the 2 representation or it can use N2. 
You should also learn that numbers mod N fall into equivalence classes. Most people erroneously believe that the expression (c = a mod N) means c equals the remainder when a is divided by N. But this is not the correct mathematical definition. The correct definition is that (c = a mod N) only means that (c - a) is divisible by N. Numbers mod N fall into [i]equivalence classes[/i]. Thus, (c = a mod N) is not an equation. It is an equivalence relation.

(3) When performing an LL test you should also understand that you are actually computing an exponentiation in the twisted subgroup of the finite field GF(q^2) where q = M_p. It is quite similar to doing ordinary modular exponentiation in the unit group of Z/NZ. You are simply working in a different group. The test is really analogous to validating the order of an element in a unit group. Whereas an ordinary PRP test checks that the order is q-1 in the unit group, the LL test checks that the order is q+1 in the twisted subgroup. You should also study Lagrange's theorem. [Understanding this last part is necessary only if you want to learn how/why the test works.] 
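The point about representations can be seen directly in Java: the [C]%[/C] operator picks the representative whose sign follows the dividend, while [C]Math.floorMod[/C] picks the canonical one in [0, N-1]. Both name the same equivalence class (a small illustration, not from anyone's repo):

```java
public class Residues {
    public static void main(String[] args) {
        int n = 127;
        // Two representatives of the same residue class mod 127:
        System.out.println(-2 % n);               // -2 (sign follows the dividend)
        System.out.println(Math.floorMod(-2, n)); // 125, i.e. N - 2 (canonical)
        // Same class because their difference is divisible by N:
        System.out.println((-2 - 125) % n == 0);  // true
    }
}
```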
BrainStone: It doesn't matter. When you do the subtraction, if you get an overflow then adjust it to within range, if you don't, then do nothing. I don't see the big deal, it might never overflow and then it costs you nothing. Then it will work for ALL values; rather than praying, hoping and guessing that it [i]might[/i] work.
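That suggestion amounts to a single sign test and a conditional add per iteration (a sketch with hypothetical names, not code from the project); when no underflow occurs the branch costs essentially nothing:

```java
import java.math.BigInteger;

public class LLStep {
    private static final BigInteger TWO = BigInteger.valueOf(2);

    // One LL iteration: reduce, subtract, fold back into range only if needed.
    static BigInteger llStep(BigInteger s, BigInteger mp) {
        BigInteger t = s.multiply(s).mod(mp).subtract(TWO);
        return t.signum() < 0 ? t.add(mp) : t;
    }

    public static void main(String[] args) {
        BigInteger mp = BigInteger.valueOf(127);               // M_7
        System.out.println(llStep(BigInteger.valueOf(4), mp)); // 14
        System.out.println(llStep(BigInteger.ONE, mp));        // 126 (folded from -1)
    }
}
```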
[size=1]Premature optimisation?[/size] 
This interests me too because I implemented the LLR test (for Riesel primes, but it works on Mersenne primes too, just set k=1): [URL]https://github.com/dkull/rpt/[/URL] The test is very similar to the LL test. Something is gnawing at me and saying that it can't become one... though I am really interested in someone's educated opinion.
You probably won't find a case by brute force, because the modulus gets really large really fast, and thus the chance (assuming basically random behaviour, and assuming it's even possible) of landing on 1 after the mod operation becomes practically nonexistent.

EDIT: I wouldn't be the least surprised if it actually can become one. 
R.D. Silverman, I have a fair understanding of modular arithmetic and equivalence classes.
And I do know that the iteration is [$]S \left( n + 1 \right) \equiv \left( {S \left( n \right)}^2 - 2 \right) \bmod M_p[/$]. The math here is pretty clear. However, in terms of optimization we are in the realm of programming and computer science, and while the two are inherently linked, there are some differences.

You mentioned that I should only do one modulus per iteration as anything more costs a lot of performance. Before my tests I would have agreed. But what you think is fastest is not always the fastest, and that's why I want to benchmark it. I'm fully aware that Java's BigInteger class is terribly slow. I'm using it regardless because it is built in and I'm more or less just playing around. Another thing about that class is that all values are immutable, so every arithmetic operation creates a new instance (which makes the 2-mods-vs-1-mod question sound even more clear-cut in favor of modding only once, right?). So among others I tried these 3 iteration steps: [LIST][*][C]S.multiply(S).subtract(TWO).mod(M_p)[/C][*][C]S.multiply(S).mod(M_p).subtract(TWO)[/C][*][C]S.multiply(S).mod(M_p).subtract(TWO).mod(M_p)[/C][/LIST]My current benchmarks indicate that the last is the fastest, the second only marginally slower, and the first noticeably slower than the other two. Which was quite surprising. But I digress.

Now onto the real issue and why I brought up negative numbers in the first place. Many C-like languages have a modulus/remainder operator: [C]%[/C]. It works great, but it has one flaw: [C]-1 % 10 == -1[/C]. Put in words: if the left operand is negative, the result is negative too. Forcing a positive result is computationally more expensive. In the code above, the [C]mod[/C] method behaves like the mathematical modulus and only ever returns non-negative values. But there is also the [C]remainder[/C] method, which behaves like the [C]%[/C] operator. Since my guess is that that method is faster, I want to test it too. 
However, I also want to make sure that the implementation is correct, so I have some test cases set up. And to be sure that the implementations handle negative numbers correctly, I was looking for either known cases or wanted to find out whether it can actually happen. Does it make more sense now? 
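One way to test a [C]remainder[/C]-based variant for correctness (my own sketch, not code from the repo) is to pair it with a conditional fixup and check that it agrees with the [C]mod[/C]-based step:

```java
import java.math.BigInteger;

public class VariantCheck {
    private static final BigInteger TWO = BigInteger.valueOf(2);

    // Reference step using mod(), which always returns a non-negative value.
    static BigInteger stepMod(BigInteger s, BigInteger mp) {
        return s.multiply(s).mod(mp).subtract(TWO).mod(mp);
    }

    // Candidate step: remainder() can go negative after the subtraction,
    // so one sign test restores the canonical range [0, M_p - 1].
    static BigInteger stepRemainder(BigInteger s, BigInteger mp) {
        BigInteger t = s.multiply(s).remainder(mp).subtract(TWO);
        return t.signum() < 0 ? t.add(mp) : t;
    }

    public static void main(String[] args) {
        BigInteger mp = BigInteger.ONE.shiftLeft(61).subtract(BigInteger.ONE); // M_61
        BigInteger a = BigInteger.valueOf(4), b = a;
        for (int i = 0; i < 1000; i++) {
            a = stepMod(a, mp);
            b = stepRemainder(b, mp);
        }
        System.out.println(a.equals(b)); // the two variants agree at every step
    }
}
```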
You are using Java, and you feel the need to optimise it! That is dumb. You don't optimise Java; you rewrite it in some other language, like assembly, and then optimise once you have it working.

[QUOTE=retina;539931]You are using Java, and you feel the need to optimise it! That is dumb. You don't optimise Java; you rewrite it in some other language, like assembly, and then optimise once you have it working.[/QUOTE]
I never claimed it was a good choice. My concerns about negative numbers are independent of that, however, because calculating the modulus like I described is fairly common in computer science, and I've seen my fair share of big-int libraries that calculate their mod like that. And yes, there is a need: between the best and worst methods I've tested so far, the fastest is roughly twice as fast as the slowest. 