#1
"Bill Staffen"
Jan 2013
Pittsburgh, PA, USA
2³·53 Posts
But I really did search the forums before asking, and read up on the relevant topics on the Mersenne wiki and Wikipedia.
I am trying to figure out why P-1 factorization is used over ECM for Mersenne numbers. It seems like ECM used to be a thing, and now it's not, and I don't know why.
#2
Just call me Henry
"David"
Sep 2007
Liverpool (GMT/BST)
2⁵×11×17 Posts
All the factors q of 2^p-1 (for prime p) are of the form q = 2*k*p+1, so q-1 is always divisible by p. This means the P-1 method gets the factor p for free - we simply include p in the stage 1 exponent - which makes it much easier to find factors with P-1 than with ECM for this form of number. I imagine that one day, when the exponents get large enough, some ECM will be done in addition to P-1, but we probably have a long way to go before that happens.
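To make the "free p" concrete, here is a minimal P-1 stage 1 sketch in Python. The exponent p = 251 and the bound B1 = 20 are illustrative choices only, not values from this discussion: 503 = 2*251+1 divides 2^251-1, and since 503-1 = 2*251, multiplying the known p into the stage 1 exponent reveals the factor even with a tiny B1, whereas without it B1 would have to reach 251 before 503 could turn up.

```python
from math import gcd
from sympy import primerange

def pm1_stage1(N, B1, extra=1):
    """Toy P-1 stage 1: raise 3 to every prime power <= B1 (and to `extra`),
    then take gcd(result - 1, N).  `extra` lets us throw in the factor p
    that every Mersenne factor q = 2*k*p + 1 contributes to q - 1 for free."""
    a = pow(3, extra, N)
    for q in primerange(2, B1 + 1):
        qe = q
        while qe * q <= B1:      # largest power of q not exceeding B1
            qe *= q
        a = pow(a, qe, N)
    return gcd(a - 1, N)

p = 251                          # illustrative prime exponent
N = 2**p - 1

# With the free factor p thrown in, B1 = 20 already suffices, because
# 503 - 1 = 2 * 251 needs nothing beyond the prime 2 and the free p.
print(pm1_stage1(N, B1=20, extra=p))   # nontrivial factor, divisible by 503

# Without it, 503 stays out of reach until B1 reaches 251.
print(pm1_stage1(N, B1=20))            # almost certainly prints 1
```

For GIMPS-sized exponents the effect is far more dramatic, since p itself is in the tens of millions, far above any practical B1.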
#3
"Bill Staffen"
Jan 2013
Pittsburgh, PA, USA
2³·53 Posts
Wow, that's crazy. I will work on that for a while, thanks much.
#4
Jun 2003
14EF₁₆ Posts
Never gonna happen. As the exponent grows larger, the TF level also goes higher, and thus the useful ECM level goes higher too. It'll never break even. The handicap (not being able to leverage the p in 2kp+1) is too large (and getting larger).
#5
Jun 2003
491₁₆ Posts

Quote:
ECM is used on Mersenne numbers, but usually only on small exponents, below about 10,000,000. See this page. Those exponents are already known to be composite, and in some cases a factor is known; finding (additional) factors, not primality testing, is the goal there.

For larger exponents, finding factors is nice, but the main goal is finding primes. We search for factors in the hope that a small investment of computer time will save us from having to conduct two expensive LL tests. The law of diminishing returns applies to factor-finding - the deeper we search, the fewer factors each extra GHz-day is likely to find. Eventually we reach the break-even point, at which the expected cost of finding a factor equals the cost of doing the LL tests, and at that point we stop. For ECM, that break-even point is effectively zero - it is never worth running at all on these exponents. Why? And why not for P-1?

First, P-1 is overall a faster and less memory-hungry algorithm. Since P-1 will happily eat up all the memory I can throw at it - and I have 16GB - you can imagine how voracious ECM is. Second, as henryzz remarks above, P-1 can make use of the 2kp+1 form of Mersenne factors, which means that, run to the same bounds, it will find factors about log2(p) bits larger than ECM would. Finally, after we've already done TF and P-1, there just aren't enough small factors left to find.

If P-1 is so much better than ECM, you may be wondering why we use ECM at all. The answer is that ECM can be run again and again with the same bounds on the same number and still have a chance of finding a new factor, while P-1 can usefully be run only once.
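To put rough numbers on the "about log2(p) bits" point, here is a tiny sketch; the 26-bit exponent and 80-bit factor are illustrative values only. For a factor q of 2^p-1, P-1 needs only the cofactor k in q = 2kp+1 to be smooth, whereas ECM needs essentially the whole group order near q to be smooth, apart from a guaranteed factor of 12 or 16.

```python
def smooth_part_bits(exponent_bits, factor_bits):
    """Rough size in bits of what must be smooth for each method, for a
    factor q of 2^p - 1 where the exponent p is about `exponent_bits` bits."""
    pm1 = factor_bits - (exponent_bits + 1)  # q - 1 = 2*k*p, so only k must be smooth
    ecm = factor_bits - 4                    # group order divisible by 12 or 16 (~4 bits)
    return pm1, ecm

# Illustrative example: a 26-bit exponent and an 80-bit factor.
print(smooth_part_bits(26, 80))   # -> (53, 76): P-1 has 23 fewer bits that must be smooth
```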
#6
"Bill Staffen"
Jan 2013
Pittsburgh, PA, USA
2³×53 Posts
Thank you, that's very helpful. I would not have guessed that p-1 might be the 'less memory hungry' version of anything. :)
Last fiddled with by Aramis Wyler on 2013-04-13 at 18:01 |
#7
"Phil"
Sep 2002
Tracktown, U.S.A.
1119₁₀ Posts
At one level, P-1 and ECM are very similar: both methods exploit group operations in the hope of finding a factor if the group order is smooth. Given a prime factor F, the group order in P-1 is fixed at F-1, and the group operation is multiplication mod F. In ECM, many group orders are possible depending upon which curve is chosen, but the group orders all lie in the range from F+1-2*sqrt(F) to F+1+2*sqrt(F), roughly the same size as F-1, so we would expect the a priori probability of any of these group orders being smooth to be about the same. However, two considerations make P-1 the more efficient method for eliminating LL candidates:
1) We know that the P-1 group order is divisible by 2p, where p is the Mersenne exponent in question. ECM curves can be chosen so that the group order is divisible by either 12 or 16. Suppose trial factoring has already been done to 73 or so bits on a Mersenne number with a 26-bit exponent, and suppose that this number has a factor which is a little bit larger, say 80 bits. The group orders are then also 80-bit numbers, but since the P-1 group order is divisible by 2p, we find the factor when the cofactor of 80 - 27 = 53 bits is smooth to the factoring bounds. ECM only finds the factor when the cofactor of 80 - 4 = 76 bits is smooth, and the probability that a 76-bit cofactor is smooth is considerably less than for a 53-bit cofactor. Of course, if we really want to find that particular factor and P-1 does not find it, we can run a number of ECM curves on the number, but in the context of GIMPS it is more efficient to move on and run P-1 on other numbers, eliminating more LL candidates in the long run.

2) The other advantage of P-1 is that the group operation, multiplication mod F, is straightforward and easy to implement. (Of course, we don't know F, so we do the multiplications mod 2^p-1, but this also applies to ECM.) The group operation in ECM is more complicated and requires roughly 6 times as much computation per step as a step of P-1, so one ECM curve to given factoring bounds should take about 6 times as long as a P-1 run to the same bounds.

I have oversimplified some details, but I hope this is sufficient to get across the general reason why ECM on LL candidates just is not as efficient as doing P-1, and doing P-1 to higher bounds will eliminate more candidates in the long run than ECM.

Last fiddled with by philmoore on 2013-04-13 at 18:24
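To illustrate the group-order point, here is a small brute-force sketch in Python; the prime F = 10007 is an arbitrary small value chosen only so that counting points by brute force stays fast. The P-1 group order is the single fixed value F-1, while each random curve y^2 = x^3 + ax + b gives a different order somewhere in the Hasse interval, which is why every new ECM curve is a fresh chance at a smooth order.

```python
import random
from math import isqrt

F = 10007  # arbitrary small prime, so brute-force point counting is fast

def legendre(n, p):
    """Legendre symbol (n/p) for an odd prime p: returns 1, -1, or 0."""
    n %= p
    if n == 0:
        return 0
    return 1 if pow(n, (p - 1) // 2, p) == 1 else -1

def curve_order(a, b, p):
    """Number of points on y^2 = x^3 + a*x + b over GF(p), including infinity."""
    return p + 1 + sum(legendre(x * x * x + a * x + b, p) for x in range(p))

print("P-1 group order (fixed):", F - 1)
print("Hasse interval (approx.):", F + 1 - 2 * isqrt(F), "to", F + 1 + 2 * isqrt(F))

random.seed(1)
for _ in range(5):
    a, b = random.randrange(F), random.randrange(F)
    if (4 * a**3 + 27 * b**2) % F == 0:
        continue                     # skip the rare singular curve
    print("curve order:", curve_order(a, b, F))
```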
#8
"Bill Staffen"
Jan 2013
Pittsburgh, PA, USA
2³×53 Posts
This may not make sense, but knowing which candidate factors are going to be hit by P-1 stage 1, would it be a significant gain to sieve those numbers out of the TF test?
#9
"Phil"
Sep 2002
Tracktown, U.S.A.
2137₈ Posts
Ideally, all numbers should be hit by the P-1 test before LL testing. But it makes sense to TF them to a certain level before doing P-1. TF finds some factors that P-1 will miss, and conversely, P-1 will find some factors that TF will miss.
#10
Jun 2003
7×167 Posts
To do that would effectively require the trial factorisation of q-1 for every candidate factor q of Mp. While this can indeed be done quickly using a sieve (sieving for smooth values is essentially how the quadratic sieve factoring algorithm works), it would still take far more time, and eliminate so few candidates, that the cost would exceed the benefit.
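For what it's worth, such a sieve is easy to write; below is a minimal sketch with arbitrary toy values for the exponent, the range of k, and the smoothness bound. Because 2 and p come for free in q - 1 = 2*k*p, it boils down to sieving k for smoothness; the conclusion above still stands, since TF examines far more candidates than any such sieve could cheaply cover.

```python
def smooth_q_sieve(p, k_max, B1):
    """For candidate factors q = 2*k*p + 1 of 2^p - 1 with 1 <= k <= k_max,
    return those whose q - 1 = 2*k*p is B1-smooth.  Prime-power subtleties
    and whether q actually divides 2^p - 1 are ignored."""
    residual = list(range(k_max + 1))            # residual[k] = unfactored part of k
    for n in range(2, B1 + 1):
        if residual[n] != n:                     # n already reduced => composite
            continue
        for k in range(n, k_max + 1, n):         # n is prime: divide it out of every multiple
            while residual[k] % n == 0:
                residual[k] //= n
    return [2 * k * p + 1 for k in range(1, k_max + 1) if residual[k] == 1]

# Arbitrary toy values: exponent 4099, k up to 10,000, smoothness bound 100.
p, k_max, B1 = 4099, 10_000, 100
hits = smooth_q_sieve(p, k_max, B1)
print(len(hits), "of", k_max, "candidates q = 2*k*p + 1 have a 100-smooth q - 1")
```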
#11
Jun 2003
1169₁₀ Posts

Quote:
Prime95, by contrast, will comfortably run P-1 on 20-million-digit numbers using that much memory, and if pressed could make do with just a few hundred MB. The downside is that with Prime95, stage 2 typically takes longer than stage 1, even though B2 is typically only about 50 times B1. With gmp-ecm, stage 2 is much faster than stage 1 despite B2 being tens of thousands of times larger than B1.