#562
"Richard B. Woods"
Aug 2002
Wisconsin USA
2²×3×641 Posts
Once you choose a B1 and B2, P-1 will find any factor that is of the form:

(product of any number of prime factors or prime powers no greater than B1) × (a prime factor or prime power no greater than B2) + 1

In the case of Mersennes, Prime95 also throws in the Mersenne exponent (n), so that it will find any factor of the form:

(product of any number of prime factors or prime powers no greater than B1) × (a prime factor or prime power no greater than B2) × n + 1

That is deterministic, not probabilistic.
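To make that recipe concrete, here is a minimal stage-1 sketch in Python (this is not Prime95's code; the function names are invented for illustration, and stage 2 is omitted):

```python
from math import gcd

def small_primes(limit):
    """Simple sieve of Eratosthenes up to and including limit."""
    sieve = [True] * (limit + 1)
    sieve[0:2] = [False, False]
    for i in range(2, int(limit ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i :: i] = [False] * len(sieve[i * i :: i])
    return [i for i, is_p in enumerate(sieve) if is_p]

def pminus1_stage1(n, B1):
    """Stage-1 P-1 on the Mersenne number N = 2^n - 1.

    The stage-1 exponent E is n times every prime power <= B1, mirroring
    the factor form above: any factor q = (product of prime powers <= B1)
    * n + 1 satisfies q - 1 | E, so 3^E == 1 (mod q) and the gcd reveals q.
    """
    N = 2 ** n - 1
    E = n                              # throw in the Mersenne exponent
    for p in small_primes(B1):
        pk = p
        while pk * p <= B1:            # largest power of p not above B1
            pk *= p
        E *= pk
    return gcd(pow(3, E, N) - 1, N)
```

For example, 2^11 − 1 = 2047 = 23 × 89, and 23 − 1 = 2 × 11, so even the tiny bound B1 = 5 makes `pminus1_stage1(11, 5)` return 23: deterministically, because 22 divides the stage-1 exponent.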
If you TF to 2^75, you'll find any factor less than 2^75. If you subsequently TF from 2^75 to 2^76, you'll find any factor between 2^75 and 2^76. The same is true of P-1, except that the B1/B2 search space is multidimensional, not the simple one-dimensional number line that TF searches.
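A toy version of that exhaustive search, to make the "can't miss within the range" property concrete (a sketch only; real TF software sieves candidates far more cleverly, and the function name is mine):

```python
def trial_factor(p, max_bits):
    """Search for a prime factor of M_p = 2^p - 1 below 2^max_bits.

    Every prime factor q of M_p has the form q = 2*k*p + 1 with
    q == +/-1 (mod 8), so only those candidates need testing; within
    the searched range nothing can be missed.
    """
    N = 2 ** p - 1
    q = 2 * p + 1
    while q < 2 ** max_bits and q * q <= N:
        if q % 8 in (1, 7) and pow(2, p, q) == 1:
            return q                  # smallest factor in the range
        q += 2 * p
    return None                       # no factor below 2^max_bits
```

For instance, `trial_factor(23, 8)` returns 47 (= 2 × 23 + 1), while `trial_factor(31, 20)` returns None, since M_31 is prime.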
Because the algorithm is based on properties of a multiplicative group, it does not matter what the starting number is (Prime95 always uses 3), just as long as it is coprime with the number being factored, as the Wikipedia article on the method also states.
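A small demonstration of that base-independence (my own toy numbers: 2047 = 23 × 89, and E = 660 is the stage-1 exponent that B1 = 5 would give for exponent 11). Since 23 − 1 = 2 × 11 divides E, *every* base coprime to the target exposes the factor 23 by Fermat/Lagrange:

```python
from math import gcd

# 2047 = 23 * 89.  Because 23 - 1 = 2 * 11 divides E = 660, we have
# a^660 == 1 (mod 23) for ANY base a coprime to 23, so gcd(a^E - 1, N)
# is divisible by 23 no matter which starting number is chosen.
N, E = 2 ** 11 - 1, 660          # E = 11 * (2^2 * 3 * 5), i.e. B1 = 5
for a in (3, 5, 7, 13):
    g = gcd(pow(a, E, N) - 1, N)
    assert g > 1 and g % 23 == 0
```

(Different bases can pick up different parts of the group: a base whose order mod 89 also happens to divide E will return the whole of 2047 rather than just 23, but a factor is found either way.)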
Perhaps someone with a more practical understanding of multiplicative groups than my intuitive understanding can explain it better for you.

Last fiddled with by cheesehead on 2011-05-12 at 21:57
#563
Sep 2006
The Netherlands
3⁶ Posts
If we perform a brute-force search, we aren't going to miss anything. TF is a very simple form of brute-force search: you do n bits, then n+1 bits, and so on.

P-1 will sometimes find a big factor for one prime, then miss a simple 63-bit factor for another prime. In fact, the odds that it misses that 63-bit factor are *huge*. So from the brute-force viewpoint the result is totally random. Missing small factors, say < 128 bits, is incredibly childish for a factoring algorithm, considering the system time put into P-1.

What I call the hill, you of course correctly call the multiplicative group; as a search expert I only see things from the search viewpoint, and I just see that it's the wrong hill nearly always. Only in a few percent of cases is it the correct hill. Very primitive.

If we compare the public factorisation algorithms with progress in, for example, computer chess, then factorisation algorithms have stood still; they are still in the 1970s, so to speak, compared to how search moved forward in computer chess. But of course there was public money to be made in computer chess, and lots of prestige. In factorisation, publicly, nothing happened that's even worth mentioning.

Regards,
Vincent

Last fiddled with by diep on 2011-05-12 at 23:21
#564
Sep 2006
The Netherlands
3⁶ Posts
In fact, we've now done P-1 on some tens of thousands of exponents for Wagstaff, around and mostly above 8 million bits; it doesn't even break even. And that even though we have only trial factored to 61 bits so far. We still have to throw a GPU into action to factor further.
Jeff Gilchrist also tried other forms of public factorisation, ECM and others. All have been tried on his cluster. Nothing even remotely breaks even.

Well, let me rephrase that. The old K8s are of course not so great at SSE2, just one instruction per cycle (two flops per cycle per core), so from his viewpoint having them factor is fine, as they are slow for the Vrba-Reix test anyway. (Which is of course one-to-one similar to an LL test; Wagstaff has 100% the same properties as Mersenne.) Even on those K8s it hardly breaks even. We prefer the factorisation only because it's nicer to have things factored than to maybe have one or two wrong residues.

So, 50 years of factorisation or so: what has it brought in terms of public algorithms, especially in the past 16 years? *Nothing.* How come?

Regards,
Vincent
#565
Dec 2010
Monticello
5·359 Posts
So here's my next question: if, for our P-1 operation, we are spending the majority of our time raising 3 to a high power, can we improve the algorithm's running time by saving that result, and then multiplying it by 3 raised to a much smaller power for the next, presumably nearby, Mersenne exponent on which we wish to run P-1?
#566
Jun 2003
1001111011110₂ Posts
Can you provide some details on this? What were the bounds used? What was the run-time relative to a V-R test? What was the hit rate? How much memory was used? What software was used?
#567
"Richard B. Woods"
Aug 2002
Wisconsin USA
2²×3×641 Posts
I use the terminology "B1/B2 search space" to denote the two-dimensional space that P-1 searches for factors of N. Neither of its dimensions is the one-dimensional integer number line that TF searches. Instead, in B1/B2 P-1 search space, one axis represents the largest prime or prime-power factor of P−1, where P is any factor of N. The other axis represents the second-largest prime or prime-power factor of P−1.

Most intersections at integral coordinates of that space are empty, but each factor P of N occupies one intersection. The P-1 algorithm searches the rectangle with (0,0) at one corner and (B1,B2) at the diagonally opposite corner. If any factor of N resides in that rectangle, P-1 will definitely find it.

Examples: the integer 19 corresponds to the intersection (2,9) in B1/B2 P-1 search space, since 19 − 1 = 2·3². 37 corresponds to (4,9). 127 corresponds to (7,9), and 631 corresponds to that same intersection (7,9). Any number that has either 127 or 631 as a factor can be factored by P-1 with limits B1=7 and B2=9. That same (B1=7, B2=9) search will find any other factor that exists (corresponds to an intersection) in the rectangle from (0,0) to (7,9). For instance, intersection (4,5) corresponds to 61 (since 61 − 1 = 2²·3·5), which would also be found by a (7,9) search -- if it is a factor of the number being tested. If more than one prime factor is in the search rectangle, P-1 will find their product, so factors found by P-1 need to be checked for compositeness.

Now, a mapping from that P-1 search space to some other space may transform the nice neat rectangle from (0,0) to (B1,B2) into a pattern of hills and valleys, but insisting that such hills and valleys are the proper way, or the only way, to look at a P-1 search will lead one to misunderstandings such as considering P-1 to be a probabilistic method. In the most appropriate view of the P-1 search space, it is a deterministic rectangle on the other end of the mapping.
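Those coordinates can be computed mechanically. A short sketch reproducing the worked examples (the helper names are mine, not standard):

```python
def prime_power_parts(m):
    """Split m into its maximal prime-power factors, e.g. 126 -> [2, 9, 7]."""
    parts, d = [], 2
    while d * d <= m:
        if m % d == 0:
            pk = 1
            while m % d == 0:
                pk *= d
                m //= d
            parts.append(pk)
        d += 1
    if m > 1:
        parts.append(m)
    return parts

def p_minus_1_coords(q):
    """(second-largest, largest) prime-power factor of q - 1."""
    parts = sorted(prime_power_parts(q - 1))
    largest = parts[-1]
    second = parts[-2] if len(parts) > 1 else 1
    return (second, largest)
```

With this, `p_minus_1_coords(19)` gives (2, 9), and 127 and 631 both land on (7, 9), matching the examples above.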
What the multiplicative group corresponds to is a rectangle in P-1 search space from (0,0) to (second-largest prime or prime-power factor of P−1, largest prime or prime-power factor of P−1), where P is any factor of the group order. That rectangle will map to many noncontiguous hills and valleys in your conventional search space.
In your space, a 128-bit prime number may be just 2 away from a number whose largest factor is, say, 11 bits. After mapping to P-1 space, those two numbers may be very far apart, or close together, depending on the factorizations of each number minus 1.
Last fiddled with by cheesehead on 2011-05-13 at 07:51 |
#568
"Richard B. Woods"
Aug 2002
Wisconsin USA
1E0C₁₆ Posts
The problem is that in practice, for the size of numbers we're working with, we have to do all the exponentiation mod N (N is often some Mersenne number 2^p − 1). The same consideration applies to the squaring in the Lucas-Lehmer test as well as to the exponentiation in P-1. Without the modular reductions, we'd quickly (surprisingly quickly) be trying to store numbers which have more digits than the number of particles in the known universe. See the response to "Why not skip the modular reduction in the LL test and reuse residues?" on this page http://www.mersennewiki.org/index.ph...d_improvements and the second and third paragraphs (which include a calculation about the number of particles in the known universe) under "Simple Explanation" on this page http://www.mersennewiki.org/index.php/Lucas-Lehmer_Test

Since we're therefore dealing with residues mod N, the ones we compute for one value of N are useless when computing for a different value of N. So it is necessary to start fresh each time ... unless one is continuing a computation for the same N, but to higher limits. Saving the final residue from an unsuccessful P-1 is therefore useful when another P-1 test simply extends the limits used by the first -- but note that here we're not changing N, which is why the earlier mod-N result is useful.

Furthermore (again, for the size of numbers we're dealing with), it's actually much faster to restart the exponentiation afresh, with modular arithmetic, than it would be to "read" stored full non-modular products from wherever we managed to store them. (Consider that the "disk" you'd be reading from might need to be larger than the Milky Way galaxy.)

Last fiddled with by cheesehead on 2011-05-13 at 08:01
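A small illustration of that growth, using nothing beyond Python's built-in integers: each unreduced squaring doubles the size of the number, so after fewer than 300 squarings the digit count alone would pass the roughly 10^80 particles usually quoted for the known universe, while the reduced sequence never outgrows N.

```python
# Squaring WITHOUT modular reduction: the size doubles every step.
x = 3
for _ in range(10):
    x = x * x                  # after k steps, x = 3^(2^k)
assert x == 3 ** 1024 and x.bit_length() == 1624   # already ~1624 bits

# The same squarings reduced mod N never outgrow N.
N = 2 ** 1279 - 1              # a Mersenne prime, used here as the modulus
y = 3
for _ in range(10):
    y = y * y % N
assert y.bit_length() <= 1279
```

Ten steps take 3 to about 1624 bits; the unreduced trajectory is exponential in the step count, which is exactly why the residues must stay mod N.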
#569
Bamboozled!
"πΊππ·π·π"
May 2003
Down not across
3·5·719 Posts
They have produced a number of exponential but fast algorithms for finding small factors, including P-1, P+1, SQUFOF, and Pollard rho. All of these algorithms are public and all are in daily use. Much of the advance in the last few decades can be ascribed to three people: John Pollard, Hendrik Lenstra and Peter Montgomery.

Paul
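For readers who haven't met one of these, here is what a minimal Pollard rho looks like (Floyd cycle-finding with the textbook parameters; a sketch, not production code):

```python
from math import gcd

def pollard_rho(n, c=1):
    """Pollard rho with f(x) = x^2 + c mod n and Floyd cycle detection.

    Finds a nontrivial factor of composite n when some prime factor p
    gives a short cycle for f modulo p; like P-1 it is fast but offers
    no guarantee for any particular n.
    """
    x = y = 2
    d = 1
    while d == 1:
        x = (x * x + c) % n            # tortoise: one step
        y = (y * y + c) % n            # hare: two steps
        y = (y * y + c) % n
        d = gcd(abs(x - y), n)
    return d if d != n else None       # d == n means retry with another c
```

With the classic example n = 8051 = 83 × 97, `pollard_rho(8051)` returns 97 after three iterations.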
#570
Dec 2010
Monticello
5×359 Posts
In the P-1 operation, we do all the arithmetic modulo some N... I assume that N would be 2^P − 1, with no other reasonable choice, right? (That is, some other choice, such as the product of the three Mersenne numbers we wanted to test in parallel, would more than triple the compute time, and a product of, say, 10 wouldn't allow testing of any additional Mersenne exponents beyond the 10 listed.)

I'm thinking at this point that the only obvious optimization of P-1 on a GPU would be to run exponents in parallel, up to the size of available memory, thus maximising usage of the many compute cores in the GPU.
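As a CPU-side sketch of that "one exponent per worker" idea (threads standing in for GPU cores; stage 1 only, and the function names are invented):

```python
from concurrent.futures import ThreadPoolExecutor
from math import gcd

def stage1(job):
    """Independent stage-1 P-1 on M_n = 2^n - 1 (one job per exponent)."""
    n, B1 = job
    N = 2 ** n - 1
    E = n                                  # the Mersenne exponent is thrown in
    for p in (2, 3, 5, 7, 11, 13):         # enough primes for these tiny B1s
        if p > B1:
            break
        pk = p
        while pk * p <= B1:                # largest power of p not above B1
            pk *= p
        E *= pk
    return n, gcd(pow(3, E, N) - 1, N)

# Each exponent is a completely separate computation mod its own N,
# so the jobs map cleanly onto parallel workers.
jobs = [(11, 5), (23, 5), (29, 10)]
with ThreadPoolExecutor() as pool:
    results = dict(pool.map(stage1, jobs))
```

With these toy bounds, M_11 gives up its factor 23 immediately, and the M_23 and M_29 results are divisible by 47 and 233 respectively; nothing is shared between workers except the prime list.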
#571
Jun 2003
11736₈ Posts
Stage 2 of a single exponent can also be parallelised. |
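The reason stage 2 parallelises so naturally is that it is a product of independent terms, one per prime in (B1, B2]. A generic two-stage sketch (plain P-1 with no Mersenne-specific tweaks; the function names are mine):

```python
from math import gcd

def pminus1_two_stage(N, B1, B2):
    """Two-stage P-1: stage 1 covers prime powers <= B1, stage 2 then
    multiplies together one term per prime q in (B1, B2].  Those terms
    do not depend on each other, which is what lets stage 2 be split
    across parallel workers."""
    def primes(lo, hi):
        return [p for p in range(max(lo, 2), hi + 1)
                if all(p % d for d in range(2, int(p ** 0.5) + 1))]

    E = 1
    for p in primes(2, B1):
        pk = p
        while pk * p <= B1:
            pk *= p
        E *= pk
    x = pow(3, E, N)
    g = gcd(x - 1, N)
    if 1 < g < N:
        return g                           # stage-1 hit
    acc = 1
    for q in primes(B1 + 1, B2):           # each term is an independent job
        acc = acc * (pow(x, q, N) - 1) % N
    g = gcd(acc, N)
    return g if 1 < g < N else None
```

For example, `pminus1_two_stage(47 * 107, 5, 30)` returns 47: stage 1 misses both factors (47 − 1 = 2 × 23 and 107 − 1 = 2 × 53 each need a prime beyond B1 = 5), but stage 2's q = 23 term catches 47.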
#572
1976 Toyota Corona years forever!
"Wayne"
Nov 2006
Saskatchewan, Canada
3·5·313 Posts