[QUOTE=Mr. P-1;288369]To do a stage 1 P-1, you start by choosing a composite number E and then computing 3[sup]E[/sup]-1. The method will then find a factor p if E is divisible by p-1. This works for any E, not just smooth E. So for example, if we chose E=66=2*3*11, we would find the factor 23 but not the factor 19, even though 19-1 is smoother than 23-1.
The thing to realize is that the smoothness 'rule' isn't something inherent in the algorithm itself. Rather, it arises out of the optimisation of the algorithm. Every E power-smooth to some B1 is optimal in the sense that all other choices are either less likely to find a factor, more expensive to compute, or both. Stage 2 computes 3[sup]E*q[/sup] for a range of different primes q and will find a factor p if any E*q is divisible by p-1. Again, the algorithm will work (in the sense that it will find these factors) for an arbitrary set of primes q, but it's only worth doing if it can be implemented efficiently, which it can be on large blocks of consecutive primes. The "standard" P-1, then, chooses two bounds, B1 and B2, computes E power-smooth to B1 for stage 1, and uses all primes q between B1 and B2 for stage 2. Variants on this are worth considering: for example, one might compute E smooth to B1 but power-smooth to B2, or one might run stage 2 from 2 to B2 instead of from B1 to B2. The examples supplied above by axn and fivemack suggest that gmp-ecm does both.[/QUOTE] It would be very helpful to see a small, paper-and-pencil example of the P-1 algorithm, if someone would be willing to work through one.
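The two-stage recipe described above can be sketched in a few lines of Python. This is a toy illustration only, not how Prime95 or gmp-ecm implement it, and all names (`small_primes`, `pminus1`) and the test numbers are my own: stage 1 raises 3 to an exponent E that is power-smooth to B1, and stage 2 naively tries one extra prime q at a time.

```python
from math import gcd

def small_primes(limit):
    """All primes <= limit, by a simple sieve of Eratosthenes."""
    sieve = [True] * (limit + 1)
    sieve[0:2] = [False, False]
    for i in range(2, int(limit ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i::i] = [False] * len(sieve[i * i::i])
    return [p for p, is_p in enumerate(sieve) if is_p]

def pminus1(N, B1, B2):
    # Stage 1: E = product of the largest prime powers <= B1,
    # i.e. E is power-smooth to B1.
    x = 3
    for p in small_primes(B1):
        pk = p
        while pk * p <= B1:
            pk *= p
        x = pow(x, pk, N)          # accumulate 3^E mod N
    g = gcd(x - 1, N)
    if 1 < g < N:
        return g                   # factor found in stage 1
    # (g == N would be the "unlucky" case; g == 1 means continue.)
    # Stage 2 (naive form): try each prime q in (B1, B2];
    # this finds p whenever (p-1) divides E*q.
    for q in small_primes(B2):
        if q <= B1:
            continue
        g = gcd(pow(x, q, N) - 1, N)
        if 1 < g < N:
            return g
    return None
```

For instance, with B1=11 the exponent E = 8*9*5*7*11 = 27720 is divisible by 23-1 = 22, so `pminus1(23*101, 11, 50)` finds 23 in stage 1, while 53 (with 53-1 = 52 = 4*13) only falls to stage 2, since 52 divides 27720*13 but not 27720.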
[QUOTE=c10ck3r;288399]For the P-1 method, does the number have to be taken mod 2^P-1 each step, or could we compute, say, 3^50000, take that mod 2^P-1, and take it to the P power?[/QUOTE]
Bear in mind that a standard stage 1 with B1=13 would involve computing 3^360360-1. For the values of B1 we use in practice, typically > 100,000, computing the power without modular reduction at each step is completely infeasible.
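To make the B1=13 case concrete, here is a small Python sketch (the modulus 2^11-1 is my own toy choice, not from the posts above). Three-argument `pow` reduces modulo N after every squaring, so intermediate values never exceed N, whereas 3^360360 written out in full has roughly 172,000 decimal digits.

```python
from math import gcd, lcm

# Stage 1 exponent for B1 = 13: lcm of all integers up to 13,
# i.e. 8 * 9 * 5 * 7 * 11 * 13 = 360360
E = lcm(*range(2, 14))
assert E == 360360

N = 2**11 - 1            # = 2047 = 23 * 89, a toy Mersenne number
x = pow(3, E, N)         # square-and-multiply, reducing mod N each step
print(gcd(x - 1, N))     # prints 2047: both 23-1 = 22 and 89-1 = 88
                         # divide E, so the gcd is all of N here --
                         # the "unlucky" outcome mentioned below
```

Note that `math.lcm` requires Python 3.9 or later.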
[QUOTE=NBtarheel_33;288406]It would be very helpful to see a small, paper-and-pencil example of the P-1 algorithm, if someone would be willing to work through one.[/QUOTE]
I'm not willing to do a literal paper-and-pencil example but perhaps this will help. Take E=66 as in my example above. 3^66-1 = 30903154382632612361920641803528 is divisible by 23. If N is also divisible by 23 then GCD(N,3^66-1) cannot be 1. If we're unlucky then this GCD will be just N and we will have failed to find a factor. More likely it will be 23 or some multiple, in which case we (partially) factorise N.
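For anyone who wants to check this example without pencil and paper, the same computation fits in a few lines of Python (the composite N here is my own made-up choice for illustration):

```python
from math import gcd

N = 23 * 19 * 101        # hypothetical N with factors 23, 19 and 101
x = pow(3, 66, N)        # 3^66 mod N, with 66 = 2 * 3 * 11
print(gcd(x - 1, N))     # prints 23: 23-1 = 22 divides 66, so 23 is
                         # found, but 19-1 = 18 does not divide 66
```

This matches the earlier post: 23 is found because 22 | 66, while 19 is missed even though 18 is the smoother of the two.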
Question about P-1
My laptop is actually running P-1 tests on M54,xxx,xxx on the first thread and ECM tests on small Mersenne numbers on the second one.
I gave Prime95 1.5 GB of memory. As the laptop is active only 8 hours/day, it may happen that:

1 - Thread 1 starts Stage 1 with 1.5 GB of memory
2 - Thread 2 starts Stage 2 with 760 MB, and thread 1 is lowered to 760 MB
3 - Thread 2 finishes Stage 2
4 - Thread 1 regains 1.5 GB of memory
5 - Go to 2

a) How will Stage 2 of P-1 be affected by this change of memory allotments?
b) Is the algorithm still affordable?
c) At the end of P-1 Stage 2, how will the calculation be credited by PrimeNet? With higher or lower B2 bounds?

Luigi
[QUOTE=ET_;288443]I gave Prime95 1.5 GB of memory.[/QUOTE]Prime95 will determine optimal bounds under the assumption that it will have access to the full amount of the allocated RAM for all of stage 2. This logic works well for the standard model where P-1 is a small part at the start of an L-L test. For dedicated P-1 (or ECM) work, it would be best to put specific allocations in [i]local.txt[/i]:[code][Worker #1]
Memory=800
[Worker #2]
Memory=700[/code]Numbers are for example only; you can decide whether you want an equal or unequal distribution, whatever works best for your P-1 + ECM setup.
[QUOTE=James Heinrich;288444]Prime95 will determine optimal bounds under the assumption that it will have access to the full amount of the allocated RAM for all of stage 2. This logic works well for the standard model where P-1 is a small part at the start of an L-L test. For dedicated P-1 (or ECM) work, it would be best to put specific allocations in [i]local.txt[/i]:[code][Worker #1]
Memory=800
[Worker #2]
Memory=700[/code]Numbers are for example only; you can decide whether you want an equal or unequal distribution, whatever works best for your P-1 + ECM setup.[/QUOTE] Thank you James, this makes sense. :smile: I was worrying about the change of B2 settings during the calculation, wondering which one would be chosen by PrimeNet, and how a sudden B2 change affects the calculation. I will stick to fixed bounds for now, but I'm still curious about the questions... Luigi
[QUOTE=ET_;288448]Thank you James, this makes sense. :smile:
I was worrying about the change of B2 settings during the calculation, and wondering which one will be chosen by PrimeNet, and how the sudden B2 change affects the calculation. I will stick to fixed bounds for now, but I'm still curious about the questions... Luigi[/QUOTE] There should be no change in B2; it would just take longer to run stage 2 with the lower amount of memory available.
[QUOTE=ET_;288443]a) How will Stage 2 of P-1 be affected by this change of memory allotments?
b) Is the algorithm still affordable?
c) At the end of P-1 Stage 2, how will the calculation be credited by PrimeNet? With higher or lower B2 bounds?[/QUOTE]Bounds are calculated for optimal efficiency, under the assumption that stage 2 will have access to the full amount of RAM that you've configured for Prime95. To answer your questions directly:

a) The bounds used will be as calculated; runtime may be a little longer with less-than-full memory available than if all memory were available to the single instance.

b) Prime95 might have chosen different (smaller) bounds if it knew it would have less RAM available, to maintain the optimal balance between runtime and chance of finding a factor, but the difference should be relatively small. And indeed, some of the time it will likely have the full amount of RAM available.

c) You always get credit for what you ran. Once the bounds are chosen, that's what's used to run the P-1 and that's what you get credit for. The amount of RAM just affects how efficiently stage 2 can run, therefore how long it takes to run, and therefore how large the bounds should be to obtain the optimal balance between runtime and probability.
OK, now I think I have understood. :smile:
Thank you everybody! Luigi |
Low candidates available to P-1...
Just so you all know, there is a batch (347) of candidates available for P-1 in the 52M range from [URL="http://www.gpu72.com/reports/available/nop-1/"]GPU to 72[/URL] ("Not just for GPUs any more").
And if anyone is not yet working with the sub-project, please consider joining. (Hint, hint Mr. P-1... :smile:) |
A little pedantry :-)
As always in discussions (as above) of the P-1 bounds-choosing algorithm, recall that prime95 invokes it only for "Pfactor=" (or implicitly for "Doublecheck=" or "Test="), not for "Pminus1=" where the bounds are explicitly specified by the user.