2019-11-21, 00:18   #20
kriesel (Mar 2017, US midwest)
What's a good P-1 factoring strategy? Best?

Multiple P-1 applications let the user control the P-1 factoring bounds, or require them as input, on the command line, in worktodo records, or both. What inputs a user should choose depends on the goal. For the purpose of this post, the goal is to maximize the time saved, on average over many exponents, in searching for a new Mersenne prime. Finding factors is nice, but it is not the goal.

Several strategies have been discussed in some forum threads. Some advocate testing with 1-test-saved bounds. Others advocate quickly testing many exponents with very small bounds, claiming this saves time by eliminating some exponents cheaply before a lengthier factoring run at larger bounds. There are multiple candidate bounds for a single exponent (GPU72, PrimeNet, tests-saved). Which approach saves the most time in the long run has been unclear and, to my knowledge, not well tested and documented.
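The small-bounds-first question can be framed as an expected-cost comparison. Below is a minimal sketch of that framing, not this post's actual computation; all probabilities and timings are made-up placeholders:

```python
def cost_one_shot(t_big, p_big, t_test, tests_saved):
    # One P-1 run at the larger bounds; if no factor is found
    # (probability 1 - p_big), the primality tests still happen.
    return t_big + (1 - p_big) * tests_saved * t_test

def cost_two_stage(t_small, p_small, t_big, p_big, t_test, tests_saved):
    # Cheap small-bounds run first; only survivors get the larger-bounds
    # run.  Factors found by the small run are (roughly) a subset of those
    # the large run would find, so the survivor's conditional odds shrink.
    p_cond = (p_big - p_small) / (1 - p_small)
    return t_small + (1 - p_small) * (
        t_big + (1 - p_cond) * tests_saved * t_test
    )

# Hypothetical numbers (hours), for illustration only:
t_test = 100.0                    # one primality test
t_small, p_small = 1.0, 0.01      # quick low-bounds P-1
t_big, p_big = 4.0, 0.05          # full-bounds P-1
print(cost_one_shot(t_big, p_big, t_test, 2))
print(cost_two_stage(t_small, p_small, t_big, p_big, t_test, 2))
```

With these placeholder numbers the two-stage route costs slightly more in expectation, because the cheap first run rarely removes an exponent yet its time is always spent; the measured runs below address the same question with real run times and odds.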

I chose two exponents to repeatedly P-1 factor with different bounds. The exponents were chosen to represent near-future GIMPS P-1 wavefront factoring and 100-Mdigit exponents.

Several sets of bounds were used to determine approximately where the number of exponents factored per day is maximized, under the condition that B2 is twice B1. This fits well with gpuowl's minimum bounds, which have that relationship. Additionally, the tabulated GPU72 bounds, PrimeNet bounds, and 1-test-saved and 2-tests-saved bounds were run. The odds of finding a factor were computed for each exponent and bounds combination.

The probable time expenditure in P-1 factoring was computed from the actual logged run times and the odds of finding a factor in each run. All cases were premised on trial factoring to GPU72 limits first.
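The bookkeeping behind "probable time expenditure" can be sketched as expected primality-test time avoided minus P-1 run time. The numbers below are placeholders, not the measured values from these runs:

```python
def net_time_saved(p_factor, t_p1, t_test, tests_expected):
    # Expected primality-test hours avoided by finding a factor,
    # minus the hours spent on the P-1 run itself.
    return p_factor * tests_expected * t_test - t_p1

# Placeholder values: 4% factor odds, 3-hour P-1 run, 100-hour
# primality test, 2 tests expected per exponent.
print(net_time_saved(0.04, 3.0, 100.0, 2.0))
```

A bounds choice is worthwhile only while this quantity stays positive, and the best choice maximizes it.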

The total number of runs was 12 for the 102M exponent and 14 for the 333M exponent. This took an RX 480 almost ten days to complete, using the then-current commit of gpuowl.

The actual run times and odds of finding a factor for the various cases were combined into 7 different scenarios and 3 GIMPS cases, as applicable.
The scenarios are:
  1. Always use 2-tests-saved bounds initially
  2. Use highest factors/day bounds first, use 2-tests bounds on survivors
  3. Use 1-test-saved bounds, PrimeNet reissues for >=2-tests-saved bounds
  4. Use GPU72 bounds, PrimeNet reissues for >=2-tests-saved bounds
  5. Always use 1-test-saved bounds initially
  6. Use highest factors/day bounds first, use 1-test bounds on survivors
  7. Use PrimeNet bounds
The GIMPS cases are:
A: No further LL is done, and a single PRP returning composite is regarded as definitive (e.g. with proof generation and certification), so future work would consist entirely of a single primality test per exponent. This is a hypothetical or future case: there is still a lot of first-time LL, or PRP without proof, being performed, and a single LL or PRP (without proof generation) does not address honest errors or false reports.
B: All future primality tests will be double checked, whether LL or PRP, so future work would consist entirely of double primality tests per exponent. This is, I think, the closest to the 2020 situation for GIMPS. (Only about 1/4 of primality test results were PRP with proof in September 2020.)
C: An equal probability of single or double primality tests going forward; the midpoint case between A and B. This would correspond to PRP being single-tested and LL double-tested, each occurring at about the 2020 rate.

There are a few scenario and case combinations that don't make sense. This leaves 17 combinations for each exponent.
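Why the best bounds differ by GIMPS case can be seen in a toy model. This is my own simplifying assumption, not the post's measured data: suppose the factor probability grows roughly logarithmically in B1 while the P-1 run time grows linearly in B1 (both constants invented). Then the B1 that maximizes net savings scales with the expected number of tests saved:

```python
import math

def net_saved(b1, t_test, tests_expected, k_prob=0.006, k_cost=1e-6):
    # Toy model: probability ~ k_prob * ln(B1), P-1 cost ~ k_cost * B1.
    # Both constants are invented placeholders for illustration.
    return k_prob * math.log(b1) * tests_expected * t_test - k_cost * b1

def best_b1(t_test, tests_expected):
    # Scan a coarse grid of B1 candidates for the best net savings.
    candidates = [100_000 * i for i in range(1, 201)]
    return max(candidates, key=lambda b1: net_saved(b1, t_test, tests_expected))

for case, tests in (("A", 1.0), ("C", 1.5), ("B", 2.0)):
    print(case, best_b1(100.0, tests))
```

In this toy model the continuous optimum is B1 = k_prob * tests_expected * t_test / k_cost, so doubling the tests saved doubles the optimal B1. Qualitatively this matches the results below: the more tests a factor would retire, the larger the bounds worth running.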

The optimal time savings for the GIMPS cases were computed to be:
A: All single primality tests: use the 1-test-saved bounds

B: The 2020 situation, 2 primality tests: use the PrimeNet bounds

C: Equal probability of single or double primality tests: use the 1-test-saved bounds

The max-factors/day-first scenario was observed to be less effective in all cases, for both exponents. Possibly it would do somewhat better if run with a different B2/B1 ratio. The other extreme is the B2/B1 ratio implied by the 1-test-saved, 2-tests-saved, GPU72, or PrimeNet bounds, which is typically about 20 to 30.

I concluded that the proper P-1 bounds to use now in gpuowl are the larger (formerly PrimeNet, now GPU72 row) bounds, applied immediately and only once: no prior P-1, no 1-test-saved run first, no max-factors/day low-bounds run first.
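For illustration, a prime95-style worktodo entry of the kind I mean, which hands the program the exponent, trial-factoring depth, and tests-saved value and lets it choose bounds accordingly (the exponent and TF depth here are hypothetical, and the exact syntax accepted varies by program and version):

```
Pfactor=N/A,1,2,102181883,-1,77,2
```

The final field is the tests-saved value; per the conclusion above, the larger bounds would be requested directly, with no cheap preliminary run.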

Note that these test runs were made before the latest performance advances in gpuowl.

The actual bounds, odds, and calculations are documented in the attached pdf.

After these tests were done, the PrimeNet and GPU72 bounds were revised. Formerly the PrimeNet bounds were higher for a given exponent; after the revision, the GPU72-line bounds are higher. After PRP-with-proof adoption is substantially complete, downward adjustment of bounds is likely.

See also M344587487's post re efficiently and effectively performing P-1:

All the preceding was written before the enhanced efficiency of P-1 factoring stage 2 in prime95/mprime v30.8. This makes prime95 preferable for P-1 when sufficient memory is available. Most of the preceding still applies regardless of software used.

Attached Files
P-1 bounds selection in Gpuowl.pdf (25.3 KB)

Last fiddled with by kriesel on 2022-11-16 at 01:19 Reason: added referral to prime95 v30.8 or later