mersenneforum.org (https://www.mersenneforum.org/index.php)
-   Data (https://www.mersenneforum.org/forumdisplay.php?f=21)
-   -   COMPLETE!!!! Thinking out loud about getting under 20M unfactored exponents (https://www.mersenneforum.org/showthread.php?t=22476)

 chalsall 2021-04-21 16:21

[QUOTE=petrw1;576382]At the present time -Anon- is TF'ing 31.5M. Though he may pause if he reads this thread[/QUOTE]

I got the impression he had stopped at 75. But, good point; that range can be cleared by going to 76.

 axn 2021-04-21 16:56

1 Attachment(s)
The attached file has 153 assignments in the range 31.50-31.52, the first four of which are:
[CODE]Pplus1=1,2,31500281,-1,300000,0,1,75
Pplus1=1,2,31500361,-1,400000,0,1,75
Pplus1=1,2,31500457,-1,500000,0,1,75
Pplus1=1,2,31500767,-1,600000,0,1,75
[/CODE]
I have excluded those that have been TF'ed to 76 bits, as well as those that have been relatively poorly P-1'ed (the thinking being that those exponents might be better off being P-1'ed deeper rather than P+1'ed).

I don't know what the optimal parameters are, so instead I have gone for a range of B1 values from 300k to 600k. I am also relying on P95 to compute the optimal B2, hence B2 is set to 0.

If you can run these first four assignments and report back on B2 selection, run times (B1/B2 splits) and probabilities, we can then pick the optimal parameters.

Choice of B1 was based on looking at the current state of P-1 done in the range and then picking equal or lesser values (since P+1 stage 1 is about half as fast as P-1).
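For anyone puzzling over those Pplus1= lines, here is a small parser sketch; the field layout is as documented for prime95 v30.6 worktodo entries (stated here as an assumption, and the helper itself is hypothetical, not part of prime95):

```python
# Hypothetical parser for the Pplus1= worktodo lines shown above.
# Assumed field layout (per prime95 v30.6 docs):
#   Pplus1=k,b,n,c,B1,B2,nth_run,how_far_factored
# The number being factored is k*b^n + c; B2=0 lets prime95 pick stage 2 bounds.

def parse_pplus1(line):
    assert line.startswith("Pplus1=")
    k, b, n, c, B1, B2, nth_run, tf_bits = map(int, line[len("Pplus1="):].split(","))
    return {
        "number": f"{k}*{b}^{n}{c:+d}",  # e.g. 1*2^31500281-1, i.e. M31500281
        "B1": B1,
        "B2": B2,            # 0 means "let prime95 choose"
        "nth_run": nth_run,  # 1 -> first P+1 attempt (the standard 2/7 start)
        "tf_bits": tf_bits,  # trial factoring already done to 2^tf_bits
    }

a = parse_pplus1("Pplus1=1,2,31500281,-1,300000,0,1,75")
```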

 axn 2021-04-21 16:58

[QUOTE=chalsall;576385]I got the impression he had stopped at 75. But, good point; that range can be cleared by going to 76.[/QUOTE]

Oops, didn't see it. Ok, I will whip up something in that range. But the 31.5 file is still there if you need it.

 axn 2021-04-21 17:06

42.6 range P+1 assignments

1 Attachment(s)
Ok. Take 2. The attached file has about 300 assignments in the 42.60-42.62 range. The first five are:
[CODE]Pplus1=1,2,42600139,-1,400000,0,1,75
Pplus1=1,2,42600221,-1,500000,0,1,75
Pplus1=1,2,42600289,-1,600000,0,1,75
Pplus1=1,2,42600367,-1,700000,0,1,75
Pplus1=1,2,42600379,-1,800000,0,1,75
[/CODE]
If you can run these first five assignments and report back on B2 selection, run times (B1/B2 splits) and probabilities, we can then optimize the parameters for the rest of the runs.

 Prime95 2021-04-21 17:16

[QUOTE=petrw1;576359]George: Your posts talks about choosing B1 based on the current ECM B1.
For my case do you have any recommendations for P+1 for choosing B1 based on how much P-1 has already been done?

I am tempted to assume that P+1 will find different factors than P-1 which makes me hopeful even smallish B1/B2 will have reasonable success? Am I totally out to lunch?[/QUOTE]

We're learning about best bounds selection together.

I suggest picking one exponent and trying a B1/B2 combination -- plus specify the TF bit level. Start prime95 and it will tell you the chance of finding a factor. Abort, select a different B1/B2, run prime95 again and look at the chance of finding a factor. Repeat until you have a decent idea as to how bounds correlate with probability. Choose B2 somewhere between 20 and 80 times B1.

Are you out to lunch? Somewhat. P+1 will find different factors than P-1, but is [B]vastly[/B] inferior to P-1. The loss of the "free 2*p" in B1/B2 smoothness for Mersenne numbers where factors are known to be of the form 2*k*p+1 is huge. Add on to that the 50% chance that P+1 won't find the factor even if it is B1/B2 smooth. Depending on TF levels, you're probably looking at 100-300 P+1 runs to find a single factor.

I fear that for exponents above 20M you're better off extending P-1 bounds rather than doing P+1. As you gather data, you may prove my fear wrong.
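One way to act on that advice is to generate a small grid of probe assignments, sweeping B1 with B2 at 20x-80x. A sketch (the exponent and TF depth are examples; the line format mirrors the Pplus1= entries posted earlier in the thread):

```python
# Sketch: generate P+1 test assignments sweeping B1 and the B2/B1 ratio,
# following the suggested B2 range of 20x-80x B1. The worktodo field
# layout mirrors the Pplus1= lines posted earlier in this thread.
exponent, tf_bits = 31500457, 75  # example exponent, TF'ed to 2^75

lines = [
    f"Pplus1=1,2,{exponent},-1,{B1},{ratio * B1},1,{tf_bits}"
    for B1 in (250_000, 500_000, 1_000_000)
    for ratio in (20, 40, 80)
]
```

Feeding these to prime95 one at a time (and aborting after the startup probability line) yields a table like the ones posted later in the thread.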

 chalsall 2021-04-21 17:23

[QUOTE=axn;576391]If you can run these first five assignments and report back on B2 selection, run times (B1/B2 splits) and probabilities, we can then optimize the parameters for the rest of the runs.[/QUOTE]

OK. Thanks a lot! The first five are running now.

Seems like there's still some Primenet work to do though... :wink:

[CODE][Main thread Apr 21 17:19] Mersenne number primality test program version 30.6
[Main thread Apr 21 17:19] Optimizing for CPU architecture: Core i3/i5/i7, L2 cache size: 4x1 MB, L3 cache size: 33 MB
[Main thread Apr 21 17:19] Starting worker.
[Comm thread Apr 21 17:19] Registering assignment: P+1 M42600221
[Comm thread Apr 21 17:19] PrimeNet error 44: Invalid assignment type
[Comm thread Apr 21 17:19] ra: unsupported assignment work type: 6
[Work thread Apr 21 17:19] Worker starting
[Work thread Apr 21 17:19] Setting affinity to run worker on CPU core #1
[Work thread Apr 21 17:19] P+1 on M42600221 with B1=500000, B2=TBD
[Work thread Apr 21 17:19] Setting affinity to run helper thread 2 on CPU core #3
[Work thread Apr 21 17:19] Using AVX-512 FFT length 2240K, Pass1=128, Pass2=17920, clm=4, 4 threads
[Work thread Apr 21 17:19] Setting affinity to run helper thread 3 on CPU core #4
[Work thread Apr 21 17:19] Setting affinity to run helper thread 1 on CPU core #2
[Comm thread Apr 21 17:19] Done communicating with server.
[Work thread Apr 21 17:20] M42600221 stage 1 is 0.78% complete. Time: 35.714 sec.
[Work thread Apr 21 17:20] M42600221 stage 1 is 1.71% complete. Time: 35.840 sec.
[Work thread Apr 21 17:21] M42600221 stage 1 is 2.63% complete. Time: 36.335 sec.
[/CODE]

 Prime95 2021-04-21 17:24

We do know that P+1 is better than ECM -- roughly the same chance of success but several times faster.

We do know that TF becomes more and more expensive the smaller the exponent. We also know that deep P-1 has been done on almost all "small" exponents.

For the PRP-CF and this 20M project, a coordinated P+1 attack on exponents below say 5 or 10 million should be worthwhile. It might be nice to start a new thread to do the coordination and reach a consensus on the target B1 for the various exponent ranges.

 Prime95 2021-04-21 17:26

[QUOTE=chalsall;576396]
Seems like there's still some Primenet work to do though... :wink:[/QUOTE]

Primenet accepts P+1 results, but does not recognize/coordinate P+1 assignments.

 axn 2021-04-21 17:35

[QUOTE=chalsall;576396]OK. Thanks a lot! The first five are running now.

Seems like there's still some Primenet work to do though... :wink:
[/QUOTE]

[QUOTE=Prime95;576398]Primenet accepts P+1 results, but does not recognize/coordinate P+1 assignments.[/QUOTE]

Ok, I guess we need to add N/A at the beginning to avoid the assignment registration attempt, right? (Or set UsePrimenet=0 / report manually.)
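Assuming the N/A trick works for Pplus1 entries the way it does for other worktodo types (an assumption), rewriting the file is trivial; a sketch:

```python
# Sketch: prefix each Pplus1 assignment with "N/A" so prime95 skips the
# PrimeNet registration attempt (assuming the AID field behaves here as it
# does for other worktodo entry types -- unverified).
def suppress_registration(line):
    return line.replace("Pplus1=", "Pplus1=N/A,", 1)

out = suppress_registration("Pplus1=1,2,42600139,-1,400000,0,1,75")
```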

 James Heinrich 2021-04-21 18:00

This is crude, but I have created a placeholder page to list what known P+1 efforts have been recorded:
[url]https://www.mersenne.ca/pplus1.php[/url]

Note of course that my data will always be up to 24h out of date (synch'ed just after midnight UTC).
Will get a better report on mersenne.org (at least if George can email me where to find the P+1 data).

 petrw1 2021-04-21 18:51

[QUOTE=Prime95;576394]
Are you out to lunch? Somewhat. P+1 will find different factors than P-1, but is [B]vastly[/B] inferior to P-1. The loss of the "free 2*p" in B1/B2 smoothness for Mersenne numbers where factors are known to be of the form 2*k*p+1 is huge. Add on to that the 50% chance that P+1 won't find the factor even if it is B1/B2 smooth. Depending on TF levels, you're probably looking at 100-300 P+1 runs to find a single factor.

I fear that for exponents above 20M you're better off extending P-1 bounds rather than doing P+1. As you gather data, you may prove my fear wrong.[/QUOTE]

Yes, it seems P-1 is still better. Though the odds of finding a factor per assignment are dropping in these stubborn ranges, they're still in the 50-80 range (i.e., roughly 1 factor per 50-80 assignments).

 chalsall 2021-04-21 19:02

[QUOTE=Prime95;576398]Primenet accepts P+1 results, but does not recognize/coordinate P+1 assignments.[/QUOTE]

OK... Here are the log entries from my fastest machine for those who understand the maths behind all this. The probability percentage is indeed low... This is for [URL="https://www.mersenne.org/report_exponent/?exp_lo=42600139&full=1"]42600139[/URL].

[CODE]
[Main thread Apr 21 13:11] Mersenne number primality test program version 30.6
[Main thread Apr 21 13:11] Optimizing for CPU architecture: Core i3/i5/i7, L2 cache size: 6x256 KB, L3 cache size: 12 MB
[Main thread Apr 21 13:11] Starting worker.
[Comm thread Apr 21 13:11] Updating computer information on the server
[Work thread Apr 21 13:11] Worker starting
[Work thread Apr 21 13:11] Setting affinity to run worker on CPU core #1
[Work thread Apr 21 13:11] P+1 on M42600139 with B1=400000, B2=TBD
[Work thread Apr 21 13:12] Setting affinity to run helper thread 5 on CPU core #6
[Work thread Apr 21 13:12] Setting affinity to run helper thread 4 on CPU core #5
[Work thread Apr 21 13:12] Setting affinity to run helper thread 3 on CPU core #4
[Work thread Apr 21 13:12] Using FMA3 FFT length 2304K, Pass1=384, Pass2=6K, clm=2, 6 threads
[Work thread Apr 21 13:12] Setting affinity to run helper thread 2 on CPU core #3
[Work thread Apr 21 13:12] Setting affinity to run helper thread 1 on CPU core #2
[Work thread Apr 21 13:12] M42600139 stage 1 is 1.00% complete. Time: 42.031 sec.
[Work thread Apr 21 13:13] M42600139 stage 1 is 2.16% complete. Time: 42.031 sec.
[Work thread Apr 21 13:14] M42600139 stage 1 is 3.31% complete. Time: 42.133 sec.

[Work thread Apr 21 14:11] M42600139 stage 1 is 96.63% complete. Time: 42.167 sec.
[Work thread Apr 21 14:11] M42600139 stage 1 is 97.80% complete. Time: 42.217 sec.
[Work thread Apr 21 14:12] M42600139 stage 1 is 98.90% complete. Time: 42.254 sec.
[Work thread Apr 21 14:13] M42600139 stage 1 complete. 1020144 transforms. Time: 2152.052 sec.
[Work thread Apr 21 14:13] Stage 1 GCD complete. Time: 8.352 sec.
[Work thread Apr 21 14:13] With trial factoring done to 2^75, optimal B2 is 42*B1 = 16800000.
[Work thread Apr 21 14:13] Chance of a new factor assuming no ECM has been done is 0.274%
[Work thread Apr 21 14:13] D: 630, relative primes: 1439, stage 2 primes: 1045406, pair%=94.24
[Work thread Apr 21 14:13] Using 26374MB of memory.
[Work thread Apr 21 14:13] Stage 2 init complete. 4243 transforms. Time: 13.237 sec.
[Work thread Apr 21 14:14] M42600139 stage 2 is 1.67% complete. Time: 35.680 sec.
[Work thread Apr 21 14:14] M42600139 stage 2 is 3.35% complete. Time: 35.685 sec.
[Work thread Apr 21 14:15] M42600139 stage 2 is 5.05% complete. Time: 35.688 sec.

[Work thread Apr 21 14:46] M42600139 stage 2 is 95.36% complete. Time: 35.681 sec.
[Work thread Apr 21 14:47] M42600139 stage 2 is 97.08% complete. Time: 35.682 sec.
[Work thread Apr 21 14:47] M42600139 stage 2 is 98.79% complete. Time: 35.678 sec.
[Work thread Apr 21 14:48] M42600139 stage 2 complete. 1154122 transforms. Time: 2060.598 sec.
[Work thread Apr 21 14:48] Stage 2 GCD complete. Time: 8.313 sec.
[Work thread Apr 21 14:48] M42600139 completed P+1, B1=400000, B2=16800000, Wi8: 4F3C1C20
[Comm thread Apr 21 14:48] Sending result to server: UID: wabbit/rdt1, M42600139 completed P+1, B1=400000, B2=16800000, Wi8: 4F3C1C20
[/CODE]

 firejuggler 2021-04-21 19:17

So, 1h30m for a 0.274% chance of finding a factor?
Should one run a normal P+1 (3 tries), i.e. about 5 hours, for a less-than-one-percent chance of finding a factor?

It is indeed very low.

 petrw1 2021-04-21 19:29

And unless P+1 takes into account how much P-1 has already been done, it will be lower yet.
Or is prior P-1 not relevant to the success rate of P+1?

 Prime95 2021-04-21 19:33

[QUOTE=petrw1;576417]And unless P+1 takes into account how much P-1 has already been done, it will be lower yet.
Or is prior P-1 not relevant to the success rate of P+1?[/QUOTE]

P-1 and P+1 search space is almost completely independent.
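If the two search spaces really are (almost) independent, the combined chance is just the complement product, which for small probabilities is essentially the sum. A sketch with hypothetical per-run probabilities (placeholders, not thread data):

```python
# Sketch: combined success chance of independent P-1 and P+1 attempts.
# The probabilities below are hypothetical placeholders, not thread data.
p_pm1 = 0.05    # chance the P-1 run finds a factor
p_pp1 = 0.003   # chance the P+1 run finds a factor

combined = 1 - (1 - p_pm1) * (1 - p_pp1)   # ~= p_pm1 + p_pp1 for small values
```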

 Prime95 2021-04-21 19:35

[QUOTE=firejuggler;576416]So, 1H30 for a 0.274% chance of finding a factor?
Should one run a normal P+1 (3 tries) , 5 H for a less than a percent chance of finding a factor?

It is indeed very low.[/QUOTE]

You'll get better chances with smaller exponents -- less TF has been done. I was getting 1+% estimates in the 4.7M range with B1=1000000.

 Prime95 2021-04-21 20:00

[QUOTE=Prime95;576394]
I suggest picking one exponent and try a B1/B2 combination -- plus specify the TF bit level. Start prime95 and it will tell you the chance of finding a factor. Abort, select different B1/B2, run prime95 and look at the chance of finding a factor. Repeat until you have a decent idea as to how bounds correlate with probability. [/QUOTE]

You also need to clear the Pplus1BestB2 option to get the quick probability at startup.

Now that I've fixed the crash bug reading stage 2 save file, I'll gather some of this data and post it here.

 Prime95 2021-04-21 20:11

31M expo, TF'ed to 2^75

[CODE][Apr 21 16:05] P+1 on M31500457 with B1=250000, B2=5000000
[Apr 21 16:05] Chance of finding a factor assuming no ECM has been done is an estimated 0.16%

[Apr 21 16:06] P+1 on M31500457 with B1=250000, B2=10000000
[Apr 21 16:06] Chance of finding a factor assuming no ECM has been done is an estimated 0.194%

[Apr 21 16:06] P+1 on M31500457 with B1=250000, B2=20000000
[Apr 21 16:06] Chance of finding a factor assuming no ECM has been done is an estimated 0.232%

[Apr 21 16:06] P+1 on M31500457 with B1=500000, B2=10000000
[Apr 21 16:06] Chance of finding a factor assuming no ECM has been done is an estimated 0.261%

[Apr 21 16:07] P+1 on M31500457 with B1=500000, B2=20000000
[Apr 21 16:07] Chance of finding a factor assuming no ECM has been done is an estimated 0.313%

[Apr 21 16:07] P+1 on M31500457 with B1=500000, B2=40000000
[Apr 21 16:07] Chance of finding a factor assuming no ECM has been done is an estimated 0.37%

[Apr 21 16:08] P+1 on M31500457 with B1=1000000, B2=20000000
[Apr 21 16:08] Chance of finding a factor assuming no ECM has been done is an estimated 0.399%

[Apr 21 16:08] P+1 on M31500457 with B1=1000000, B2=40000000
[Apr 21 16:08] Chance of finding a factor assuming no ECM has been done is an estimated 0.475%

[Apr 21 16:08] P+1 on M31500457 with B1=1000000, B2=80000000
[Apr 21 16:08] Chance of finding a factor assuming no ECM has been done is an estimated 0.556%
[/CODE]

 chalsall 2021-04-21 20:12

Initial empirical data from five runs...

These next four runs (with different B1s as given by axn) were run on GCE 8 vcore instances with 30G of RAM available to them:

[CODE]
[Work thread Apr 21 17:19] P+1 on M42600221 with B1=500000, B2=TBD
[Work thread Apr 21 17:19] Setting affinity to run helper thread 2 on CPU core #3
[Work thread Apr 21 17:19] Using AVX-512 FFT length 2240K, Pass1=128, Pass2=17920, clm=4, 4 threads
[Work thread Apr 21 17:19] Setting affinity to run helper thread 3 on CPU core #4
[Work thread Apr 21 17:19] Setting affinity to run helper thread 1 on CPU core #2
[Work thread Apr 21 17:20] M42600221 stage 1 is 0.78% complete. Time: 35.714 sec.

[Work thread Apr 21 18:28] M42600221 stage 1 complete. 2177465 transforms. Time: 4116.695 sec.
[Work thread Apr 21 18:28] Stage 1 GCD complete. Time: 11.694 sec.
[Work thread Apr 21 18:28] With trial factoring done to 2^75, optimal B2 is 44*B1 = 22000000.
[Work thread Apr 21 18:28] Chance of a new factor assuming no ECM has been done is 0.321%
[Work thread Apr 21 18:28] D: 630, relative primes: 1683, stage 2 primes: 1347723, pair%=95.30
[Work thread Apr 21 18:28] Using 29991MB of memory.
[Work thread Apr 21 18:28] Stage 2 init complete. 4961 transforms. Time: 21.750 sec.
[Work thread Apr 21 18:29] M42600221 stage 2 is 1.30% complete. Time: 33.963 sec.

[Work thread Apr 21 19:08] M42600221 stage 2 is 99.03% complete. Time: 32.330 sec.
[Work thread Apr 21 19:09] M42600221 stage 2 complete. 1474580 transforms. Time: 2416.992 sec.
[Work thread Apr 21 19:09] Stage 2 GCD complete. Time: 11.498 sec.
[Work thread Apr 21 19:09] M42600221 completed P+1, B1=500000, B2=22000000, Wi8: 4F793D3E
[Comm thread Apr 21 19:09] Sending result to server: UID: ***/GCE_2, M42600221 completed P+1, B1=500000, B2=22000000, Wi8: 4F793D3E

[Work thread Apr 21 17:19] P+1 on M42600289 with B1=600000, B2=TBD
[Work thread Apr 21 17:20] M42600289 stage 1 is 0.64% complete. Time: 36.778 sec.

[Work thread Apr 21 18:40] M42600289 stage 1 is 99.56% complete. Time: 37.153 sec.
[Work thread Apr 21 18:40] M42600289 stage 1 complete. 2614375 transforms. Time: 4828.669 sec.
[Work thread Apr 21 18:40] Stage 1 GCD complete. Time: 11.328 sec.
[Work thread Apr 21 18:40] With trial factoring done to 2^75, optimal B2 is 46*B1 = 27600000.
[Work thread Apr 21 18:40] Chance of a new factor assuming no ECM has been done is 0.364%
[Work thread Apr 21 18:40] D: 630, relative primes: 1683, stage 2 primes: 1669036, pair%=95.27
[Work thread Apr 21 18:40] Using 29992MB of memory.
[Work thread Apr 21 18:40] Stage 2 init complete. 4961 transforms. Time: 20.257 sec.
[Work thread Apr 21 18:41] M42600289 stage 2 is 1.05% complete. Time: 31.121 sec.

[Work thread Apr 21 19:28] M42600289 stage 2 is 99.59% complete. Time: 31.242 sec.
[Work thread Apr 21 19:28] M42600289 stage 2 complete. 1827707 transforms. Time: 2857.538 sec.
[Work thread Apr 21 19:28] Stage 2 GCD complete. Time: 11.302 sec.
[Work thread Apr 21 19:28] M42600289 completed P+1, B1=600000, B2=27600000, Wi8: 4FBB23DF
[Comm thread Apr 21 19:28] Sending result to server: UID: ***/GCE_1, M42600289 completed P+1, B1=600000, B2=27600000, Wi8: 4FBB23DF

[Work thread Apr 21 17:20] P+1 on M42600367 with B1=700000, B2=TBD
[Work thread Apr 21 17:20] M42600367 stage 1 is 0.53% complete. Time: 36.484 sec.

[Work thread Apr 21 18:53] M42600367 stage 1 is 99.83% complete. Time: 36.878 sec.
[Work thread Apr 21 18:53] M42600367 stage 1 complete. 3048479 transforms. Time: 5579.300 sec.
[Work thread Apr 21 18:53] Stage 1 GCD complete. Time: 11.253 sec.
[Work thread Apr 21 18:53] With trial factoring done to 2^75, optimal B2 is 47*B1 = 32900000.
[Work thread Apr 21 18:53] Chance of a new factor assuming no ECM has been done is 0.401%
[Work thread Apr 21 18:53] D: 630, relative primes: 1683, stage 2 primes: 1969321, pair%=95.19
[Work thread Apr 21 18:53] Using 29994MB of memory.
[Work thread Apr 21 18:54] M42600367 stage 2 is 0.89% complete. Time: 31.187 sec.

[Work thread Apr 21 19:49] M42600367 stage 2 is 99.13% complete. Time: 31.257 sec.
[Work thread Apr 21 19:50] M42600367 stage 2 complete. 2158931 transforms. Time: 3377.253 sec.
[Work thread Apr 21 19:50] Stage 2 GCD complete. Time: 11.398 sec.
[Work thread Apr 21 19:50] M42600367 completed P+1, B1=700000, B2=32900000, Wi8: 4C34DDA9
[Comm thread Apr 21 19:50] Sending result to server: UID: ***/GCE_3, M42600367 completed P+1, B1=700000, B2=32900000, Wi8: 4C34DDA9

[Work thread Apr 21 17:20] P+1 on M42600379 with B1=800000, B2=TBD
[Work thread Apr 21 17:21] M42600379 stage 1 is 0.46% complete. Time: 34.118 sec.

[Work thread Apr 21 18:59] M42600379 stage 1 is 99.95% complete. Time: 34.114 sec.
[Work thread Apr 21 18:59] M42600379 stage 1 complete. 3486053 transforms. Time: 5931.664 sec.
[Work thread Apr 21 18:59] Stage 1 GCD complete. Time: 10.011 sec.
[Work thread Apr 21 18:59] With trial factoring done to 2^75, optimal B2 is 48*B1 = 38400000.
[Work thread Apr 21 18:59] Chance of a new factor assuming no ECM has been done is 0.437%
[Work thread Apr 21 18:59] D: 630, relative primes: 1683, stage 2 primes: 2278054, pair%=95.13
[Work thread Apr 21 18:59] Using 29996MB of memory.
[Work thread Apr 21 18:59] Stage 2 init complete. 4963 transforms. Time: 18.917 sec.
[Work thread Apr 21 19:00] M42600379 stage 2 is 0.77% complete. Time: 28.566 sec.

[Work thread Apr 21 19:58] M42600379 stage 2 is 99.22% complete. Time: 28.586 sec.
[Work thread Apr 21 19:59] M42600379 stage 2 complete. 2499826 transforms. Time: 3571.213 sec.
[Work thread Apr 21 19:59] Stage 2 GCD complete. Time: 9.984 sec.
[Work thread Apr 21 19:59] M42600379 completed P+1, B1=800000, B2=38400000, Wi8: 4C474AA4
[Comm thread Apr 21 19:59] Sending result to server: UID: xxx/GCE_4, M42600379 completed P+1, B1=800000, B2=38400000, Wi8: 4C474AA4
[/CODE]

 firejuggler 2021-04-21 20:17

Would the sub-100k exponents (those with no known factors) get any help from P+1 (those which were ECM'ed to t60 or above)?

 Prime95 2021-04-21 20:19

15.75M expo, 2^69 TF

[CODE][Apr 21 16:14] P+1 on M15750199 with B1=250000, B2=5000000
[Apr 21 16:14] Chance of finding a factor assuming no ECM has been done is an estimated 0.366%

[Apr 21 16:15] P+1 on M15750199 with B1=250000, B2=10000000
[Apr 21 16:15] Chance of finding a factor assuming no ECM has been done is an estimated 0.441%

[Apr 21 16:15] P+1 on M15750199 with B1=250000, B2=20000000
[Apr 21 16:15] Chance of finding a factor assuming no ECM has been done is an estimated 0.52%

[Apr 21 16:15] P+1 on M15750199 with B1=500000, B2=10000000
[Apr 21 16:15] Chance of finding a factor assuming no ECM has been done is an estimated 0.56%

[Apr 21 16:16] P+1 on M15750199 with B1=500000, B2=20000000
[Apr 21 16:16] Chance of finding a factor assuming no ECM has been done is an estimated 0.666%

[Apr 21 16:16] P+1 on M15750199 with B1=500000, B2=40000000
[Apr 21 16:16] Chance of finding a factor assuming no ECM has been done is an estimated 0.779%

[Apr 21 16:16] P+1 on M15750199 with B1=1000000, B2=20000000
[Apr 21 16:16] Chance of finding a factor assuming no ECM has been done is an estimated 0.811%

[Apr 21 16:17] P+1 on M15750199 with B1=1000000, B2=40000000
[Apr 21 16:17] Chance of finding a factor assuming no ECM has been done is an estimated 0.957%

[Apr 21 16:17] P+1 on M15750199 with B1=1000000, B2=80000000
[Apr 21 16:17] Chance of finding a factor assuming no ECM has been done is an estimated 1.11%

[/CODE]

 Prime95 2021-04-21 20:29

4.7M expo, TF to 2^68

On further thought, exponent size does not affect the probability calculations -- only TF, B1, B2 do.

[CODE][Apr 21 16:25] P+1 on M4715201 with B1=250000, B2=5000000
[Apr 21 16:25] Chance of finding a factor assuming no ECM has been done is an estimated 0.419%

[Apr 21 16:25] P+1 on M4715201 with B1=250000, B2=10000000
[Apr 21 16:25] Chance of finding a factor assuming no ECM has been done is an estimated 0.503%

[Apr 21 16:25] P+1 on M4715201 with B1=250000, B2=20000000
[Apr 21 16:25] Chance of finding a factor assuming no ECM has been done is an estimated 0.593%

[Apr 21 16:26] P+1 on M4715201 with B1=500000, B2=10000000
[Apr 21 16:26] Chance of finding a factor assuming no ECM has been done is an estimated 0.633%

[Apr 21 16:26] P+1 on M4715201 with B1=500000, B2=20000000
[Apr 21 16:26] Chance of finding a factor assuming no ECM has been done is an estimated 0.753%

[Apr 21 16:26] P+1 on M4715201 with B1=500000, B2=40000000
[Apr 21 16:26] Chance of finding a factor assuming no ECM has been done is an estimated 0.879%

[Apr 21 16:26] P+1 on M4715201 with B1=1000000, B2=20000000
[Apr 21 16:26] Chance of finding a factor assuming no ECM has been done is an estimated 0.91%

[Apr 21 16:26] P+1 on M4715201 with B1=1000000, B2=40000000
[Apr 21 16:26] Chance of finding a factor assuming no ECM has been done is an estimated 1.07%

[Apr 21 16:27] P+1 on M4715201 with B1=1000000, B2=80000000
[Apr 21 16:27] Chance of finding a factor assuming no ECM has been done is an estimated 1.24%
[/CODE]

 chalsall 2021-04-21 20:31

[QUOTE=Prime95;576430]On further thought exponent size does not affect probability calculations. Only TF,B1,B2.[/QUOTE]

Do you have any gut feeling for how accurate the estimates are?

 Prime95 2021-04-21 20:36

[QUOTE=firejuggler;576428]Would the sub 100k exponent(those with no know factors) get any help with PP1 (those which were ecm-ed to T60 or above)?[/QUOTE]

One P+1 run would be just like running one more ECM curve. The P+1 "curve" stands little chance of success just like running one ECM curve. But at least P+1 would be faster than the one ECM curve.

So, no, you're not likely to find a factor, but yes it is worth doing. Again, if the current ECM level is B1=44M then I'd do at least P+1 B1=500M to take advantage of P+1's faster stage 1. Also, stay away from the really small exponents (say sub-50K) where GMP-ECM with its FFT stage 2 would be the better choice.

 Prime95 2021-04-21 20:40

[QUOTE=chalsall;576431]Do you have any gut feeling for how accurate the estimates are?[/QUOTE]

Unless I've misunderstood the math (not an insignificant possibility) or there is a bug, the estimate should be spot on. I'm using Mihai's P-1 smoothness probability estimator. Whereas P-1 gets 20+ free bits of smoothness due to the known 2*p in factors, P+1 gets only a couple of free bits due to Peter Montgomery's ingenuity.
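The "free 2*p" is easy to see on small textbook Mersenne factors: every prime factor q of M(p) has the form q = 2*k*p + 1, so q-1 automatically carries the divisor 2*p and only k needs to be smooth. P+1 works on q+1, which gets no such guarantee. A quick check:

```python
# Sketch: for a Mersenne number M(p) = 2^p - 1, any prime factor q satisfies
# q = 2*k*p + 1, so q-1 comes with a guaranteed divisor of 2*p ("free bits"
# of smoothness for P-1). P+1 works on q+1, which gets no such guarantee.
known = [(11, 23), (11, 89), (23, 47)]  # small textbook (p, q) with q | 2^p - 1

for p, q in known:
    assert pow(2, p, q) == 1        # q really divides 2^p - 1
    assert (q - 1) % (2 * p) == 0   # q - 1 = 2*k*p: the free factor for P-1
```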

 petrw1 2021-04-21 21:05

I see these results

[CODE]Architects Cubed rdt1 42600139 NF-PP1 2021-04-21 18:53 0.0 3.2837 Start=2/7, B1=400000, B2=16800000
Architects Cubed GCE_2 42600221 NF-PP1 2021-04-21 19:09 0.0 4.0777 Start=2/7, B1=500000, B2=22000000
Architects Cubed GCE_1 42600289 NF-PP1 2021-04-21 19:28 0.0 5.0302 Start=2/7, B1=600000, B2=27600000
Architects Cubed GCE_3 42600367 NF-PP1 2021-04-21 19:50 0.0 5.9485 Start=2/7, B1=700000, B2=32900000
Architects Cubed GCE_4 42600379 NF-PP1 2021-04-21 19:59 0.0 6.8896 Start=2/7, B1=800000, B2=38400000[/CODE]

 James Heinrich 2021-04-21 21:08

[QUOTE=James Heinrich;576405]This is crude, but I have created a placeholder page to list what known P+1 efforts have been recorded:
[url]https://www.mersenne.ca/pplus1.php[/url]

Note of course that my data will always be up to 24h out of date (synch'ed just after midnight UTC).
Will get a better report on mersenne.org (at least if George can email me where to find the P+1 data).[/QUOTE]There is now a version on mersenne.org with access to live data. Still needs to be prettied up with filtering parameters and such, but it's a start:
[url]https://www.mersenne.org/report_pplus1/[/url]

 kruoli 2021-04-21 21:49

Thanks for the page! :smile: Maybe you could add a NF/F column? I'm eager to see when the first new factor gets found with P+1.

 James Heinrich 2021-04-21 21:57

[QUOTE=kruoli;576442]Thanks for the page! :smile: Maybe you could add a NF/F column? I'm eager to see when the first new factor gets found with P+1.[/QUOTE]The mersenne.ca page includes factors, if any.
The mersenne.org data table George pointed me to does not include information about factors; I'll have to ask him about that.

 chalsall 2021-04-22 00:13

[QUOTE=James Heinrich;576439]There is now a version on mersenne.org with access to live data. Still needs to be prettied up with filtering parameters and such, but it's a start:
[url]https://www.mersenne.org/report_pplus1/[/url][/QUOTE]

Coolness! Thanks!!!

And you know where this leads, don't you...?

You'll now have to do a cost/benefit analysis to determine where the economic cross-over points are between ECM, deep TF'ing, deep P-1'ing, and this new P+1!!! As a function of 0.1M range, please... :smile:

 James Heinrich 2021-04-22 00:19

[QUOTE=chalsall;576450]You'll now have to do a cost/benefit analysis to determine where the economic cross-over points are between ECM, deep TF'ing, deep P-1'ing, and this new P+1!!! As a function of 0.1M range, please... :smile:[/QUOTE]Easily done. P+1 has [I]never[/I] found a Mersenne factor, therefore optimal effort to spend on it is zero. :razz:

 petrw1 2021-04-22 00:47

[QUOTE=chalsall;576413]OK... Here are the log entries from my fastest machine for those who understand the maths behind all this. The probability percentage is indeed low... This is for [URL="https://www.mersenne.org/report_exponent/?exp_lo=42600139&full=1"]42600139[/URL].

[CODE]

[Work thread Apr 21 14:13] With trial factoring done to 2^75, optimal B2 is 42*B1 = 16800000.
[Work thread Apr 21 14:13] Chance of a new factor assuming no ECM has been done is 0.274%
[Comm thread Apr 21 14:48] Sending result to server: UID: wabbit/rdt1, M42600139 completed P+1, B1=400000, B2=16800000, Wi8: 4F3C1C20
[/CODE][/QUOTE]

If I have the gozintas correct, then if you do all 2050 42.6M exponents with these bounds you should find 5 or 6 factors for 6,731 GhzDays -- about 1,200 per factor.

There are only about 600 exponents left that I could still reasonably P-1 with aggressive bounds for an expected success rate of about 1.5% (9 factors) for about 5,000 GhzDays; about 550 per factor.
After that P-1 becomes very expensive with the success rate dropping to about 0.5%; and 1,700+ GhzDays per factor.

By comparison, finding the remaining 47 factors via TF would take about 800,000 GhzDays.

So...the only actual comparison I can do is from my own farm:

Option 1: If I do all the remaining with TF it would take 170 days for my 2080Ti.

Option 2: If I first complete the above 600 P-1 assignments that would take about 18 days on my 20 cores (5 PCs). 290 GhzDays/Day
I then TF for the remaining 38 factors in 109 days.
Option 2 total is 127 days.

Option 3: If I complete the P-1 in 18 days.
Then I do the P+1 (6,731 GhzDays), that is another 23 days.
Now if I do find 15 factors by P+/-1, the remaining 32 TF factors would take about 400,000 GhzDays, or 85 calendar days.
Option 3 total is 126 days.

So personally the P+1 benefit seems minimal.

However, if my farm had fewer CPUs, or conversely if I had a 1080Ti instead of a 2080Ti, the bottom lines would be quite different.

Does anyone see it different?
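The three options above reduce to simple sums of the quoted day counts (farm-specific figures from this post, not general estimates); a sketch:

```python
# Sketch of the calendar-day totals for the three options above,
# using the figures quoted in the post (specific to this farm).
option1 = 170            # TF everything on the 2080Ti
option2 = 18 + 109       # P-1 first (18 days), then TF for the remaining 38 factors
option3 = 18 + 23 + 85   # P-1, then P+1 (6,731 GhzDays ~ 23 days), then TF
```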

 VBCurtis 2021-04-22 02:31

[QUOTE=petrw1;576453]Does anyone see it different?[/QUOTE]

Seems your personal goals are limited by GPU time, so option 3 appears fastest -- you can run CPU tasks and GPU tasks at the same time and finish in 85 days. But then, you could also add 10% to the remaining P-1 bounds and maybe still find factors faster than TF, further reducing required GPU time.

 petrw1 2021-04-22 02:52

[QUOTE=VBCurtis;576460]Seems your personal goals are limited by GPU time, so option 3 appears fastest- you can run CPU tasks and GPU tasks at the same time and finish in 85 days. But then, you could also add 10% to remaining P-1 bounds and maybe still find factors faster than TF, further reducing required GPU time.[/QUOTE]

Makes sense.
At the present time there are more GPU contributions than CPU.
In that state it is better to NOT go to extreme P+/-1 bounds.
If GPU resources diminish, then I will lean toward MORE P+/-1.

 James Heinrich 2021-04-22 04:16

[QUOTE=kruoli;576442]Thanks for the page! :smile: Maybe you could add a NF/F column? I'm eager to see when the first new factor gets found with P+1.[/QUOTE]There is now a factor column.

 ATH 2021-04-22 07:01

[QUOTE=James Heinrich;576452]Easily done. P+1 has [I]never[/I] found a Mersenne factor, therefore optimal effort to spend on it is zero. :razz:[/QUOTE]

User "nordi" found the first factor: [M]287873[/M] : 167460871862758047584571103871

30 digits and 97.1 bits, nice. From the timestamps it was the 92nd reported curve, but he reported 52 more in the same batch after the factor, so really it was the first factor in 144 curves from all up to and including 2021-04-22 06:48:xx.
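The reported factor is easy to sanity-check in a few lines, and the size matches the "30 digits and 97.1 bits" noted above:

```python
# Sketch: verify the first P+1 factor reported above.
import math

q = 167460871862758047584571103871   # reported factor of M287873
p = 287873

assert pow(2, p, q) == 1             # q divides 2^287873 - 1
assert len(str(q)) == 30             # 30 digits, as noted
assert 97.0 < math.log2(q) < 97.2    # ~97.1 bits, as noted
assert (q - 1) % (2 * p) == 0        # Mersenne factors have the form 2*k*p + 1
```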

 axn 2021-04-22 10:08

Crunching the numbers on chalsall's GCE runs:
[CODE]6532 0.321 0.176913655848132
7685 0.364 0.170513988288874
8956 0.401 0.161188030370701
9502 0.437 0.165565144180173
[/CODE]
That is Runtime(s), Prob(%), Prob/Hr. As you can see, with increased B1 there is a slight drop in the Prob/Hr metric, but it is fairly flat. That means we could probably go with higher B1/B2 and still be similarly productive. However, it is still very low -- 1 factor every 600 hours on a 4-core GCE machine.
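The Prob/Hr column is just the probability divided by the runtime in hours; recomputing it from the first two columns:

```python
# Sketch: recompute the Prob/Hr column from the Runtime(s) and Prob(%)
# columns of the table above.
runs = [(6532, 0.321), (7685, 0.364), (8956, 0.401), (9502, 0.437)]

prob_per_hr = [prob * 3600 / secs for secs, prob in runs]
# first entry is ~0.1769 %/hr, matching the table above
```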

One thing that puzzles me is the fairly low B2/B1 ratio. Given how much slower P+1 stage 1 is compared to P-1 stage 1, I would've expected the ratio to be a lot higher. I would like to try out higher B2/B1 ratios (maybe 100x). Give me some time to get some probability estimates using the higher ratio. I have enough runtime data to accurately model Prob/Hr on that GCE machine -- I just need the probability figures.

 axn 2021-04-22 11:46

1 Attachment(s)
So after further crunching, I noticed that the GCE machines were producing inconsistent timings. After correcting for that, there was a much more pronounced loss of efficiency (prob/hr) at higher B1. Considering that, I think B1=600k is about as high as I would recommend.
The good news is that I made a mistake in the previous assignments: I kept the TF depth at 75, when the actual depth for the 42.6 range should've been 74. That improves the probability a bit.

Keeping that in mind, here are the updated assignments. I am still leaving B2 selection up to P95; there was no improvement going to a larger B2. I have removed the 5 that were already completed.
The probability of success is about 1 factor in 500 hours of a 4 core GCE (give or take - there is natural variability in machine performance). It is up to you whether you consider this worthwhile.

 masser 2021-04-22 14:24

[QUOTE=VBCurtis;576460]Seems your personal goals are limited by GPU time, so option 3 appears fastest- you can run CPU tasks and GPU tasks at the same time and finish in 85 days. But then, you could also add 10% to remaining P-1 bounds and maybe still find factors faster than TF, further reducing required GPU time.[/QUOTE]

I second the recommendation to run CPU and GPU tasks simultaneously. Regarding the P+1, I see two arguments for it. First, in some domains, tasks that take 23 days to save 1 day are definitely valued. Second, P+1 is new for the project, so collecting some data at this stage could be very helpful to the project.

One more reminder about your estimates (although I'm sure you know this): each task is slightly parasitic to the others, lowering the probabilities or number of factors the rest will find. For instance, TF will find factors that P+1 or P-1 might have found. Of course, for any given factor one method will be the quickest way to find it; too bad we don't know which in advance.

 chalsall 2021-04-22 14:35

[QUOTE=axn;576477]The probability of success is about 1 factor in 500 hours of a 4 core GCE (give or take - there is natural variability in machine performance). It is up to you whether you consider this worthwhile.[/QUOTE]

Thank you for your efforts. But for my ranges, I think doing deep P-1'ing is more cost-effective.

 axn 2021-04-22 17:14

[QUOTE=chalsall;576495]Thank you for your efforts. But for my ranges, I think doing deep P-1'ing is more cost-effective.[/QUOTE]

Understood.

 lycorn 2021-04-23 08:36

I noticed some 69->70 bits TF work in the 16M range, so to avoid toe stepping please note that I have now taken the remaining 16.2x M range from 69 to 70 bits.

 lycorn 2021-04-23 13:55

Clarification: work still in progress. 2/3 days to go.

 petrw1 2021-04-24 17:47

April 24, 2021 Update

22 more ranges cleared:
3.5, 3.8, 4.6, 5.0, 7.5, 8.0, 8.3,
10.3, 17.2, 17.8, 19.6, 23.1, 27.1, 28.3, 28.4, 28.7,
31.5, 34.7, 38.7, 40.1, 43.0, 43.3

TOTALS to date:
267 total ranges cleared or 53.72%
1,994 more factored (30,731)....55.64% total factored.

My current activity/status:
There are 3 ranges remaining in 4xM. HOWEVER ... they are stubborn; close to 50 each remaining.
There seems to be a reluctance (rightly so) to hit them with TF.
Therefore, I will try more P-1 and maybe some P+1 (a [URL="https://www.mersenneforum.org/showpost.php?p=576305&postcount=150"]new feature here[/URL]) to try to get them under 40.
Then TF to 76 should do it.

I've started deep P-1 in 3xM from highest to lowest. 39.6 will be done P-1 in a week or so.
Then I'll move back to the 3 remaining 4xM for a month or two.

There continues to be good P1 and TF work in all remaining ranges below 30.0M.

Thanks all

 firejuggler 2021-04-27 21:51

So, about half the B1 of a pm1 run? no matter how 'high' the exponent is? ie 600k for my current 23.5M range?

 petrw1 2021-04-28 01:26

[QUOTE=firejuggler;577034]So, about half the B1 of a pm1 run? no matter how 'high' the exponent is? ie 600k for my current 23.5M range?[/QUOTE]

Based on what I've read and marginally understood, it would seem that for exponents this high P+1 is not very effective (the expected cost per factor, in GHz-days, is quite a bit higher than for P-1). It could be used as a last resort if decently aggressive P-1 and TF fall short.

 ATH 2021-04-28 01:53

Sorry I moved all our posts again to a new thread:

Somehow I thought it would fit in here, but we are just interrupting your group goals.

 petrw1 2021-04-28 01:57

[QUOTE=ATH;577070]Sorry I moved all our posts again to a new thread:

Somehow I thought it would fit in here, but we are just interrupting your group goals.[/QUOTE]

Don't worry about interrupting. It just might help.

 lycorn 2021-05-01 17:31

Update on ranges I´m working on:

16M range: taking the remaining exponents at 69 bits to 70 bits

26.95x xxx to 27M: from 70 to 71 bits (again, I was unable to reserve the range, if bothering someone, pls give me a shout and I´ll stop).

There are now just [B]eleven[/B] 1M ranges < 1000M with 21000 or more exponents to factor. I´ll keep on doing some work there.

 petrw1 2021-05-01 21:43

[QUOTE=lycorn;577380]

26.95x xxx to 27M: from 70 to 71 bits (again, I was unable to reserve the range, if bothering someone, pls give me a shout and I´ll stop)..[/QUOTE]

This shows none.
[url]https://www.mersenne.org/assignments/?exp_lo=26950000&exp_hi=27000000[/url]

 lycorn 2021-05-02 08:33

True. And that's the problem: the server, for some reason, won't allow TF reservations in some ranges. But there may be people "informally" working there (read: doing unregistered work). That's why I asked.

 axn 2021-05-02 09:56

There has been no activity (factors found and/or exponents TF'ed) in the last 60 days in that region: [url]https://www.mersenne.ca/status/tf/0/60/5/2690[/url]

 lycorn 2021-05-02 10:11

Good. Hopefully you'll see some over the next couple of days, if the Colab powers are in a good mood. 😏

 axn 2021-05-06 11:24

Stick a fork in it, 'cause ...

... 3.4 is done.

Got some help from ECM as the range is now being worked in GIMPS.

Once I'm done with the current batch of P-1 (another 6 days), I'll be moving on to 3.7

 petrw1 2021-05-06 14:53

[QUOTE=axn;577789]... 3.4 is done.

Got some help from ECM as the range is now being worked in GIMPS.

Once I'm done with the current batch of P-1 (another 6 days), I'll be moving on to 3.7[/QUOTE]

:tu:

 lycorn 2021-05-11 07:23

I´ve finished TFing the 16M range to 70 bits. 113 new factors found. The whole range is up for grabs, 21032 to go.
Planning to move to 11.9M shortly.
Still fiddling with 26.9, using Colab. Slow progress, as the availability of GPUs is not great.

 petrw1 2021-05-11 14:31

[QUOTE=lycorn;578191]I´ve finished TFing the 16M range to 70 bits. 113 new factors found. The whole range is up for grabs, 21032 to go.
Planning to move to 11.9M shortly.
Still fiddling with 26.9, using Colab. Slow progress, as the availability of GPUs is not great.[/QUOTE]

The ranges below 30M can certainly benefit from more TF, but from here on more of the factors will be found more easily via P-1.

Thanks

 lycorn 2021-05-13 17:51

The 26.9 range is free. I took some exponents to 71 bits, the rest is at 70, available for whoever feels like crunching them.

 masser 2021-05-14 04:01

30M has less than 200,000 unfactored exponents!

 petrw1 2021-05-14 04:05

[QUOTE=masser;578371]30M has less than 200,000 unfactored exponents![/QUOTE]

Yeah!!! 2 to go.

 axn 2021-05-14 04:46

[QUOTE=petrw1;578372]Yeah!!! 2 to go.[/QUOTE]

In the last 365 days,
[CODE]10M -1,677
20M -2,326[/CODE]
We need respectively 10k/8k more in those ranges (at all resolutions, 10M, 1M and 0.1M)

 De Wandelaar 2021-05-14 13:09

[QUOTE=petrw1;575294]There are only 3 ranges remaining in the 4xMillions but they are stubborn.

42.6: 47 remaining
48.4: 49
49.6: 49

I have done aggressive P1 on these 3 ranges.
Every exponent is P1'd to at least a 3.5% factor rate; many to a 5.25% factor rate.
I estimate that to continue P1 would require about 500GhzDays per P1 factor.

Therefore, I am suggesting that the preferred next step is TF with a nice GPU farm.
They are currently factored to 74 bits.
To complete these ranges will require full TF to 76 bits; then about half the exponents 77 bits. A total of about 2.5M GhzDays of TF.

Thanks for everyone's help.[/QUOTE]

I'll work on the 42.6 range. There was no TF activity in this range in the last 12 months. I can allocate roughly 2,500 GHz-days per day.

 axn 2021-05-14 14:20

[QUOTE=De Wandelaar;578384]I'll work on the 42.6 range. There was no TF activity in this range in the 12 last months. I can allocate +/- 2.500 GHz/day per day.[/QUOTE]

There are 2043 numbers in that range. TF from 74-75 takes about 90 GHd each, so you should be able to complete the range in 10-11 weeks.

Based on the prior P-1 done, you should find a factor every ~115 numbers (1 every 4 days), or about 15-19 for the whole range. 75-76 will take double that time and find a similar number of factors.

BTW, 49.6 might be easier as the TF will be faster, and you're likely to find more factors as the P-1 done is slightly less.
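A back-of-the-envelope check of those figures (the 90 GHd per assignment and 2,500 GHz-days/day throughput are the numbers from the posts above):

```python
# Rough TF schedule for the 42.6M range, using figures from the thread:
candidates = 2043      # unfactored exponents remaining in 42.60-42.70
ghd_per_test = 90      # GHz-days for one 74->75 bit TF assignment
throughput = 2500      # GHz-days per day available

days = candidates * ghd_per_test / throughput
print(f"{days:.0f} days (~{days / 7:.1f} weeks)")   # ~74 days, ~10.5 weeks

# ~1 factor per 115 exponents, given the deep P-1 already done:
print(f"~{candidates / 115:.0f} factors expected")  # ~18
```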

 petrw1 2021-05-14 14:29

[QUOTE=De Wandelaar;578384]I'll work on the 42.6 range. There was no TF activity in this range in the 12 last months. I can allocate +/- 2.500 GHz/day per day.[/QUOTE]

Thanks. I'm doing P-1 on these 3 ranges for another month, hoping to get them all under 40 for you. We can do it concurrently; I'll focus on 42.6 first.

I'll start 42.6 mostly on the low end... if you work from the high end we won't step on each other's toes.
I'll be done in under 2 weeks.

PS looking at recent results I can see that Anton Repko is dabbling in 49.6.

 De Wandelaar 2021-05-14 15:30

1 Attachment(s)
[QUOTE=axn;578390]There are 2043 numbers in that range. TF from 74-75 takes about 90 GHd, so you should be able to complete the range in 10-11 weeks.

Based on prior P-1 done, you should find a factor every 115 numbers (1 every 4 days) or about 15-19 for the whole range. 75-76 will take double that time and find similar number of factors.

BTW, 49.6 might be easier as the TF will be faster, and you're likely to find more factors as the P-1 done is slightly less.[/QUOTE]

Thanks for your feedback, axn. I was aware of the duration of the process, but honestly not of the impact of the aggressive P-1 on the success rate.

I hope it will progress a little faster than foreseen: it actually takes about 45 min to process one case (see attachment), so 30-32 per day without too many interruptions. With a bit of luck, 9-10 weeks could be enough.

My fear is, in fact, a too-hot summer...

 De Wandelaar 2021-05-14 15:35

[QUOTE=petrw1;578391]Thanks. I'm doing P1 on these 3 ranges for another month. hoping to get them all under 40 for you. We can do it concurrently. I'll focus on 42.6 first.

I'll start --- mostly--- 42.6 on the low end....if you work from the high end we won't step on each others toes.
I'll be done in under 2 weeks.

PS looking at recent results I can see that Anton Repko is dabbling in 49.6.[/QUOTE]

OK, Wayne. I was already started on the low end but I will stop and begin again from the high end.
Yves

 axn 2021-05-15 03:03

[QUOTE=De Wandelaar;578395]I hope it will progress a little bit faster than foreseen : it takes actually about 45 min to process one case (see attach), so 30-32 per day without too much interruptions. With a little bit luck 9-10 weeks could be enough.[/quote]
You're getting about 2,800+ GHz-days/day, so yes, it should finish faster.

[QUOTE=De Wandelaar;578395]My fear is in fact a too hot summer ...[/QUOTE]
Typically, you should be able to substantially decrease power usage with only a modest decrease in performance by setting max clock / power limit. That should allow you to continue crunching thru a hot summer.

 De Wandelaar 2021-05-15 06:08

[QUOTE=axn;578446]
Typically, you should be able to substantially decrease power usage with only a modest decrease in performance by setting max clock / power limit. That should allow you to continue crunching thru a hot summer.[/QUOTE]

My power limit is already set at 60%; otherwise I could deliver more than 3,300 GHz-days/day (Quadro RTX 5000).
I try to keep the GPU temperature between 65° and 70° (heat in the room, fan noise, lifespan, plus electricity). I think 54% power limit is the absolute minimum I can set with Afterburner.

Effectively, the performance decrease is quite modest (~17%) compared with the 40% power reduction.

 axn 2021-05-15 14:14

You can further drop power usage by limiting the max clock speed.

For my 1660 Ti, I've set power limit to 70w. However, I'm running it with graphics clock capped at 1300 MHz, dropping the power to 44w (~40% drop) while dropping performance by only 20%.
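For anyone wanting to do the same on Linux: on NVIDIA cards both limits can be set from the command line with nvidia-smi (needs admin rights; the wattage and clock values below are just axn's 1660 Ti numbers, not recommendations for other cards):

```shell
# Cap board power at 70 W (axn's 1660 Ti value; check your card's
# allowed range first with: nvidia-smi -q -d POWER)
sudo nvidia-smi -pl 70

# Lock graphics clocks to at most 1300 MHz (the 300 MHz floor is arbitrary)
sudo nvidia-smi -lgc 300,1300

# Undo the clock lock later
sudo nvidia-smi -rgc
```

On Windows, MSI Afterburner (mentioned above) exposes the same power-limit and clock controls through its GUI.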

 De Wandelaar 2021-05-15 15:17

[QUOTE=axn;578476]You can further drop power usage by limiting max clock speed

For my 1660 Ti, I've set power limit to 70w. However, I'm running it with graphics clock capped at 1300 MHz, dropping the power to 44w (~40% drop) while dropping performance by only 20%.[/QUOTE]

Indeed, I didn't think of that.
Thanks for the tip !

 petrw1 2021-05-24 06:11

[QUOTE=De Wandelaar;578397]OK, Wayne. I was already started on the low end but I will stop and begin again from the high end.
Yves[/QUOTE]

[CODE]
Yves de Wandelaar Manual testing 42684091 F 2021-05-24 02:53 0.0 45.3204 Factor: 26817704181153840529049 / TF: 74-75*[/CODE]

 axn 2021-05-24 07:23

He found one on the 18th.

[url]https://www.mersenne.org/report_exponent/?exp_lo=42695969&full=1[/url]

 De Wandelaar 2021-05-24 14:29

[QUOTE=axn;578969]He found one on the 18th.

[url]https://www.mersenne.org/report_exponent/?exp_lo=42695969&full=1[/url][/QUOTE]

Yes, two factors found so far in 321 trials.
The results are on the low side, but still reasonably consistent with axn's forecast.
Hoping the rate will be a little higher in the coming days/weeks.

 petrw1 2021-05-24 15:39

[QUOTE=axn;578969]He found one on the 18th.

[url]https://www.mersenne.org/report_exponent/?exp_lo=42695969&full=1[/url][/QUOTE]

:redface:

 LaurV 2021-05-26 17:09

Ok, give me a range. Nice if you have a list, but if not, I will grab the bitlevels from PrimeNet myself. Or... is it ok if I try raising 31.9 to 75? It seems untouched for 30 days.

 petrw1 2021-05-26 17:25

[QUOTE=LaurV;579142]Ok, give me a range. Nice if you have a list, but if not, I will grab the bitlevels from PrimeNet myself. Or... is it ok if I try raising 31.9 to 75? It seems untouched for 30 days.[/QUOTE]

If you want to do TF that would be great.
I'm TFing in the 2x.xM ranges currently so the following are all up for grabs.

All of these ranges will eventually need TF75.
35.3
35.1
34.4
33.3
32.7
31.9
31.2
30.8
30.5

The first 3 may need a little bit of TF76 after I P-1 them harshly.
I plan to have P-1 done for these 3x.xM ranges by the end of the year.

If you ever feel like helping with P-1 let me know.

Thanks, Grasshopper

 LaurV 2021-05-26 18:00

Raising a 31.9M exponent to 75 takes 41 minutes on a V100 (Colab) and [STRIKE]38[/STRIKE] 36 minutes on my local 2080Ti's (the "tits" are water cooled and a bit OC'd). So, if I get lucky, that would be about 12 days. Meantime I want to see how I can convince Chris to feed me his 26-29M range, which I see [URL="https://www.gpu72.com/reports/available/"]available[/URL]. If so, I will get rid of the "handwork" and switch to his feed instead. Let's see; hopefully we can find a few factors either way.

For P-1, what ranges do you have in mind, and what chances (in percent), assuming I can find the hardware and the mood? (Just asking.)

Edit: ok, Chris "convinced". It seems it is as easy as going to your notebook access keys table and selecting "DC Already Done" in the second column; then 26M to 71 and 72 start coming. For the local cards, we will kick Misfit. We are set.

 petrw1 2021-05-26 19:28

[QUOTE=LaurV;579149]Raising a 31.9M to 75 takes 41 minutes on a V100 (colab) and 38 minutes on my local 2080Ti's (the "tits" are water cooled and a bit OC'd). So, if I get lucky, that would be about 12 days. Meantime I want to see how I can convince Chris to feed me with his 26-29M range, which I see [URL="https://www.gpu72.com/reports/available/"]available[/URL]. If so, I will get rid of the "handwork", and I will switch to his feed instead. Let's see, hopefully we can find few factors, either way.

For P-1, what ranges do you have in mind, and what percents of chances, assuming I can find the hardware and mood? (Just asking)

Edit, ok, Chris "convinced", it seems it is as easy as going to your notebook access keys table and select "DC Already Done" in the second column, than 26M to 71 and 72 start coming. For the local, we will kick Misfit. We are set.[/QUOTE]

Thanks. I assume you are using the 2047 classes version of mfaktc?
Your choice whether you work on 31.9 or what Chris can feed you.

If you feel compelled to do P-1, I'm targeting 4.5 to 5.2% (+3.2 to +3.8%) success rates in the 3x.xM ranges.
I'm using B1/B2 of 1M/30M to 1.5M/45M.
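For reference, a worktodo.txt entry for such a P-1 run would look like the following (the exponents here are placeholders; substitute real unfactored prime exponents from the range — the fields, matching the Pplus1 lines earlier in the thread, are k,b,n,c,B1,B2 and the TF depth already done):

```
Pminus1=1,2,35100001,-1,1000000,30000000,74
Pminus1=1,2,35100019,-1,1500000,45000000,74
```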

 chalsall 2021-05-26 19:29

[QUOTE=LaurV;579149]Edit, ok, Chris "convinced", it seems it is as easy as going to your notebook access keys table and select "DC Already Done" in the second column, than 26M to 71 and 72 start coming. For the local, we will kick Misfit. We are set.[/QUOTE]

You never do anything only a little bit, do you... :wink:

I see three machines using the GPU72 Colab API. Two without trickery; I'll have to drill down on how you injected the 31.9M to 75 work... And how to get the results to auto-submit without an AID... :tu:

 LaurV 2021-05-27 02:02

[QUOTE=chalsall;579153]You never do anything only a little bit, do you... :wink:

I see three machines using the GPU72 Colab API. Two without trickery; I'll have to drill down on how you injected the 31.9M to 75 work... And how to get the results to auto-submit without an AID... :tu:[/QUOTE]

The Latins used to say "una hirundo non facit ver" (one swallow doesn't make a spring), to which I would add "modo duo?" (how about two?).

[URL="https://www.mersenne.org/report_exponent/?exp_lo=26769991&full=1"]swallow[/URL], [URL="https://www.mersenne.org/report_exponent/?exp_lo=26775113&full=1"]swallow[/URL]

(haha, no pun intended)

------------------
(** The range you assign seems luckier than 31M, hehe, but the real reason is that 26.7M to 72 only takes under 7 minutes per assignment, so about 6 of them can be done in the time one 31.9M-to-75 takes.)

Edit: the 31.9M results stuck in your lobby can be deleted. I reported them manually (copy/paste from your lobby to the manual results page). But honestly, you should make the script report everything in the lobby to the server, without parsing keys. If somebody puts scrap there (i.e. not TF results, not prime-related, etc.), there are other ways to deal with it (like blocking the account, banning the user, kicking him in the nose, putting a finger in his eye, etc.).

 LaurV 2021-05-27 02:31

[URL="https://www.mersenne.org/report_exponent/?exp_lo=31905917&full=1"]swallow[/URL].

:davar55:

:razz:

 lycorn 2021-05-31 08:37

Currently working in the 15M range, 69 -> 70 bits.
Should take a couple of weeks. If someone wants to do some work there, please start from 15.0 so as to avoid any toe stepping (I´m now crunching 15.5 and 15.6 sub ranges and will work my way down).
It would be a good thing if Primenet allowed us to reserve exponents for TF in these ranges (say, below 20M or so).

 petrw1 2021-06-01 02:10

[QUOTE=De Wandelaar;578397]OK, Wayne. I was already started on the low end but I will stop and begin again from the high end.
Yves[/QUOTE]

It's all yours now... I've done all the P-1 that is reasonable.
Thanks again

By about mid-June I will have also completed all reasonable P1 for 48.4 and 49.6.
I am hoping to have both ranges down to 40 or less remaining.

The only options will then be TF ... hopefully to 76 bits will do it.
Or ECM or P+1, though neither is very efficient for exponents this high.

Thanks all

 De Wandelaar 2021-06-01 06:36

[QUOTE=petrw1;579616]It's all yours now...I've done all the P1 that is reasonable
Thanks again

By about mid-June I will have also completed all reasonable P1 for 48.4 and 49.6.
I am hoping to have both ranges down to 40 or less remaining.

The only options will then be TF ... hopefully to 76 bits will do it.
Or ECM or P+1 though neither are very efficient for exponents this high

Thanks all[/QUOTE]

28 % of the 42.6 range (74->75 bits) done since 14/05. It should be completed about 20/07.
Until now, 5 factors found, 2030 unfactored remaining.

 LaurV 2021-06-02 14:57

[QUOTE=petrw1;464177]Breaking it down I'm thinking if each 100M range has less than 2M unfactored we have the desired end result.
Similarly if each 10M range has less than 200K unfactored...
or each 1M range has less than 20K unfactored...
or each 100K range has less than 2,000 unfactored.
[/QUOTE]
So, why stop there? :razz:
James' site allows x.xxM ranges for smaller exponents (and only xx.x for larger expos).
So, following the idea, every x.xxM range (i.e. 10k) should have less than 200 unfactored expos.

Arriba, arriba, I went to [URL="https://www.mersenne.ca/status/tf/0/0/5/0"]the site[/URL], as deep as possible, until the green row became a deep-pink row, then moved upward (or rightwards) arrow by arrow. The first outlier was [URL="https://www.mersenne.ca/status/tf/0/0/5/170"]1.71M[/URL], with 201 unfactored, then in second position came [URL="https://www.mersenne.ca/status/tf/0/0/5/180"]1.89M[/URL] with 200 unfactored, so the goal is to find two factors in the first bucket and one in the second. We raised both ranges one or two bits with TF without luck; then, after a little [URL="https://www.mersenneforum.org/showthread.php?p=579399"]help from the forum[/URL], we started playing with P+1 on them ranges.

We are [URL="https://www.mersenne.org/report_exponent/?exp_lo=1891277&full=1"]DONE[/URL] with the second half of it. :party:

We are still continuing P+1 with the first half.

Meantime, we also found a lot of "swallows" (see above) in the 26M range served by Chris, over 30 factors, more than half of them with Colab instances.

Related to your wish that we move to P-1: we may try to persuade Chris to serve us P-1 assignments for our Colab instances (and we will take care of the Colab side). Maybe that's a good idea; we get rid of the headache of reserving the work, adding it to Colab, and reporting the results (right now, manually).

 chalsall 2021-06-02 17:21

[QUOTE=LaurV;579788]Related to your wish that we move to P-1, we may try to pursue Chris to serve us P-1 assignments for our colab instances (and we will take care of the colab side). Maybe that's a good idea, we get rid of the headache with reserving the work, adding it to colab, and reporting the results (right now, manually).[/QUOTE]

Doable. :chalsall:

I've actually been using my fourteen (14) CPU-only Colab instances to clean up after those who complete an FTC without first doing a P-1. Sometimes preemptively; in 103M, for example...

 pinhodecarlos 2021-06-02 20:13

Happy to join with 10 Colab sessions but will need some guidance.

 chalsall 2021-06-02 21:28

[QUOTE=pinhodecarlos;579806]Happy to join with 10 Colab sessions but will need some guidance.[/QUOTE]

Copy. Thanks. :tu:

 petrw1 2021-06-03 04:10

[QUOTE=LaurV;579788]So, why to stop there? :razz:
James' site allows x.xxM ranges for smaller exponents (and only xx.x for larger expos).
So, following the idea, every x.xxM range (i.e. 10k) should have less than 200 unfactored expos.
[/QUOTE]

I'd have to look back through this thread but someone asked the same question a few years ago.

The problem is that at this fine of a breakdown there are some serious outliers and so few exponents to work with.
Without even trying, I found for example (there are worse):
26.70M: 232 unfactored (33 to get under 200).
Each bit level of TF will find only about 2 factors with just 232 exponents.
So via TF alone that is 16 more bit levels... with luck, maybe only 14.
So that is TF 73-87.
That is almost 1,200,000 GhzDays per exponent; 8 months each with my 2080Ti.

Assuming we agree that is beyond reasonable, we need to find some factors via P-1.
Let's say we want to save 7 bits of TF, so we need about 14 P-1 factors from these 232 exponents.
We need about a 6% increase over the current P-1, so we will need to run P-1 with a 10.5% expected success rate to be safe.
That is 100 GhzDays per exponent x 232 = 23,200 GhzDays to get these 14 factors.
The better part of a year for a reasonable PC.

However there is another issue.
TF and P-1 can find the same factors, so the more P-1 that is done, the lower the chance of finding a TF factor, and vice versa.
If we were to find these 14 P-1 factors, the TF success rate would drop; in other words, it would not actually save 7 bit levels of TF.

OKAY too much blabbering; you see where I'm going.

------------------------

That all said, maybe once the sub-2000 project is complete, the hardware will allow us to reconsider sub-200.
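The TF-vs-P-1 arithmetic in the post above, sketched out (the ~2 factors per bit level and ~100 GHz-days per deeper P-1 run are the rough figures from the post, not exact values):

```python
# Rough cost of clearing 26.70M down to sub-200 (figures from the post)
unfactored = 232
needed = unfactored - 199          # 33 factors to get under 200

# TF-only route: ~2 factors per bit level at this exponent count
factors_per_bit = 2
bit_levels = needed / factors_per_bit   # ~16.5 levels -> TF 73 to ~87

# P-1 route: aim for ~14 of those factors via deeper P-1
p1_factors_wanted = 14
ghd_per_p1 = 100                        # rough deeper-P-1 cost per exponent
p1_total_ghd = ghd_per_p1 * unfactored  # 23,200 GHz-days over the range

print(bit_levels, p1_total_ghd)
```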

 LaurV 2021-06-03 13:45

[QUOTE=petrw1;579841]
The problem is that at this fine of a breakdown there are some serious outliers and so few exponents to work with.[/QUOTE]
Of course you are right, and the fact that it gets more and more difficult as you get deeper into the mud is plain to see. Getting under 20M for the whole GIMPS range is the easiest. Then, zoom in and find the outliers, magnitude by magnitude. I am still fighting with 1.71M for a while yet, but I am not going to do that forever; I also have a life :blush:
But I couldn't help boasting about the P+1 success in 1.89M and about finding over 30 factors in 26M. Right now there are more, as I moved on to 27M and found 5 factors there today too.

 LaurV 2021-06-09 14:21

yarrr :chappy:

This almost passed unobserved:
[CODE]1717043 F-PP1 Start=2/7, B1=5000000, B2=365000000, Factor: 36120234091485938570203343[/CODE]
One more to go (for which I am going to raise all 200 candidates one bit level and maybe get lucky; I'll stop if I do).

 axn 2021-06-10 05:45

3-4m is done

The last two ranges 3.7 & 3.9 were done today, thus completing 3-4m.

I had some TF help from an anonymous benefactor, which greatly accelerated the 3.9 range.

Once the pending P-1s from the 3.7/3.9 ranges are completed (a few days to a couple of weeks), I'll move over to the 4.1 range. I've been prepping that range with 68-70 TF.

 LaurV 2021-06-12 07:21

Is anybody doing P-1 in [URL="https://www.mersenne.ca/status/tf/0/7/5/3510"]this range[/URL]?

I found about 15 factors in the last 4 days, but only 11 of them are reflected in the table, so the 12th one there is from somebody else (I don't know which one). If that's a TF/P+1/ECM factor, no harm, but if it's a P-1, then we are duplicating effort. I started when 2076 candidates were left and (after a discussion with Wayne by PM) I calculated my B1 and B2 to have a 100% theoretical chance of finding 77 factors, considering an average of the TF and P-1 already done. Then I "rounded them up" to "look nice", i.e. being prime, and containing a sequence of interesting primes as a substring :razz: that's how I ended up with B1=23-29-31-9 (sorry, -31-3 was not prime), and B2=61-89-1103 (107x was not prime, and 1103 is a factor of M29!). For whoever's asking: these are nice, round numbers. Don't tell me that numerology is not catchy!

Joking apart, with the default TF of the range the two limits would have a chance of a factor in 10 trials, but considering that the range was [URL="https://www.mersenne.ca/status/tf/0/0/5/3510"]over-factored, to 74 bits[/URL], the chance is just 1 factor in [URL="https://www.mersenne.ca/prob.php?exponent=35500007&factorbits=74&b1=2329319&b2=61891103"]about 17 or 18 trials[/URL]. However, the range has had average P-1 done too, which "pulled out the low-hanging fruit", so the real chance is somewhere between 1 in 23 and 1 in 26 (depending on the P-1 already done for each; I didn't calculate exactly, only took an "eye average"). This would cover the 77 factors (which, out of 2076, means 1 in about 26.9, plus luck :razz:). So far it fits: with 77 factors to find I should find about 7.7 in every 35.1x range, and I have 15 in the first 2 ranges, plus a bit.

 axn 2021-06-12 08:20

[CODE]35130749 227470147005389851276513 2021-06-10 Sid & Andy F-PM1 B1=1500000, B2=45000000[/CODE]

 axn 2021-06-12 08:38

[QUOTE=LaurV;580767]However, the range had average P-1 done too, which "pulled out the low hanging fruits", so the real chance is somewhere at 1 in 23 to 1 in 26 (depending on the P-1 already done for each, I didn't calculate exact, only took an "eye average").[/QUOTE]

The real chance is more like 1 in 36, so you might find something like 60 factors. You could fall short by 15 factors in clearing this range with this P-1.

 LaurV 2021-06-12 12:26

Oh. Then I should either raise the limits or save the checkpoint files, to avoid duplicating work in the future if I or somebody else wishes to extend the limits. BTW, can gpuOwl extend B1? (I remember getting errors like "wrong B1, using the one from saved checkpoint" or so.)

 axn 2021-06-12 13:42

Well, if you want to live on the edge, once you're done with P-1, doing a 74-75 TF on the survivors might _just_ be enough to clear the range.

But, if you want to clean this out with just P-1, you need to increase probability by 1% point more (so, instead of 5.67%, target 6.6%).
