[QUOTE=petrw1;576382]At the present time Anon is TF'ing 31.5M. Though he may pause if he reads this thread[/QUOTE]
I got the impression he had stopped at 75. But, good point; that range can be cleared by going to 76. axn, please give me 300# in 42.6M instead. Thanks. 
1 Attachment(s)
The attached file has 153 assignments in the range 31.50-31.52, the first four of which are:
[CODE]Pplus1=1,2,31500281,1,300000,0,1,75
Pplus1=1,2,31500361,1,400000,0,1,75
Pplus1=1,2,31500457,1,500000,0,1,75
Pplus1=1,2,31500767,1,600000,0,1,75[/CODE] I have excluded those that have been TF'ed to 76 bits, as well as those that have been relatively poorly P-1'ed (the thinking being that those exponents might be better off being P-1'ed deeper rather than P+1'ed). I don't know what the optimal parameters are; instead I have gone with a range of B1 values from 300k to 600k. I am also relying on P95 to compute the optimal B2, hence B2 is set to 0. If you can run these first four assignments and report back on B2 selection, run times (B1/B2 splits) and probabilities, we can then pick the optimal parameters. The choice of B1 was based on looking at the current state of P-1 done in the range and then picking equal or lesser values (since P+1 stage 1 is about half as fast as P-1). 
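For anyone who wants to script this sort of thing: a small Python sketch (my own illustration, not the script actually used) that emits Pplus1 worktodo lines in the format above, cycling B1 through the 300k-600k steps. The exponent list here is just the four shown; the real list would come from filtering out exponents TF'ed to 76 bits or poorly P-1'ed, as described.

```python
# Sketch: generate Pplus1 worktodo lines with stepped B1 values.
# Format: Pplus1=k,b,n,c,B1,B2,nth_run,tf_bits
# B2=0 lets Prime95 pick an optimal B2; nth_run=1 gives the standard
# 2/7 starting point (matching the Start=2/7 seen in the results).

def pplus1_lines(exponents, b1_start=300_000, b1_step=100_000,
                 b1_max=600_000, tf_bits=75):
    """Yield one worktodo line per exponent, cycling B1 upward."""
    b1 = b1_start
    for p in exponents:
        yield f"Pplus1=1,2,{p},1,{b1},0,1,{tf_bits}"
        b1 = b1_start if b1 >= b1_max else b1 + b1_step

# The first four assignments from the attached file:
demo = list(pplus1_lines([31500281, 31500361, 31500457, 31500767]))
for line in demo:
    print(line)
```

Run against the four exponents above, this reproduces the four worktodo lines quoted from the attachment.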
[QUOTE=chalsall;576385]I got the impression he had stopped at 75. But, good point; that range can be cleared by going to 76.
axn, please give me 300# in 42.6M instead. Thanks.[/QUOTE] Oops, didn't see it. OK, I will whip up something in that range. But the 31.5 is still there if you need it. 
42.6 range P+1 assignments
1 Attachment(s)
Ok. Take 2. The attached file has about 300 assignments in the 42.60-42.62 range. The first five are:
[CODE]Pplus1=1,2,42600139,1,400000,0,1,75
Pplus1=1,2,42600221,1,500000,0,1,75
Pplus1=1,2,42600289,1,600000,0,1,75
Pplus1=1,2,42600367,1,700000,0,1,75
Pplus1=1,2,42600379,1,800000,0,1,75[/CODE] If you can run these first five assignments and report back on B2 selection, run times (B1/B2 splits) and probabilities, we can then optimize the parameters for the rest of the runs. 
[QUOTE=petrw1;576359]George: Your post talks about choosing B1 based on the current ECM B1.
For my case, do you have any recommendations for choosing B1 for P+1 based on how much P-1 has already been done? I am tempted to assume that P+1 will find different factors than P-1, which makes me hopeful that even smallish B1/B2 will have reasonable success. Am I totally out to lunch?[/QUOTE] We're learning about best bounds selection together. I suggest picking one exponent and trying a B1/B2 combination, plus specifying the TF bit level. Start prime95 and it will tell you the chance of finding a factor. Abort, select a different B1/B2, run prime95 and look at the chance of finding a factor. Repeat until you have a decent idea as to how bounds correlate with probability. Choose B2 somewhere between 20 and 80 times B1.

Are you out to lunch? Somewhat. P+1 will find different factors than P-1, but is [B]vastly[/B] inferior to P-1. The loss of the "free 2*p" in B1/B2 smoothness for Mersenne numbers, where factors are known to be of the form 2*k*p+1, is huge. Add on to that the 50% chance that P+1 won't find the factor even if it is B1/B2 smooth. Depending on TF levels, you're probably looking at 100-300 P+1 runs to find a single factor. I fear that for exponents above 20M you're better off extending P-1 bounds rather than doing P+1. As you gather data, you may prove my fear wrong. 
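The trial-and-error loop described here is easy to pre-plan with a script. A quick sketch (a hypothetical helper, not part of prime95) that enumerates candidate (B1, B2) pairs using the 20x-80x rule of thumb; each pair would then be tried in prime95 to read off the reported chance of finding a factor:

```python
# Sketch: enumerate candidate (B1, B2) pairs, with B2 somewhere
# between 20*B1 and 80*B1 per the rule of thumb above. The B1 values
# and multipliers chosen here are just examples.

def bound_candidates(b1_values, b2_multipliers=(20, 40, 80)):
    """Return all (B1, B2) pairs with B2 = m*B1 for each multiplier m."""
    return [(b1, m * b1) for b1 in b1_values for m in b2_multipliers]

grid = bound_candidates([250_000, 500_000, 1_000_000])
for b1, b2 in grid:
    print(f"B1={b1:>9,}  B2={b2:>11,}  (B2/B1 = {b2 // b1})")
```

This happens to generate the same nine combinations that were later tested on M31500457, M15750199 and M4715201 further down the thread.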
[QUOTE=axn;576391]If you can run these first five assignments and report back on B2 selection, run times (B1/B2 splits) and probabilities, we can then optimize the parameters for the rest of the runs.[/QUOTE]
OK. Thanks a lot! The first five are running now. Seems like there's still some Primenet work to do though... :wink: [CODE][Main thread Apr 21 17:19] Mersenne number primality test program version 30.6
[Main thread Apr 21 17:19] Optimizing for CPU architecture: Core i3/i5/i7, L2 cache size: 4x1 MB, L3 cache size: 33 MB
[Main thread Apr 21 17:19] Starting worker.
[Comm thread Apr 21 17:19] Registering assignment: P+1 M42600221
[Comm thread Apr 21 17:19] PrimeNet error 44: Invalid assignment type
[Comm thread Apr 21 17:19] ra: unsupported assignment work type: 6
[Work thread Apr 21 17:19] Worker starting
[Work thread Apr 21 17:19] Setting affinity to run worker on CPU core #1
[Work thread Apr 21 17:19]
[Work thread Apr 21 17:19] P+1 on M42600221 with B1=500000, B2=TBD
[Work thread Apr 21 17:19] Setting affinity to run helper thread 2 on CPU core #3
[Work thread Apr 21 17:19] Using AVX512 FFT length 2240K, Pass1=128, Pass2=17920, clm=4, 4 threads
[Work thread Apr 21 17:19] Setting affinity to run helper thread 3 on CPU core #4
[Work thread Apr 21 17:19] Setting affinity to run helper thread 1 on CPU core #2
[Comm thread Apr 21 17:19] Done communicating with server.
[Work thread Apr 21 17:20] M42600221 stage 1 is 0.78% complete. Time: 35.714 sec.
[Work thread Apr 21 17:20] M42600221 stage 1 is 1.71% complete. Time: 35.840 sec.
[Work thread Apr 21 17:21] M42600221 stage 1 is 2.63% complete. Time: 36.335 sec.[/CODE] 
We do know that P+1 is better than ECM: roughly the same chance of success, but several times faster.
We do know that TF becomes more and more expensive the smaller the exponent. We also know that deep P-1 has been done on almost all "small" exponents. For the PRP-CF and this 20M project, a coordinated P+1 attack on exponents below, say, 5 or 10 million should be worthwhile. It might be nice to start a new thread to do the coordination and reach a consensus on the target B1 for the various exponent ranges. 
[QUOTE=chalsall;576396]
Seems like there's still some Primenet work to do though... :wink:[/QUOTE] Primenet accepts P+1 results, but does not recognize/coordinate P+1 assignments. 
[QUOTE=chalsall;576396]OK. Thanks a lot! The first five are running now.
Seems like there's still some Primenet work to do though... :wink:[/QUOTE] [QUOTE=Prime95;576398]Primenet accepts P+1 results, but does not recognize/coordinate P+1 assignments.[/QUOTE] OK, I guess we need to add N/A at the beginning to avoid an assignment registration attempt, right? (Or UsePrimenet=0 / manually report.) 
This is crude, but I have created a placeholder page to list what known P+1 efforts have been recorded:
[url]https://www.mersenne.ca/pplus1.php[/url] Note of course that my data will always be up to 24h out of date (synch'ed just after midnight UTC). Will get a better report on mersenne.org (at least if George can email me where to find the P+1 data). 
[QUOTE=Prime95;576394]
Are you out to lunch? Somewhat. P+1 will find different factors than P-1, but is [B]vastly[/B] inferior to P-1. The loss of the "free 2*p" in B1/B2 smoothness for Mersenne numbers, where factors are known to be of the form 2*k*p+1, is huge. Add on to that the 50% chance that P+1 won't find the factor even if it is B1/B2 smooth. Depending on TF levels, you're probably looking at 100-300 P+1 runs to find a single factor. I fear that for exponents above 20M you're better off extending P-1 bounds rather than doing P+1. As you gather data, you may prove my fear wrong.[/QUOTE] Yes, it seems P-1 is still better. Though the odds of finding a factor per assignment are dropping in these stubborn ranges, they're still in the 50-80 range. 
[QUOTE=Prime95;576398]Primenet accepts P+1 results, but does not recognize/coordinate P+1 assignments.[/QUOTE]
OK... Here are the log entries from my fastest machine for those who understand the maths behind all this. The probability percentage is indeed low... This is for [URL="https://www.mersenne.org/report_exponent/?exp_lo=42600139&full=1"]42600139[/URL]. [CODE][Main thread Apr 21 13:11] Mersenne number primality test program version 30.6
[Main thread Apr 21 13:11] Optimizing for CPU architecture: Core i3/i5/i7, L2 cache size: 6x256 KB, L3 cache size: 12 MB
[Main thread Apr 21 13:11] Starting worker.
[Comm thread Apr 21 13:11] Updating computer information on the server
[Work thread Apr 21 13:11] Worker starting
[Work thread Apr 21 13:11] Setting affinity to run worker on CPU core #1
[Work thread Apr 21 13:11]
[Work thread Apr 21 13:11] P+1 on M42600139 with B1=400000, B2=TBD
[Work thread Apr 21 13:12] Setting affinity to run helper thread 5 on CPU core #6
[Work thread Apr 21 13:12] Setting affinity to run helper thread 4 on CPU core #5
[Work thread Apr 21 13:12] Setting affinity to run helper thread 3 on CPU core #4
[Work thread Apr 21 13:12] Using FMA3 FFT length 2304K, Pass1=384, Pass2=6K, clm=2, 6 threads
[Work thread Apr 21 13:12] Setting affinity to run helper thread 2 on CPU core #3
[Work thread Apr 21 13:12] Setting affinity to run helper thread 1 on CPU core #2
[Work thread Apr 21 13:12] M42600139 stage 1 is 1.00% complete. Time: 42.031 sec.
[Work thread Apr 21 13:13] M42600139 stage 1 is 2.16% complete. Time: 42.031 sec.
[Work thread Apr 21 13:14] M42600139 stage 1 is 3.31% complete. Time: 42.133 sec.
[Work thread Apr 21 14:11] M42600139 stage 1 is 96.63% complete. Time: 42.167 sec.
[Work thread Apr 21 14:11] M42600139 stage 1 is 97.80% complete. Time: 42.217 sec.
[Work thread Apr 21 14:12] M42600139 stage 1 is 98.90% complete. Time: 42.254 sec.
[Work thread Apr 21 14:13] M42600139 stage 1 complete. 1020144 transforms. Time: 2152.052 sec.
[Work thread Apr 21 14:13] Stage 1 GCD complete. Time: 8.352 sec.
[Work thread Apr 21 14:13] With trial factoring done to 2^75, optimal B2 is 42*B1 = 16800000.
[Work thread Apr 21 14:13] Chance of a new factor assuming no ECM has been done is 0.274%
[Work thread Apr 21 14:13] D: 630, relative primes: 1439, stage 2 primes: 1045406, pair%=94.24
[Work thread Apr 21 14:13] Using 26374MB of memory.
[Work thread Apr 21 14:13] Stage 2 init complete. 4243 transforms. Time: 13.237 sec.
[Work thread Apr 21 14:14] M42600139 stage 2 is 1.67% complete. Time: 35.680 sec.
[Work thread Apr 21 14:14] M42600139 stage 2 is 3.35% complete. Time: 35.685 sec.
[Work thread Apr 21 14:15] M42600139 stage 2 is 5.05% complete. Time: 35.688 sec.
[Work thread Apr 21 14:46] M42600139 stage 2 is 95.36% complete. Time: 35.681 sec.
[Work thread Apr 21 14:47] M42600139 stage 2 is 97.08% complete. Time: 35.682 sec.
[Work thread Apr 21 14:47] M42600139 stage 2 is 98.79% complete. Time: 35.678 sec.
[Work thread Apr 21 14:48] M42600139 stage 2 complete. 1154122 transforms. Time: 2060.598 sec.
[Work thread Apr 21 14:48] Stage 2 GCD complete. Time: 8.313 sec.
[Work thread Apr 21 14:48] M42600139 completed P+1, B1=400000, B2=16800000, Wi8: 4F3C1C20
[Comm thread Apr 21 14:48] Sending result to server: UID: wabbit/rdt1, M42600139 completed P+1, B1=400000, B2=16800000, Wi8: 4F3C1C20[/CODE] 
So, 1h30 for a 0.274% chance of finding a factor?
Should one run a normal P+1 (3 tries), 5 h, for a less-than-one-percent chance of finding a factor? It is indeed very low. 
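For what it's worth, here is the arithmetic behind that question as a rough sketch (my own back-of-envelope, treating the three starting points as independent attempts):

```python
# Back-of-envelope: expected wall time per factor from the run above.
# A single run: 1.5 hours for a 0.274% chance of a factor.

hours_per_run = 1.5
p_factor = 0.00274                 # 0.274%, from the prime95 log

hours_per_factor = hours_per_run / p_factor
print(f"single start: ~{hours_per_factor:.0f} hours per expected factor")

# Three starts (the "normal" 3-try P+1), assuming independence:
p_three = 1 - (1 - p_factor) ** 3
h3 = 3 * hours_per_run / p_three
print(f"three starts: ~{h3:.0f} hours per expected factor")
```

The cost per expected factor is essentially the same either way (around 550 hours here), which is why the per-run probability, not the number of tries, is the figure that matters.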
And unless P+1 takes into account how much P-1 has already been done, it will be lower yet.
Or is prior P-1 not relevant to the success rate of P+1? 
[QUOTE=petrw1;576417]And unless P+1 takes into account how much P-1 has already been done, it will be lower yet.
Or is prior P-1 not relevant to the success rate of P+1?[/QUOTE] The P-1 and P+1 search spaces are almost completely independent. 
[QUOTE=firejuggler;576416]So, 1h30 for a 0.274% chance of finding a factor?
Should one run a normal P+1 (3 tries), 5 h, for a less-than-one-percent chance of finding a factor? It is indeed very low.[/QUOTE] You'll get better chances with smaller exponents, where less TF has been done. I was getting 1+% estimates in the 4.7M range with B1=1000000. 
[QUOTE=Prime95;576394]
I suggest picking one exponent and trying a B1/B2 combination, plus specifying the TF bit level. Start prime95 and it will tell you the chance of finding a factor. Abort, select a different B1/B2, run prime95 and look at the chance of finding a factor. Repeat until you have a decent idea as to how bounds correlate with probability.[/QUOTE] You also need to clear the Pplus1BestB2 option to get the quick probability at startup. Now that I've fixed the crash bug when reading the stage 2 save file, I'll gather some of this data and post it here. 
31M expo, TF'ed to 2^75
[CODE][Apr 21 16:05] P+1 on M31500457 with B1=250000, B2=5000000
[Apr 21 16:05] Chance of finding a factor assuming no ECM has been done is an estimated 0.16%
[Apr 21 16:06] P+1 on M31500457 with B1=250000, B2=10000000
[Apr 21 16:06] Chance of finding a factor assuming no ECM has been done is an estimated 0.194%
[Apr 21 16:06] P+1 on M31500457 with B1=250000, B2=20000000
[Apr 21 16:06] Chance of finding a factor assuming no ECM has been done is an estimated 0.232%
[Apr 21 16:06] P+1 on M31500457 with B1=500000, B2=10000000
[Apr 21 16:06] Chance of finding a factor assuming no ECM has been done is an estimated 0.261%
[Apr 21 16:07] P+1 on M31500457 with B1=500000, B2=20000000
[Apr 21 16:07] Chance of finding a factor assuming no ECM has been done is an estimated 0.313%
[Apr 21 16:07] P+1 on M31500457 with B1=500000, B2=40000000
[Apr 21 16:07] Chance of finding a factor assuming no ECM has been done is an estimated 0.37%
[Apr 21 16:08] P+1 on M31500457 with B1=1000000, B2=20000000
[Apr 21 16:08] Chance of finding a factor assuming no ECM has been done is an estimated 0.399%
[Apr 21 16:08] P+1 on M31500457 with B1=1000000, B2=40000000
[Apr 21 16:08] Chance of finding a factor assuming no ECM has been done is an estimated 0.475%
[Apr 21 16:08] P+1 on M31500457 with B1=1000000, B2=80000000
[Apr 21 16:08] Chance of finding a factor assuming no ECM has been done is an estimated 0.556%[/CODE] 
Initial empirical data from five runs...
These next four runs (with different B1s, as given by axn) were run on GCE 8-vcore instances with 30G of RAM available to them:
[CODE][Work thread Apr 21 17:19] P+1 on M42600221 with B1=500000, B2=TBD
[Work thread Apr 21 17:19] Setting affinity to run helper thread 2 on CPU core #3
[Work thread Apr 21 17:19] Using AVX512 FFT length 2240K, Pass1=128, Pass2=17920, clm=4, 4 threads
[Work thread Apr 21 17:19] Setting affinity to run helper thread 3 on CPU core #4
[Work thread Apr 21 17:19] Setting affinity to run helper thread 1 on CPU core #2
[Work thread Apr 21 17:20] M42600221 stage 1 is 0.78% complete. Time: 35.714 sec.
[Work thread Apr 21 18:28] M42600221 stage 1 complete. 2177465 transforms. Time: 4116.695 sec.
[Work thread Apr 21 18:28] Stage 1 GCD complete. Time: 11.694 sec.
[Work thread Apr 21 18:28] With trial factoring done to 2^75, optimal B2 is 44*B1 = 22000000.
[Work thread Apr 21 18:28] Chance of a new factor assuming no ECM has been done is 0.321%
[Work thread Apr 21 18:28] D: 630, relative primes: 1683, stage 2 primes: 1347723, pair%=95.30
[Work thread Apr 21 18:28] Using 29991MB of memory.
[Work thread Apr 21 18:28] Stage 2 init complete. 4961 transforms. Time: 21.750 sec.
[Work thread Apr 21 18:29] M42600221 stage 2 is 1.30% complete. Time: 33.963 sec.
[Work thread Apr 21 19:08] M42600221 stage 2 is 99.03% complete. Time: 32.330 sec.
[Work thread Apr 21 19:09] M42600221 stage 2 complete. 1474580 transforms. Time: 2416.992 sec.
[Work thread Apr 21 19:09] Stage 2 GCD complete. Time: 11.498 sec.
[Work thread Apr 21 19:09] M42600221 completed P+1, B1=500000, B2=22000000, Wi8: 4F793D3E
[Comm thread Apr 21 19:09] Sending result to server: UID: ***/GCE_2, M42600221 completed P+1, B1=500000, B2=22000000, Wi8: 4F793D3E

[Work thread Apr 21 17:19] P+1 on M42600289 with B1=600000, B2=TBD
[Work thread Apr 21 17:20] M42600289 stage 1 is 0.64% complete. Time: 36.778 sec.
[Work thread Apr 21 18:40] M42600289 stage 1 is 99.56% complete. Time: 37.153 sec.
[Work thread Apr 21 18:40] M42600289 stage 1 complete. 2614375 transforms. Time: 4828.669 sec.
[Work thread Apr 21 18:40] Stage 1 GCD complete. Time: 11.328 sec.
[Work thread Apr 21 18:40] With trial factoring done to 2^75, optimal B2 is 46*B1 = 27600000.
[Work thread Apr 21 18:40] Chance of a new factor assuming no ECM has been done is 0.364%
[Work thread Apr 21 18:40] D: 630, relative primes: 1683, stage 2 primes: 1669036, pair%=95.27
[Work thread Apr 21 18:40] Using 29992MB of memory.
[Work thread Apr 21 18:40] Stage 2 init complete. 4961 transforms. Time: 20.257 sec.
[Work thread Apr 21 18:41] M42600289 stage 2 is 1.05% complete. Time: 31.121 sec.
[Work thread Apr 21 19:28] M42600289 stage 2 is 99.59% complete. Time: 31.242 sec.
[Work thread Apr 21 19:28] M42600289 stage 2 complete. 1827707 transforms. Time: 2857.538 sec.
[Work thread Apr 21 19:28] Stage 2 GCD complete. Time: 11.302 sec.
[Work thread Apr 21 19:28] M42600289 completed P+1, B1=600000, B2=27600000, Wi8: 4FBB23DF
[Comm thread Apr 21 19:28] Sending result to server: UID: ***/GCE_1, M42600289 completed P+1, B1=600000, B2=27600000, Wi8: 4FBB23DF

[Work thread Apr 21 17:20] P+1 on M42600367 with B1=700000, B2=TBD
[Work thread Apr 21 17:20] M42600367 stage 1 is 0.53% complete. Time: 36.484 sec.
[Work thread Apr 21 18:53] M42600367 stage 1 is 99.83% complete. Time: 36.878 sec.
[Work thread Apr 21 18:53] M42600367 stage 1 complete. 3048479 transforms. Time: 5579.300 sec.
[Work thread Apr 21 18:53] Stage 1 GCD complete. Time: 11.253 sec.
[Work thread Apr 21 18:53] With trial factoring done to 2^75, optimal B2 is 47*B1 = 32900000.
[Work thread Apr 21 18:53] Chance of a new factor assuming no ECM has been done is 0.401%
[Work thread Apr 21 18:53] D: 630, relative primes: 1683, stage 2 primes: 1969321, pair%=95.19
[Work thread Apr 21 18:53] Using 29994MB of memory.
[Work thread Apr 21 18:54] M42600367 stage 2 is 0.89% complete. Time: 31.187 sec.
[Work thread Apr 21 19:49] M42600367 stage 2 is 99.13% complete. Time: 31.257 sec.
[Work thread Apr 21 19:50] M42600367 stage 2 complete. 2158931 transforms. Time: 3377.253 sec.
[Work thread Apr 21 19:50] Stage 2 GCD complete. Time: 11.398 sec.
[Work thread Apr 21 19:50] M42600367 completed P+1, B1=700000, B2=32900000, Wi8: 4C34DDA9
[Comm thread Apr 21 19:50] Sending result to server: UID: ***/GCE_3, M42600367 completed P+1, B1=700000, B2=32900000, Wi8: 4C34DDA9

[Work thread Apr 21 17:20] P+1 on M42600379 with B1=800000, B2=TBD
[Work thread Apr 21 17:21] M42600379 stage 1 is 0.46% complete. Time: 34.118 sec.
[Work thread Apr 21 18:59] M42600379 stage 1 is 99.95% complete. Time: 34.114 sec.
[Work thread Apr 21 18:59] M42600379 stage 1 complete. 3486053 transforms. Time: 5931.664 sec.
[Work thread Apr 21 18:59] Stage 1 GCD complete. Time: 10.011 sec.
[Work thread Apr 21 18:59] With trial factoring done to 2^75, optimal B2 is 48*B1 = 38400000.
[Work thread Apr 21 18:59] Chance of a new factor assuming no ECM has been done is 0.437%
[Work thread Apr 21 18:59] D: 630, relative primes: 1683, stage 2 primes: 2278054, pair%=95.13
[Work thread Apr 21 18:59] Using 29996MB of memory.
[Work thread Apr 21 18:59] Stage 2 init complete. 4963 transforms. Time: 18.917 sec.
[Work thread Apr 21 19:00] M42600379 stage 2 is 0.77% complete. Time: 28.566 sec.
[Work thread Apr 21 19:58] M42600379 stage 2 is 99.22% complete. Time: 28.586 sec.
[Work thread Apr 21 19:59] M42600379 stage 2 complete. 2499826 transforms. Time: 3571.213 sec.
[Work thread Apr 21 19:59] Stage 2 GCD complete. Time: 9.984 sec.
[Work thread Apr 21 19:59] M42600379 completed P+1, B1=800000, B2=38400000, Wi8: 4C474AA4
[Comm thread Apr 21 19:59] Sending result to server: UID: xxx/GCE_4, M42600379 completed P+1, B1=800000, B2=38400000, Wi8: 4C474AA4[/CODE] 
Would the sub-100k exponents (those with no known factors) get any help from P+1 (those which were ECM'ed to t60 or above)?

15.75M expo, 2^69 TF
[CODE][Apr 21 16:14] P+1 on M15750199 with B1=250000, B2=5000000
[Apr 21 16:14] Chance of finding a factor assuming no ECM has been done is an estimated 0.366%
[Apr 21 16:15] P+1 on M15750199 with B1=250000, B2=10000000
[Apr 21 16:15] Chance of finding a factor assuming no ECM has been done is an estimated 0.441%
[Apr 21 16:15] P+1 on M15750199 with B1=250000, B2=20000000
[Apr 21 16:15] Chance of finding a factor assuming no ECM has been done is an estimated 0.52%
[Apr 21 16:15] P+1 on M15750199 with B1=500000, B2=10000000
[Apr 21 16:15] Chance of finding a factor assuming no ECM has been done is an estimated 0.56%
[Apr 21 16:16] P+1 on M15750199 with B1=500000, B2=20000000
[Apr 21 16:16] Chance of finding a factor assuming no ECM has been done is an estimated 0.666%
[Apr 21 16:16] P+1 on M15750199 with B1=500000, B2=40000000
[Apr 21 16:16] Chance of finding a factor assuming no ECM has been done is an estimated 0.779%
[Apr 21 16:16] P+1 on M15750199 with B1=1000000, B2=20000000
[Apr 21 16:16] Chance of finding a factor assuming no ECM has been done is an estimated 0.811%
[Apr 21 16:17] P+1 on M15750199 with B1=1000000, B2=40000000
[Apr 21 16:17] Chance of finding a factor assuming no ECM has been done is an estimated 0.957%
[Apr 21 16:17] P+1 on M15750199 with B1=1000000, B2=80000000
[Apr 21 16:17] Chance of finding a factor assuming no ECM has been done is an estimated 1.11%[/CODE] 
4.7M expo, TF to 2^68
On further thought, exponent size does not affect the probability calculations: only TF, B1, B2 do. [CODE][Apr 21 16:25] P+1 on M4715201 with B1=250000, B2=5000000
[Apr 21 16:25] Chance of finding a factor assuming no ECM has been done is an estimated 0.419%
[Apr 21 16:25] P+1 on M4715201 with B1=250000, B2=10000000
[Apr 21 16:25] Chance of finding a factor assuming no ECM has been done is an estimated 0.503%
[Apr 21 16:25] P+1 on M4715201 with B1=250000, B2=20000000
[Apr 21 16:25] Chance of finding a factor assuming no ECM has been done is an estimated 0.593%
[Apr 21 16:26] P+1 on M4715201 with B1=500000, B2=10000000
[Apr 21 16:26] Chance of finding a factor assuming no ECM has been done is an estimated 0.633%
[Apr 21 16:26] P+1 on M4715201 with B1=500000, B2=20000000
[Apr 21 16:26] Chance of finding a factor assuming no ECM has been done is an estimated 0.753%
[Apr 21 16:26] P+1 on M4715201 with B1=500000, B2=40000000
[Apr 21 16:26] Chance of finding a factor assuming no ECM has been done is an estimated 0.879%
[Apr 21 16:26] P+1 on M4715201 with B1=1000000, B2=20000000
[Apr 21 16:26] Chance of finding a factor assuming no ECM has been done is an estimated 0.91%
[Apr 21 16:26] P+1 on M4715201 with B1=1000000, B2=40000000
[Apr 21 16:26] Chance of finding a factor assuming no ECM has been done is an estimated 1.07%
[Apr 21 16:27] P+1 on M4715201 with B1=1000000, B2=80000000
[Apr 21 16:27] Chance of finding a factor assuming no ECM has been done is an estimated 1.24%[/CODE] 
[QUOTE=Prime95;576430]On further thought exponent size does not affect probability calculations. Only TF,B1,B2.[/QUOTE]
Do you have any gut feeling for how accurate the estimates are? 
[QUOTE=firejuggler;576428]Would the sub-100k exponents (those with no known factors) get any help from P+1 (those which were ECM'ed to t60 or above)?[/QUOTE]
One P+1 run would be just like running one more ECM curve. The P+1 "curve" stands little chance of success, just like running one ECM curve. But at least P+1 would be faster than the one ECM curve. So, no, you're not likely to find a factor, but yes, it is worth doing. Again, if the current ECM level is B1=44M then I'd do at least P+1 B1=500M to take advantage of P+1's faster stage 1. Also, stay away from the really small exponents (say sub-50K), where GMP-ECM with its FFT stage 2 would be the better choice. 
[QUOTE=chalsall;576431]Do you have any gut feeling for how accurate the estimates are?[/QUOTE]
Unless I've misunderstood the math (not an insignificant possibility) or there is a bug, the estimate should be spot on. I'm using Mihai's P-1 smoothness probability estimator. Whereas P-1 gets 20+ free bits of smoothness due to the known 2*p in factors, P+1 gets only a couple of free bits due to Peter Montgomery's ingenuity. 
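To put a number on the "free bits" point (my own illustration, using the exponent from the runs earlier in the thread): every factor q of M(p) has the form q = 2*k*p+1, so q-1 is automatically divisible by 2*p, and P-1 gets those bits of smoothness for free.

```python
# How many "free" smoothness bits does P-1 get for a 42.6M exponent?
# Factors of M(p) are q = 2*k*p + 1, so 2*p always divides q-1.
import math

p = 42600139                       # exponent from the runs above
free_bits = math.log2(2 * p)
print(f"free smoothness bits for P-1: {free_bits:.1f}")
```

That works out to about 26 bits here, consistent with the "20+ free bits" figure; P+1's couple of free bits can't compete, which is the core of why P+1 is so much less productive per run.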
I see these results:
[CODE]Architects Cubed  rdt1   42600139  NF-PP1  2021-04-21 18:53  0.0  3.2837  Start=2/7, B1=400000, B2=16800000
Architects Cubed  GCE_2  42600221  NF-PP1  2021-04-21 19:09  0.0  4.0777  Start=2/7, B1=500000, B2=22000000
Architects Cubed  GCE_1  42600289  NF-PP1  2021-04-21 19:28  0.0  5.0302  Start=2/7, B1=600000, B2=27600000
Architects Cubed  GCE_3  42600367  NF-PP1  2021-04-21 19:50  0.0  5.9485  Start=2/7, B1=700000, B2=32900000
Architects Cubed  GCE_4  42600379  NF-PP1  2021-04-21 19:59  0.0  6.8896  Start=2/7, B1=800000, B2=38400000[/CODE] 
[QUOTE=James Heinrich;576405]This is crude, but I have created a placeholder page to list what known P+1 efforts have been recorded:
[url]https://www.mersenne.ca/pplus1.php[/url] Note of course that my data will always be up to 24h out of date (synch'ed just after midnight UTC). Will get a better report on mersenne.org (at least if George can email me where to find the P+1 data).[/QUOTE]There is now a version on mersenne.org with access to live data. Still needs to be prettied up with filtering parameters and such, but it's a start: [url]https://www.mersenne.org/report_pplus1/[/url] 
Thanks for the page! :smile: Maybe you could add a NF/F column? I'm eager to see when the first new factor gets found with P+1.

[QUOTE=kruoli;576442]Thanks for the page! :smile: Maybe you could add a NF/F column? I'm eager to see when the first new factor gets found with P+1.[/QUOTE]The mersenne.ca page includes factors, if any.
The mersenne.org data table George pointed me to does not include information about factors; I'll have to ask him about that. 
[QUOTE=James Heinrich;576439]There is now a version on mersenne.org with access to live data. Still needs to be prettied up with filtering parameters and such, but it's a start:
[url]https://www.mersenne.org/report_pplus1/[/url][/QUOTE] Coolness! Thanks!!! And you know where this leads, don't you...? You'll now have to do a cost/benefit analysis to determine where the economic crossover points are between ECM, deep TF'ing, deep P1'ing, and this new P+1!!! As a function of 0.1M range, please... :smile: 
[QUOTE=chalsall;576450]You'll now have to do a cost/benefit analysis to determine where the economic crossover points are between ECM, deep TF'ing, deep P1'ing, and this new P+1!!! As a function of 0.1M range, please... :smile:[/QUOTE]Easily done. P+1 has [I]never[/I] found a Mersenne factor, therefore optimal effort to spend on it is zero. :razz:

[QUOTE=chalsall;576413]OK... Here are the log entries from my fastest machine for those who understand the maths behind all this. The probability percentage is indeed low... This is for [URL="https://www.mersenne.org/report_exponent/?exp_lo=42600139&full=1"]42600139[/URL].
[CODE][Work thread Apr 21 14:13] With trial factoring done to 2^75, optimal B2 is 42*B1 = 16800000.
[Work thread Apr 21 14:13] Chance of a new factor assuming no ECM has been done is 0.274%
[Comm thread Apr 21 14:48] Sending result to server: UID: wabbit/rdt1, M42600139 completed P+1, B1=400000, B2=16800000, Wi8: 4F3C1C20[/CODE][/QUOTE] If I have the gozintas correct, then if you do all 2050 42.6M exponents with these bounds you should find 5 or 6 factors for 6,731 GhzDays: about 1,200 per factor.

There are only about 600 exponents left that I could still reasonably P-1 with aggressive bounds, for an expected success rate of about 1.5% (9 factors) for about 5,000 GhzDays; about 550 per factor. After that, P-1 becomes very expensive, with the success rate dropping to about 0.5% and 1,700+ GhzDays per factor. By comparison, finding the remaining 47 factors via TF would take about 800,000 GhzDays.

So... the only actual comparison I can do is from my own farm:

Option 1: If I do all the remaining with TF, it would take 170 days on my 2080Ti.

Option 2: If I first complete the above 600 P-1 assignments, that would take about 18 days on my 20 cores (5 PCs) at 290 GhzDays/day. I then TF for the remaining 38 factors in 109 days. Option 2 total is 127 days.

Option 3: If I complete the P-1 in 18 days, then the P+1 (6,731 GhzDays) takes another 23 days. Now, if I do find 15 factors by P±1, the remaining 32 TF factors would take about 400,000 GhzDays, or 85 calendar days. Option 3 total is 126 days.

So personally the P+1 benefit seems minimal. However, if my farm had fewer CPUs, or conversely if I had a 1080Ti instead of a 2080Ti, the bottom lines would be quite different. Does anyone see it different? 
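Checking the gozintas with the same inputs quoted above (the per-run probability from the prime95 log and the quoted total effort):

```python
# Sanity check: expected P+1 factors over the 2050 exponents in the
# 42.6M range at ~0.274% per run, and the resulting cost per factor
# given the quoted 6,731 GHz-days of total effort.

exponents = 2050
p_factor = 0.00274            # 0.274% per exponent, from the log
total_ghzdays = 6731

expected_factors = exponents * p_factor
cost_per_factor = total_ghzdays / expected_factors

print(f"expected factors: {expected_factors:.1f}")
print(f"GHz-days per factor: {cost_per_factor:.0f}")
```

This reproduces the "5 or 6 factors ... about 1,200 per factor" figures (5.6 expected factors at roughly 1,200 GHz-days each).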
[QUOTE=petrw1;576453]Does anyone see it different?[/QUOTE]
Seems your personal goals are limited by GPU time, so option 3 appears fastest: you can run CPU tasks and GPU tasks at the same time and finish in 85 days. But then, you could also add 10% to remaining P-1 bounds and maybe still find factors faster than TF, further reducing required GPU time. 
[QUOTE=VBCurtis;576460]Seems your personal goals are limited by GPU time, so option 3 appears fastest: you can run CPU tasks and GPU tasks at the same time and finish in 85 days. But then, you could also add 10% to remaining P-1 bounds and maybe still find factors faster than TF, further reducing required GPU time.[/QUOTE]
Makes sense. At the present time there are more total GPU contributions than CPU. In that state it is better NOT to go to extreme P±1. If GPU resources diminish, then I will lean toward MORE P±1. 
[QUOTE=kruoli;576442]Thanks for the page! :smile: Maybe you could add a NF/F column? I'm eager to see when the first new factor gets found with P+1.[/QUOTE]There is now a factor column.

[QUOTE=James Heinrich;576452]Easily done. P+1 has [I]never[/I] found a Mersenne factor, therefore optimal effort to spend on it is zero. :razz:[/QUOTE]
User "nordi" found the first factor: [M]287873[/M] : 167460871862758047584571103871. 30 digits and 97.1 bits, nice. From the timestamps it was the 92nd reported curve, but he reported 52 more in the same batch after the factor, so really it was the first factor in 144 curves, counting all reports up to and including 2021-04-22 06:48:xx. 
Crunching the numbers on chalsall's GCE runs:
[CODE]6532  0.321  0.176913655848132
7685  0.364  0.170513988288874
8956  0.401  0.161188030370701
9502  0.437  0.165565144180173[/CODE]
That is Runtime(s), Prob(%), Prob/Hr. As you can see, with increased B1 there is a slight drop in the Prob/Hr metric, but it is fairly flat. That means we could probably go with higher B1/B2 and still be similarly productive. However, it is still very low: 1 factor every 600 hours on a 4-core GCE machine.

One thing that puzzles me is the fairly low B2/B1 ratio. Given how much slower P+1 stage 1 is compared to P-1 stage 1, I would've expected it to be a lot higher. I would like to try out higher B2/B1 ratios (maybe 100x). Give me some time to get some probability estimates using the higher ratio. I have enough runtime data to accurately model Prob/Hr on that GCE machine; I just need the probability figures. 
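For reference, the Prob/Hr column can be reproduced directly from the runtimes and probabilities in chalsall's logs:

```python
# Reproducing the Prob/Hr column: (total runtime in seconds, reported
# stage 1 + stage 2 probability in percent) for the four GCE runs.

runs = [(6532, 0.321), (7685, 0.364), (8956, 0.401), (9502, 0.437)]

rates = [prob / (secs / 3600) for secs, prob in runs]
for (secs, prob), rate in zip(runs, rates):
    print(f"{secs:>5}s  {prob:.3f}%  {rate:.4f} %/hr")
```

At roughly 0.16-0.18 %/hr, the reciprocal gives on the order of 600 hours per expected factor, matching the estimate above.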
1 Attachment(s)
So after further crunching, I noticed that the GCE machines were producing inconsistent timings. After correcting for that, there was a much more pronounced loss of efficiency (prob/hr) at higher B1. Considering that, I think B1=600k is about as high as I would recommend.
The good news is that I made a mistake in the previous assignments: I kept the TF depth at 75, when the actual depth for the 42.6 range should've been 74. That improves the probability a bit. Keeping that in mind, here are the updated assignments. I am still leaving B2 selection up to P95; there was no improvement going to larger B2. I have removed the 5 that were completed. The probability of success is about 1 factor in 500 hours of a 4-core GCE (give or take; there is natural variability in machine performance). It is up to you whether you consider this worthwhile. 
[QUOTE=VBCurtis;576460]Seems your personal goals are limited by GPU time, so option 3 appears fastest: you can run CPU tasks and GPU tasks at the same time and finish in 85 days. But then, you could also add 10% to remaining P-1 bounds and maybe still find factors faster than TF, further reducing required GPU time.[/QUOTE]
I second the recommendation to run CPU and GPU tasks simultaneously. Regarding the P+1, I see two arguments for it. First, in some domains, tasks that take 2-3 days to save 1 day are definitely valued. Second, P+1 is new for the project, so collecting some data at this stage could be very helpful to the project. One more reminder about your estimates, although I'm sure you know this: each task is slightly parasitic to the other tasks in terms of lowering probabilities or the number of factors found. For instance, TF will find factors that P+1 or P-1 might find. Of course, each method will be the quickest way to find a certain factor; too bad we don't know in advance. 
[QUOTE=axn;576477]The probability of success is about 1 factor in 500 hours of a 4 core GCE (give or take  there is natural variability in machine performance). It is up to you whether you consider this worthwhile.[/QUOTE]
Thank you for your efforts. But for my ranges, I think doing deep P-1'ing is more cost-effective. 
[QUOTE=chalsall;576495]Thank you for your efforts. But for my ranges, I think doing deep P-1'ing is more cost-effective.[/QUOTE]
Understood. 
I noticed some 69->70 bits TF work in the 16M range, so to avoid toe-stepping please note that I have now taken the remaining 16.2x M range from 69 to 70 bits.

Clarification: work still in progress. 2-3 days to go.

April 24, 2021 Update
22 more ranges cleared:
3.5, 3.8, 4.6, 5.0, 7.5, 8.0, 8.3, 10.3, 17.2, 17.8, 19.6, 23.1, 27.1, 28.3, 28.4, 28.7, 31.5, 34.7, 38.7, 40.1, 43.0, 43.3

TOTALS to date: 267 ranges cleared, or 53.72%. 1,994 more factored (30,731): 55.64% total factored.

My current activity/status: There are 3 ranges remaining in 4xM. HOWEVER, they are stubborn: close to 50 each remaining. There seems to be a reluctance (rightly so) to hit them with TF. Therefore, I will try more P-1 and maybe some P+1 ([URL="https://www.mersenneforum.org/showpost.php?p=576305&postcount=150"]new feature here[/URL]) to try to get them under 40. Then TF to 76 should do it.

I've started deep P-1 in 3xM from highest to lowest. 39.6 will be done P-1 in a week or so. Then I'll move back to the 3 remaining 4xM ranges for a month or two. There continues to be good P-1 and TF work in all remaining ranges below 30.0M. Thanks all 
So, about half the B1 of a P-1 run? No matter how 'high' the exponent is? I.e. 600k for my current 23.5M range?

[QUOTE=firejuggler;577034]So, about half the B1 of a P-1 run? No matter how 'high' the exponent is? I.e. 600k for my current 23.5M range?[/QUOTE]
Based on what I've read and marginally understood, it would seem that for exponents this high P+1 is not really effective (the expected cost per factor in GHz-days is quite a bit higher than P-1's). It could be used as a last resort if decently aggressive P-1 and TF fall short. 
Sorry I moved all our posts again to a new thread:
[url]https://mersenneforum.org/showthread.php?t=26750[/url] Somehow I thought it would fit in here, but we are just interrupting your group goals. 
[QUOTE=ATH;577070]Sorry I moved all our posts again to a new thread:
[url]https://mersenneforum.org/showthread.php?t=26750[/url] Somehow I thought it would fit in here, but we are just interrupting your group goals.[/QUOTE] Don't worry about interrupting. It just might help. 
Update on ranges I'm working on:
16M range: taking the remaining exponents at 69 bits to 70 bits. 26.95M to 27M: from 70 to 71 bits (again, I was unable to reserve the range; if this bothers someone, please give me a shout and I'll stop). There are now just [B]eleven[/B] 1M ranges < 1000M with 21,000 or more exponents to factor. I'll keep on doing some work there. 
[QUOTE=lycorn;577380]
26.95M to 27M: from 70 to 71 bits (again, I was unable to reserve the range; if this bothers someone, please give me a shout and I'll stop).[/QUOTE] This shows none. [url]https://www.mersenne.org/assignments/?exp_lo=26950000&exp_hi=27000000[/url] 
True. And that's the problem: the server, for some reason, won't allow TF reservations in some ranges. But there may be people "informally" working there (read: doing unregistered work). That's why I asked.

There has been no activity (factors found and/or exponents TF'ed) in the last 60 days in that region: [url]https://www.mersenne.ca/status/tf/0/60/5/2690[/url]

Good. Hopefully you'll see some over the next couple of days, if the Colab powers are in a good mood. 😏

Stick a fork in it, 'cause ...
... 3.4 is done.
Got some help from ECM, as the range is now being worked in GIMPS. Once I'm done with the current batch of P-1 (another 6 days), I'll be moving on to 3.7 
[QUOTE=axn;577789]... 3.4 is done.
Got some help from ECM as the range is now being worked in GIMPS. Once I'm done with the current batch of P1 (another 6 days), I'll be moving on to 3.7[/QUOTE] :tu: 
I've finished TFing the 16M range to 70 bits. 113 new factors found. The whole range is up for grabs, 21,032 to go.
Planning to move to 11.9M shortly. Still fiddling with 26.9, using Colab. Slow progress, as the availability of GPUs is not great. 
[QUOTE=lycorn;578191]I've finished TFing the 16M range to 70 bits. 113 new factors found. The whole range is up for grabs, 21,032 to go.
Planning to move to 11.9M shortly. Still fiddling with 26.9, using Colab. Slow progress, as the availability of GPUs is not great.[/QUOTE] The ranges below 30M certainly can benefit from more TF, but more of the factors from here on will be more easily found via P-1. Thanks 
The 26.9 range is free. I took some exponents to 71 bits; the rest are at 70, available for whoever feels like crunching them.

30M has less than 200,000 unfactored exponents!

[QUOTE=masser;578371]30M has less than 200,000 unfactored exponents![/QUOTE]
Yeah!!! 2 to go. 
[QUOTE=petrw1;578372]Yeah!!! 2 to go.[/QUOTE]
In the last 365 days: [CODE]10M 1,677
20M 2,326[/CODE] We need respectively ~10k/~8k more factors in those ranges (at all resolutions: 10M, 1M and 0.1M) 
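At those rates, a rough back-of-the-envelope sketch of how long the two bands would take (assuming, purely as an illustration, the yearly rates and the ~10k/~8k targets stay constant):

```python
# Factors found in the last 365 days vs. factors still needed,
# per the approximate figures quoted above.
rate_per_year = {"10M": 1677, "20M": 2326}
still_needed = {"10M": 10_000, "20M": 8_000}

for band, rate in rate_per_year.items():
    years = still_needed[band] / rate
    print(f"{band}: ~{years:.1f} years at the current rate")
```

Roughly 6 years for the 10M band and 3.5 for the 20M band at the current pace, which is why more hands are being recruited.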
[QUOTE=petrw1;575294]There are only 3 ranges remaining in the 4xMillions but they are stubborn.
42.6: 47 remaining
48.4: 49
49.6: 49
I have done aggressive P-1 on these 3 ranges. Every exponent is P-1'd to at least a 3.5% factor rate; many to a 5.25% factor rate. I estimate that continuing P-1 would require about 500 GHz-days per P-1 factor. Therefore, I am suggesting that the preferred next step is TF with a nice GPU farm. They are currently factored to 74 bits. Completing these ranges will require full TF to 76 bits, then about half the exponents to 77 bits: a total of about 2.5M GHz-days of TF. Thanks for everyone's help.[/QUOTE] I'll work on the 42.6 range. There was no TF activity in this range in the last 12 months. I can allocate +/- 2,500 GHz-days per day. 
[QUOTE=De Wandelaar;578384]I'll work on the 42.6 range. There was no TF activity in this range in the last 12 months. I can allocate +/- 2,500 GHz-days per day.[/QUOTE]
There are 2043 numbers in that range. TF from 74-75 takes about 90 GHz-days, so you should be able to complete the range in 10-11 weeks. Based on prior P-1 done, you should find a factor every ~115 numbers (1 every 4 days), or about 15-19 for the whole range. 75-76 will take double that time and find a similar number of factors. BTW, 49.6 might be easier, as the TF will be faster, and you're likely to find more factors since the P-1 done there is slightly less. 
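axn's figures can be reproduced with simple arithmetic. A sketch using the numbers from this exchange (the 2,500 GHz-days/day throughput is De Wandelaar's stated allocation; the per-test cost and hit rate are axn's estimates):

```python
candidates = 2043      # unfactored exponents in 42.6M
ghd_per_test = 90      # GHz-days for one 74->75 bit TF test
throughput = 2500      # GHz-days of TF per day

total_ghd = candidates * ghd_per_test   # whole range, 74->75
weeks = total_ghd / throughput / 7
expected_factors = candidates / 115     # ~1 factor per 115 tests
print(f"{total_ghd:,} GHz-days -> ~{weeks:.1f} weeks, ~{expected_factors:.0f} factors")
```

This lands at about 10.5 weeks and ~18 factors, consistent with the quoted 10-11 weeks and 15-19 factors.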
[QUOTE=De Wandelaar;578384]I'll work on the 42.6 range. There was no TF activity in this range in the last 12 months. I can allocate +/- 2,500 GHz-days per day.[/QUOTE]
Thanks. I'm doing P-1 on these 3 ranges for another month, hoping to get them all under 40 for you. We can do it concurrently. I'll focus on 42.6 first. I'll start mostly 42.6 on the low end... if you work from the high end we won't step on each other's toes. I'll be done in under 2 weeks. PS: looking at recent results, I can see that Anton Repko is dabbling in 49.6. 
1 Attachment(s)
[QUOTE=axn;578390]There are 2043 numbers in that range. TF from 74-75 takes about 90 GHz-days, so you should be able to complete the range in 10-11 weeks.
Based on prior P-1 done, you should find a factor every ~115 numbers (1 every 4 days), or about 15-19 for the whole range. 75-76 will take double that time and find a similar number of factors. BTW, 49.6 might be easier, as the TF will be faster, and you're likely to find more factors since the P-1 done there is slightly less.[/QUOTE] Thanks for your feedback, axn. I was aware of the duration of the process, but honestly not of the impact of the aggressive P-1 on the success rate. I hope it will progress a little faster than foreseen: it actually takes about 45 min to process one case (see attachment), so 30-32 per day without too many interruptions. With a little luck, 9-10 weeks could be enough. My fear is in fact a too-hot summer... 
[QUOTE=petrw1;578391]Thanks. I'm doing P-1 on these 3 ranges for another month, hoping to get them all under 40 for you. We can do it concurrently. I'll focus on 42.6 first.
I'll start mostly 42.6 on the low end... if you work from the high end we won't step on each other's toes. I'll be done in under 2 weeks. PS: looking at recent results, I can see that Anton Repko is dabbling in 49.6.[/QUOTE] OK, Wayne. I had already started on the low end, but I will stop and begin again from the high end. Yves 
[QUOTE=De Wandelaar;578395]I hope it will progress a little faster than foreseen: it actually takes about 45 min to process one case (see attachment), so 30-32 per day without too many interruptions. With a little luck, 9-10 weeks could be enough.[/quote]
You're getting about 2,800+ GHz-days/day, so yes, it should finish faster. [QUOTE=De Wandelaar;578395]My fear is in fact a too-hot summer...[/QUOTE] Typically, you should be able to substantially decrease power usage with only a modest decrease in performance by setting a max clock / power limit. That should allow you to continue crunching through a hot summer. 
[QUOTE=axn;578446]
Typically, you should be able to substantially decrease power usage with only a modest decrease in performance by setting a max clock / power limit. That should allow you to continue crunching through a hot summer.[/QUOTE] My power limit is already set at 60%, otherwise I could deliver more than 3,300 GHz-days/day (Quadro RTX 5000). I try to keep the GPU temperature between 65° and 70° (heat in the room, fan noise, lifespan, plus electricity). I think a 54% power limit is the absolute minimum I can set with Afterburner. Effectively, the performance decrease is quite modest (~17%) in comparison with the 40% power reduction. 
You can further drop power usage by limiting the max clock speed.
For my 1660 Ti, I've set the power limit to 70 W. However, I'm running it with the graphics clock capped at 1300 MHz, dropping the power to 44 W (~40% drop) while dropping performance by only 20%. 
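The point of the clock cap is work per watt rather than raw speed. A small sketch with the 1660 Ti numbers from this post (relative throughput figures are taken from the post, not measured here):

```python
# Relative throughput per watt: stock power limit vs. 1300 MHz clock cap.
configs = {
    "70 W power limit":   {"watts": 70, "perf": 1.00},
    "1300 MHz clock cap": {"watts": 44, "perf": 0.80},  # ~40% less power, 20% less perf
}

def work_per_watt(cfg):
    """Relative TF throughput divided by power draw."""
    return cfg["perf"] / cfg["watts"]

base = work_per_watt(configs["70 W power limit"])
capped = work_per_watt(configs["1300 MHz clock cap"])
print(f"clock cap does ~{capped / base - 1:.0%} more work per watt")
```

Under these reported figures, the capped clock gets roughly a quarter more throughput per watt, which is exactly the hot-summer trade-off being discussed.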
[QUOTE=axn;578476]You can further drop power usage by limiting the max clock speed.
For my 1660 Ti, I've set the power limit to 70 W. However, I'm running it with the graphics clock capped at 1300 MHz, dropping the power to 44 W (~40% drop) while dropping performance by only 20%.[/QUOTE] Indeed, I didn't think of that. Thanks for the tip! 
Your first...keep it up!!!!
[QUOTE=De Wandelaar;578397]OK, Wayne. I had already started on the low end, but I will stop and begin again from the high end.
Yves[/QUOTE] [CODE]Yves de Wandelaar   Manual testing   42684091 F   2021-05-24 02:53   0.0   45.3204   Factor: 26817704181153840529049 / TF: 74-75*[/CODE] 
He found one on the 18th.
[url]https://www.mersenne.org/report_exponent/?exp_lo=42695969&full=1[/url] 
[QUOTE=axn;578969]He found one on the 18th.
[url]https://www.mersenne.org/report_exponent/?exp_lo=42695969&full=1[/url][/QUOTE] Yes, two factors found so far in 321 trials. The results are on the low side, but still reasonably coherent with axn's forecast. Hoping the rate will be a little higher in the coming days/weeks. 
[QUOTE=axn;578969]He found one on the 18th.
[url]https://www.mersenne.org/report_exponent/?exp_lo=42695969&full=1[/url][/QUOTE] :redface: 
Ok, give me a range. Nice if you have a list, but if not, I will grab the bitlevels from PrimeNet myself. Or... is it ok if I try raising 31.9 to 75? It seems untouched for 30 days.

[QUOTE=LaurV;579142]Ok, give me a range. Nice if you have a list, but if not, I will grab the bitlevels from PrimeNet myself. Or... is it ok if I try raising 31.9 to 75? It seems untouched for 30 days.[/QUOTE]
If you want to do TF, that would be great. I'm TFing in the 2x.xM ranges currently, so the following are all up for grabs. All of these ranges will eventually need TF to 75:
35.3, 35.1, 34.4, 33.3, 32.7, 31.9, 31.2, 30.8, 30.5
The first 3 may need a little bit of TF to 76 after I P-1 them harshly. I plan to have P-1 done for these 3x.xM ranges by the end of the year. If you ever feel like helping with P-1, let me know. Thanks, Grasshopper 
Raising a 31.9M exponent to 75 takes 41 minutes on a V100 (Colab) and [STRIKE]38[/STRIKE] 36 minutes on my local 2080 Ti's (the "tits" are water cooled and a bit OC'd). So, if I get lucky, that would be about 12 days. Meantime I want to see how I can convince Chris to feed me his 26-29M range, which I see is [URL="https://www.gpu72.com/reports/available/"]available[/URL]. If so, I will get rid of the "handwork" and switch to his feed instead. Let's see; hopefully we can find a few factors either way.
For P-1, what ranges do you have in mind, and what percent chance, assuming I can find the hardware and the mood? (Just asking) Edit: OK, Chris "convinced". It seems it is as easy as going to your notebook access keys table and selecting "DC Already Done" in the second column; then 26M assignments to 71 and 72 start coming. For the local machines, we will kick Misfit. We are set. 
[QUOTE=LaurV;579149]Raising a 31.9M exponent to 75 takes 41 minutes on a V100 (Colab) and 38 minutes on my local 2080 Ti's (the "tits" are water cooled and a bit OC'd). So, if I get lucky, that would be about 12 days. Meantime I want to see how I can convince Chris to feed me his 26-29M range, which I see is [URL="https://www.gpu72.com/reports/available/"]available[/URL]. If so, I will get rid of the "handwork" and switch to his feed instead. Let's see; hopefully we can find a few factors either way.
For P-1, what ranges do you have in mind, and what percent chance, assuming I can find the hardware and the mood? (Just asking) Edit: OK, Chris "convinced". It seems it is as easy as going to your notebook access keys table and selecting "DC Already Done" in the second column; then 26M assignments to 71 and 72 start coming. For the local machines, we will kick Misfit. We are set.[/QUOTE] Thanks. I assume you are using the 2047 classes version of mfaktc? Your choice whether you work on 31.9 or what Chris can feed you. If you feel compelled to P-1, I'm doing 4.5 to 5.2% (+3.2 to +3.8%) in the 3x.xM ranges. I'm using B1/B2 of 1M/30M to 1.5M/45M. 
[QUOTE=LaurV;579149]Edit, ok, Chris "convinced", it seems it is as easy as going to your notebook access keys table and select "DC Already Done" in the second column, than 26M to 71 and 72 start coming. For the local, we will kick Misfit. We are set.[/QUOTE]
You never do anything only a little bit, do you... :wink: I see three machines using the GPU72 Colab API. Two without trickery; I'll have to drill down on how you injected the 31.9M to 75 work... And how to get the results to autosubmit without an AID... :tu: 
[QUOTE=chalsall;579153]You never do anything only a little bit, do you... :wink:
I see three machines using the GPU72 Colab API. Two without trickery; I'll have to drill down on how you injected the 31.9M to 75 work... And how to get the results to autosubmit without an AID... :tu:[/QUOTE] No trickery. I added the 31.90M manually to the worktodo.txt in the Colab folder, for a test spin. The intention was to add all the 31.9xM little by little, so Gugu could do the work and report the results through you, and I'd get rid of the headache of reporting. But I quit doing that when I found out that things not assigned through you (no assignment keys) are not sent to PrimeNet. So, I switched to your feed and stayed with your range**. The Colab instance finishes the worktodo and reports the results to you, but they are stuck there. I didn't comment on it, because I thought you might have a reason for it. There are advantages and disadvantages both ways. If it were up to me, you would send all "text" coming to your lobby to PrimeNet without parsing it. It shouldn't be your problem what I send to the server through you (unless, of course, it is child pornography :razz:). Once I saw that they were stuck, I gave up and queued them locally with Misfit (I mean, the 31.9[COLOR=Red][B][U]0[/U][/B][/COLOR]M, i.e. the first 200), but it will take much longer for the whole range, like about 30 days instead of 12, and I don't know if I will have the patience. Latins used to say "una hirundo non facit ver" (one swallow doesn't make a spring), to which I would add "modo duo?" (how about two?). [URL="https://www.mersenne.org/report_exponent/?exp_lo=26769991&full=1"]swallow[/URL], [URL="https://www.mersenne.org/report_exponent/?exp_lo=26775113&full=1"]swallow[/URL] (haha, no pun intended). (** the range you assign seems luckier than 31M, hehe, but the real reason is that 26.7M to 72 only takes under 7 minutes per assignment, so about 6 of them can be done in the time one 31.9 to 75 takes) Edit: the 31.9M results stuck in your lobby can be deleted. 
I reported them manually (copy/paste from your lobby to the manual results page). But honestly, you should make the script report everything in the lobby to the server, without parsing keys. If somebody puts scrap there, i.e. not TF results and not prime-related, etc., there are other ways to deal with it (like blocking the account, banning the user, kicking him in the nose, putting a finger in his eye, etc.). 
[URL="https://www.mersenne.org/report_exponent/?exp_lo=31905917&full=1"]swallow[/URL].
:davar55: :razz: 
Currently working in the 15M range, 69 -> 70 bits.
Should take a couple of weeks. If someone wants to do some work there, please start from 15.0 so as to avoid any toe-stepping (I'm now crunching the 15.5 and 15.6 sub-ranges and will work my way down). It would be a good thing if PrimeNet allowed us to reserve exponents for TF in these ranges (say, below 20M or so). 
[QUOTE=De Wandelaar;578397]OK, Wayne. I had already started on the low end, but I will stop and begin again from the high end.
Yves[/QUOTE] It's all yours now... I've done all the P-1 that is reasonable. Thanks again. By about mid-June I will have also completed all reasonable P-1 for 48.4 and 49.6. I am hoping to have both ranges down to 40 or fewer remaining. The only options will then be TF (hopefully to 76 bits will do it), or ECM or P+1, though neither is very efficient for exponents this high. Thanks all 
[QUOTE=petrw1;579616]It's all yours now... I've done all the P-1 that is reasonable.
Thanks again. By about mid-June I will have also completed all reasonable P-1 for 48.4 and 49.6. I am hoping to have both ranges down to 40 or fewer remaining. The only options will then be TF (hopefully to 76 bits will do it), or ECM or P+1, though neither is very efficient for exponents this high. Thanks all[/QUOTE] 28% of the 42.6 range (74->75 bits) done since 14/05. It should be completed around 20/07. Until now, 5 factors found, 2,030 unfactored remaining. 
[QUOTE=petrw1;464177]Breaking it down I'm thinking if each 100M range has less than 2M unfactored we have the desired end result.
Similarly if each 10M range has less than 200K unfactored... or each 1M range has less than 20K unfactored... or each 100K range has less than 2,000 unfactored. [/QUOTE] So, why stop there? :razz: James' site allows x.xxM ranges for smaller exponents (and only xx.x for larger ones). So, following the idea, every x.xxM range (i.e. 10k) should have fewer than 200 unfactored expos. Arriba, arriba! I went to [URL="https://www.mersenne.ca/status/tf/0/0/5/0"]the site[/URL], as deep as possible, until the green row became a deep-pink row, then moved upward (or rightward) arrow by arrow. The first outlier was [URL="https://www.mersenne.ca/status/tf/0/0/5/170"]1.71M[/URL], with 201 unfactored; in second position came [URL="https://www.mersenne.ca/status/tf/0/0/5/180"]1.89M[/URL] with 200 unfactored. So the goal is to find two factors in the first bucket and one in the second. We raised both ranges one or two bits with TF without luck, then, after a little [URL="https://www.mersenneforum.org/showthread.php?p=579399"]help from the forum[/URL], started playing with P+1 on those ranges. We are [URL="https://www.mersenne.org/report_exponent/?exp_lo=1891277&full=1"]DONE[/URL] with the second half of it. :party: We are still continuing P+1 on the first half. Meantime, we also found a lot of "swallows" (see above) in the 26M range served by Chris, over 30 factors, of which more than half with Colab instances. Related to your wish that we move to P-1: we may try to persuade Chris to serve us P-1 assignments for our Colab instances (and we will take care of the Colab side). Maybe that's a good idea; we'd get rid of the headache of reserving the work, adding it to Colab, and reporting the results (right now, manually). 
[QUOTE=LaurV;579788]Related to your wish that we move to P-1: we may try to persuade Chris to serve us P-1 assignments for our Colab instances (and we will take care of the Colab side). Maybe that's a good idea; we'd get rid of the headache of reserving the work, adding it to Colab, and reporting the results (right now, manually).[/QUOTE]
Doable. :chalsall: I've actually been using my fourteen (14#) CPU-only Colab instances to clean up after those who complete an FTC without first doing a P-1. Sometimes preemptively; in 103M, for example... Let's talk about this over the weekend. Super busy at the moment. 
Happy to join with 10 Colab sessions but will need some guidance.

[QUOTE=pinhodecarlos;579806]Happy to join with 10 Colab sessions but will need some guidance.[/QUOTE]
Copy. Thanks. :tu: 
[QUOTE=LaurV;579788]So, why to stop there? :razz:
James' site allows x.xxM ranges for smaller exponents (and only xx.x for larger expos). So, following the idea, every x.xxM range (i.e. 10k) should have less than 200 unfactored expos. [/QUOTE] I'd have to look back through this thread but someone asked the same question a few years ago. The problem is that at this fine of a breakdown there are some serious outliers and so few exponents to work with. Not even trying I found for example (there are worse): 26.70M 232 Unfactored (33 to get under 200). Each bit level of TF will find about 2 factors with only 232 exponents. So via TF only that is 16 more bit levels....with luck maybe only 14. So that is TF 7387. That is almost 1,200,000 GhzDays per exponent; 8 months with my 2080Ti per exponent. Assuming we agree that is beyond reasonable we need to find some factors via P1. Lets say we want to save 7 bits of TF so we need about 14 P1 factors from these 232 exponents. We need about a 6% increase in current P1; so we will need to run P1 with a 10.5% expected success rate to be safe. That is 100GhzDays per exponent x 232 = 23,200 to get these 14 factors. The better part of a year for a reasonable PC. However there is another issue. TF and P1 can find the same factors so the more P1 that is done the lower the chance of finding a TF factor and vice versa. If we were to find these 14 P1 factors then the TF success rate will drop; or in other words it will not save 7 levels of TF. OKAY too much blabbering; you see where I'm going.  That all said maybe once the sub2000 project is complete the hardware will allow us to reconsider sub200. Thanks for your interest. 
[QUOTE=petrw1;579841]
The problem is that at this fine a breakdown there are some serious outliers and so few exponents to work with.[/QUOTE] Of course you are right, and the fact that it gets more and more difficult as you get deeper into the mud is plainly clear. Getting the whole GIMPS range under 20M unfactored is the easiest. Then, zoom in and find the outliers, magnitude by magnitude. I will keep fighting with 1.71M for a while, but I am not going to do it forever; I also have a life :blush: But I couldn't resist boasting about the P+1 success in 1.89M and about finding over 30 factors in 26M. Right now there are more, as I moved to 27M and found 5 factors there today too. 
yarrr :chappy:
This almost passed unobserved: [CODE]1717043 FPP1 Start=2/7, B1=5000000, B2=365000000, Factor: 36120234091485938570203343[/CODE] One more to go (for which I am going to raise all 200 candidates one bit level and maybe get lucky; I'll stop if I do). 
3-4M is done
The last two ranges, 3.7 & 3.9, were done today, thus completing 3-4M.
I had some TF help from an anonymous benefactor, which greatly accelerated the 3.9 range. Once the pending P-1s from the 3.7/3.9 ranges are completed (a few days to a couple of weeks), I'll move over to the 4.1 range. I've been prepping that range with 68-70 TF. 
Is anybody doing P-1 in [URL="https://www.mersenne.ca/status/tf/0/7/5/3510"]this range[/URL]?
I found about 15 factors in the last 4 days, but only 11 of them are reflected in the table, therefore the 12th is from somebody else (I don't know which one). If that's a TF/P+1/ECM factor, no harm, but if it's a P-1, then we are duplicating effort. I started when 2076 candidates were left and (after a discussion with Wayne by PM) I calculated my B1 and B2 to have a 100% theoretical chance of finding 77 factors, considering an average of the TF and P-1 already done. Then I "rounded them up" to "look nice", i.e. being prime, and containing a sequence of interesting primes as a substring too :razz: That's how I ended up with B1=2329319 (sorry, 313 was not prime) and B2=61891103 (107x was not prime, and 1103 is a factor of M29!). For who's asking: these are nice, round numbers. Don't tell me that numerology is not catchy!

Joking apart, with the default TF of the range, the two limits would have a chance of a factor in 1 in 10 trials, but considering that the range was [URL="https://www.mersenne.ca/status/tf/0/0/5/3510"]overfactored, to 74 bits[/URL], the chance is just 1 factor in [URL="https://www.mersenne.ca/prob.php?exponent=35500007&factorbits=74&b1=2329319&b2=61891103"]about 17 or 18 trials[/URL]. However, the range had average P-1 done too, which "pulled out the low-hanging fruit", so the real chance is somewhere between 1 in 23 and 1 in 26 (depending on the P-1 already done for each; I didn't calculate exactly, only took an "eye average"). This would cover the 77 factors (which, in 2076, means 1 in about 26.9, plus luck :razz:). Up to now it fits: with 77 factors to find, I should find about 7.7 in every 35.1x range, and I have 15 in the first 2 ranges, plus a bit. 
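For the curious, the numerology and the expected-factor arithmetic can be checked with a few lines (a sketch; `is_prime` is a naive helper we wrote for illustration, fine at this size):

```python
def is_prime(n: int) -> bool:
    """Naive trial division; adequate for numbers up to ~1e8."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

print(is_prime(2329319), is_prime(61891103))  # B1 and B2, chosen prime per the post
print((2 ** 29 - 1) % 1103 == 0)              # True: 1103 divides M29 = 536870911
print(f"1 factor per {2076 / 77:.2f} exponents")  # the "1 in about 26.9"
```

The M29 factorization (536870911 = 233 x 1103 x 2089) is a classic, so the B2 numerology really does check out.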
[CODE]35130749   227470147005389851276513   2021-06-10   Sid & Andy   F-PM1   B1=1500000, B2=45000000[/CODE]

[QUOTE=LaurV;580767]However, the range had average P-1 done too, which "pulled out the low-hanging fruit", so the real chance is somewhere between 1 in 23 and 1 in 26 (depending on the P-1 already done for each; I didn't calculate exactly, only took an "eye average").[/QUOTE]
The real chance is more like 1 in 36, so you might find something like 60 factors. You could fall short by about 15 factors in clearing this range with this P-1. 
Oh. Then I should either raise the limits, or save the checkpoint files to avoid duplicating work in case I or somebody else wishes to extend the limits later. BTW, can gpuOwl extend B1? (I remember getting errors like "wrong B1, using the one from the saved checkpoint" or so.)

Well, if you want to live on the edge, once you're done with P-1, doing a 74-75 TF on the survivors might _just_ be enough to clear the range.
But if you want to clear this out with just P-1, you need to increase the probability by 1 percentage point more (so, instead of 5.67%, target 6.6%). 