mersenneforum.org

mersenneforum.org (https://www.mersenneforum.org/index.php)
-   Data (https://www.mersenneforum.org/forumdisplay.php?f=21)
-   -   COMPLETE!!!! Thinking out loud about getting under 20M unfactored exponents (https://www.mersenneforum.org/showthread.php?t=22476)

rainchill 2020-12-30 16:29

I still prefer that Gerbicz error checks not report every 1 million iterations, polluting the worker window. Perhaps it should scale so that the larger the exponent, the less frequently it runs, e.g. every 10 million iterations, or no output when there is no error.

Also, an up-to-date ETA in the worker window title bars would be nice.

ATH 2020-12-30 19:27

[QUOTE=rainchill;567741]I still prefer that Gerbicz error checks not report every 1 million iterations, polluting the worker window. Perhaps it should scale so that the larger the exponent, the less frequently it runs, e.g. every 10 million iterations, or no output when there is no error.[/QUOTE]

You can change it in prime.txt, from undoc.txt:

[QUOTE]When doing highly-reliable error checking, the interval between compares can be
controlled with these two settings in prime.txt:
[B] PRPGerbiczCompareInterval=n (default is 1000000)
PRPDoublecheckCompareInterval=n (default is 100000)[/B]
Reducing the interval will reduce how far the program "rolls back" when an error
is detected. It will also increase the overhead associated with error-checking.
NOTE: For technical reasons, PRPGerbiczCompareInterval is rounded to the nearest perfect square.
ALSO NOTE: The program automatically adjusts the Gerbicz interval downward when an error is
detected. This will reduce the amount of time "lost" rolling back to the last verified good
iteration after an error is detected. Over time, the Gerbicz interval will be adjusted back
upward after many successful compares.[/QUOTE]
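
For example (value mine, just illustrating the syntax quoted above), a single line like this in prime.txt would stretch the compare/report interval to roughly every 10 million iterations, which prime95 will then round to a perfect square per the note above:

[CODE]PRPGerbiczCompareInterval=10000000[/CODE]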

petrw1 2020-12-30 23:45

Kaboom!!!!
 
1 Attachment(s)
OK, maybe that's a little dramatic, but I did crash 30.4 doing P-1, albeit in an unlikely scenario.

I chose 6 exponents for which I had previously found a factor via 29.8 P-1, to see if 30.4 would find them as well.

I hand-picked B1 just slightly higher than necessary.
If B2 was less than 40xB1, I let it be determined automatically; if B2 needed to be MUCH higher I set it explicitly, leaving off the last parm (Factor Bits).

Case 1: 41,711,611 (I failed Grade 3 math)
[url]https://www.mersenne.ca/exponent/41711611[/url]
Current required B1/B2: 34171 / 4514869 (B2=133xB1).
However, I erroneously allowed 30.4 to choose B2.
When Stage 1 finished and B2 was chosen I realized it was too small and stopped it very early in Stage 2.
I edited worktodo and changed B2=4600000 and removed Factor Bits.

As soon as it seemed stage 2 should be done (based on progress reports), this happened, and then Prime95 crashed and went away (see attachment).
I had to use my camera as I couldn't get a Copy Window to work at crash time.

I started it again and it restarted at the 14% mark and crashed the same way.

So I renamed the P1SaveFile to make it restart.
This time it finished but Stage 2 was much slower.

I surmise this indicates that the first time through it did NOT use my changed B2, but realized at the end that something was amiss, appeared to start stage 2 again (the "Stage 2 init complete" message), and crashed.

Successful run:

[CODE]
[Dec 30 16:11] Worker starting
[Dec 30 16:11] Setting affinity to run worker on CPU core #1
[Dec 30 16:11]
[Dec 30 16:11] P-1 on M41711611 with B1=35000, B2=4600000
[Dec 30 16:11] Using AVX FFT length 2240K, Pass1=448, Pass2=5K, clm=4, 2 threads
[Dec 30 16:11] Setting affinity to run helper thread 1 on CPU core #2
[Dec 30 16:15] M41711611 stage 1 is 41.446% complete. Time: 223461.917 ms.
[Dec 30 16:18] M41711611 stage 1 is 82.976% complete. Time: 197321.365 ms.
[Dec 30 16:20] M41711611 stage 1 complete. 101268 transforms. Time: 504207.464 ms.
[Dec 30 16:20] Stage 1 GCD complete. Time: 13102.162 ms.
[Dec 30 16:20] Available memory is 2006MB.
[Dec 30 16:20] D: 210, relative primes: 108, stage 2 primes: 318709, pair%=76.20
[Dec 30 16:20] Using 1994MB of memory.
[Dec 30 16:20] Stage 2 init complete. 1280 transforms. Time: 9618.672 ms.
[Dec 30 16:25] M41711611 stage 2 is 8.641% complete. Time: 290833.989 ms.
[Dec 30 16:30] M41711611 stage 2 is 17.402% complete. Time: 289398.635 ms.
[Dec 30 16:35] M41711611 stage 2 is 26.255% complete. Time: 290844.452 ms.
[Dec 30 16:39] M41711611 stage 2 is 35.219% complete. Time: 287364.578 ms.
[Dec 30 16:44] M41711611 stage 2 is 44.213% complete. Time: 290137.708 ms.
[Dec 30 16:49] M41711611 stage 2 is 53.246% complete. Time: 292558.611 ms.
[Dec 30 16:54] M41711611 stage 2 is 62.209% complete. Time: 272009.590 ms.
[Dec 30 16:59] M41711611 stage 2 is 71.055% complete. Time: 291066.751 ms.
[Dec 30 17:03] M41711611 stage 2 is 79.909% complete. Time: 292442.674 ms.
[Dec 30 17:08] M41711611 stage 2 is 88.699% complete. Time: 293089.292 ms.
[Dec 30 17:13] M41711611 stage 2 is 97.439% complete. Time: 290943.709 ms.
[Dec 30 17:15] M41711611 stage 2 complete. 474230 transforms. Time: 3265839.830 ms.
[Dec 30 17:15] Stage 2 GCD complete. Time: 13100.147 ms.
[Dec 30 17:15] P-1 found a factor in stage #2, B1=35000, B2=4600000.
[Dec 30 17:15] M41711611 has a factor: 244626322402475529262488198689 (P-1, B1=35000, B2=4600000)
[Dec 30 17:15]
[Dec 30 17:15] P-1 on M43028441 with B1=50000, B2=TBD
[Dec 30 17:15] Setting affinity to run helper thread 1 on CPU core #2
[Dec 30 17:15] Using AVX FFT length 2240K, Pass1=448, Pass2=5K, clm=4, 2 threads[/CODE]

petrw1 2020-12-31 02:13

On the plus side
 
Seems stage 2 is almost twice as fast.

To duplicate my current P-1 work (41.7M; B1=1000000, B2=20000000) I had to use the override to force 30.4 to use my bounds; that is, leave off the Factor Bits parm at the end.

Pminus1=N/A,1,2,41781227,-1,1000000,20000000

Running with 2 workers x 2 cores on an i5-3570K OC'd to 4.2 GHz with 4GB allocated,
an assignment like the above was taking about 9 hours.
I don't have the exact breakdown but it was something like 3 & 6 hours for stages 1 and 2;
maybe 3.5 & 5.5 hours.

With 30.4 this is taking about 3.5 and 3 hours for a total of 6.5.

Woot Woot

James Heinrich 2020-12-31 02:36

[QUOTE=petrw1;567792]Seems stage 2 is almost twice as fast.[/QUOTE]For the same bounds. As I understand it, part of that speedup involves omitting Brent-Suyama. Since stage2 is faster, higher bounds can be selected for a similar runtime and increased chance of factor.

axn 2020-12-31 03:16

[QUOTE=James Heinrich;567793]As I understand it, part of that speedup involves omitting Brent-Suyama. [/QUOTE]
Omitting B-S simplifies the code somewhat, but yields only minor speedup. The main speedup comes from better prime pairing. Previously, the prime pairing was like 10-15%. Now it is more like 85-95%. I don't have much insight into how this is achieved, but I'm guessing the increased number of temps (much higher than relprimes(D)) is somehow involved.

I am getting about 1.5-1.6x speedup in stage 2. Admittedly, doing 6 P-1 in parallel might be reducing some efficiency gains.

While on the topic of prime pairing, a question to George. Is it worth it to not handle unpaired primes at all? Say, if a stage 2 prime is unpaired and > 0.75 B2, then just don't use it. The "> 0.75 B2" condition is to minimise the loss of probability (smaller primes being more likely to be useful than larger ones).

You could even compensate for this by increasing B2 until enough additional paired primes are added back to the pool. I mean, there is nothing sacrosanct about "all the primes < B2".

petrw1 2020-12-31 05:56

Is a roundoff error ok?
 
[Dec 30 17:46] P-1 on M43012451 with B1=815000, B2=TBD
[Dec 30 17:46] Setting affinity to run helper thread 1 on CPU core #2
[Dec 30 17:46] Using AVX FFT length 2240K, Pass1=448, Pass2=5K, clm=4, 2 threads
[Dec 30 20:54] M43012451 stage 1 complete. 2351934 transforms. Time: 11283968.671 ms.
[Dec 30 20:54] Stage 1 GCD complete. Time: 12759.703 ms.
[Dec 30 20:54] Available memory is 3977MB.
[Dec 30 20:54] With trial factoring done to 2^74, optimal B2 is 46*B1 = 37490000. If no prior P-1, chance of a new factor is 4.7%
[Dec 30 20:54] D: 210, relative primes: 219, stage 2 primes: 2224734, pair%=84.02
[Dec 30 20:54] Using 3966MB of memory.
[Dec 30 20:54] Stage 2 init complete. 2391 transforms. Time: 23210.622 ms.
[Dec 30 21:43] M43012451 stage 2 is 12.802% complete. Time: 2913325.276 ms.
[Dec 30 22:06] Restarting worker with new memory settings.
[Dec 30 22:06]
[Dec 30 22:06] P-1 on M43012451 with B1=815000, B2=TBD
[Dec 30 22:06] Using AVX FFT length 2240K, Pass1=448, Pass2=5K, clm=4, 2 threads
[Dec 30 22:06] Setting affinity to run helper thread 1 on CPU core #2
[Dec 30 22:06] Resuming P-1 in stage 2 with B2=37490000
[Dec 30 22:06] Available memory is 2006MB.
[Dec 30 22:06] D: 210, relative primes: 108, stage 2 primes: 1812724, pair%=70.58
[Dec 30 22:06] Using 1994MB of memory.
[Dec 30 22:06] Stage 2 init complete. 1324 transforms. Time: 11045.366 ms.
[Dec 30 22:06] M43012451 stage 2 is 37.871% complete.
[Dec 30 22:55] M43012451 stage 2 is 49.781% complete. Time: 2918571.851 ms.
[Dec 30 23:32] [B][U]Possible roundoff error (0.5)[/U][/B], backtracking to last save file.
[Dec 30 23:32] Setting affinity to run helper thread 1 on CPU core #2
[Dec 30 23:32] Using AVX FFT length 2240K, Pass1=448, Pass2=5K, clm=4, 2 threads
[Dec 30 23:32] Resuming P-1 in stage 2 with B2=37490000
[Dec 30 23:32] Available memory is 2006MB.
[Dec 30 23:32] Using 1994MB of memory.
[Dec 30 23:32] Stage 2 init complete. 1385 transforms. Time: 9182.739 ms.
[Dec 30 23:32] M43012451 stage 2 is 53.929% complete.

Prime95 2020-12-31 08:09

[QUOTE=petrw1;567786]Ok maybe that's a little dramatic but I did crash 30.4 [/QUOTE]

Excellent. I'll have a fix soon.

[QUOTE=axn;567795]Omitting B-S simplifies the code somewhat, but yields only minor speedup. The main speed up comes from better prime pairing. Previously, the prime pairing was like 10-15%. Now it is more like 85-95%. I don't have much insight into how this is achieved, but I'm guessing the increased number of temps (much higher than the relprimes(D)) is somehow involved.[/QUOTE]

The idea came from Mihai Preda. If D=30=2*3*5, the four relative primes 1,7,11,13 can cover any particular multiple of D between B1 and B2. If both Dmultiple - relprime and Dmultiple + relprime are prime, then a pairing occurs, requiring half the work.

In the old code, if we had more memory, we would change D to 210=2*3*5*7, which increased speed in two ways. One, it is faster to step from B1 to B2 by 210 rather than 30. Two, the prime pairing chances go up a little.

Mihai's idea is: instead of using extra memory to increase D, we use more than the minimum number of relative primes. For D=30, if we allocate 8 relative primes then each prime between B1 and B2 can be represented by two different Dmultiples +/- relprime. Prime95 now has two chances to pair a prime instead of one. We lose some speed by stepping by a smaller increment, but gain much more from better pairing.
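
To illustrate, here is a toy Python sketch of my own (NOT prime95's code; the bounds and the greedy matching are made up, so the percentages won't match the 10-15% / 85-95% figures quoted above, which depend on the real B1/B2 and D):

[CODE]
# Toy sketch (NOT prime95's algorithm) of stage 2 prime pairing, D=30.
# A prime q = m*D - r pairs with m*D + r (and vice versa); a paired
# couple of primes costs one multiplication instead of two.
def primes_upto(n):
    s = bytearray([1]) * (n + 1)
    s[0:2] = b"\x00\x00"
    for i in range(2, int(n ** 0.5) + 1):
        if s[i]:
            s[i * i::i] = bytearray(len(s[i * i::i]))
    return s

B1, B2, D = 10000, 500000, 30   # illustrative bounds only
is_p = primes_upto(B2)

def pair_fraction(relprimes):
    primes = [q for q in range(B1 + 1, B2 + 1) if is_p[q]]
    in_range, used, paired = set(primes), set(), 0
    for q in primes:
        if q in used:
            continue
        for r in relprimes:           # each matching r = one chance
            if (q + r) % D == 0:      # q = m*D - r, partner m*D + r
                partner = q + 2 * r
            elif (q - r) % D == 0:    # q = m*D + r, partner m*D - r
                partner = q - 2 * r
            else:
                continue
            if partner in in_range and partner not in used:
                used.update((q, partner))
                paired += 2
                break
    return paired / len(primes)

print(pair_fraction([1, 7, 11, 13]))                   # minimal 4 relprimes
print(pair_fraction([1, 7, 11, 13, 17, 19, 23, 29]))   # all 8: two chances
[/CODE]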

I'm not a fan of your idea to leave out the larger unpaired primes. The idea works, but it just seems "untidy".

nordi 2020-12-31 14:20

The new code seems to have a race condition during startup that segfaults mprime roughly 1 out of 10 times on my system. I have
[LIST]
[*]32 workers, configured to run [I]ECM2=N/A,1,2,1277,-1,100000000,1000000000,1,[/I]
[*]StaggerStarts=0 in prime.txt to make all threads launch at about the same time
[*]"rm e*" before each start to remove the old state
[/LIST]
When the segfault happens, it is immediately after starting mprime. The last thing to be printed on the screen is

[Worker #32 Dec 31 15:09] Setting affinity to run worker on CPU core #16

Kernel log shows

mprime[22843]: segfault at 10 ip 00007f3c4a9cae40 sp 00007f3c3d99c7e8 error 4 in libpthread-2.26.so[7f3c4a9c0000+19000]


I tried the same setup with version 29.8. In 40 runs, it did not segfault a single time.

nordi 2020-12-31 14:37

While investigating the segfault issue, I found that when doing

ECM2=N/A,1,2,11,-1,100000000,1000000000,1

the factor is found and then mprime goes to waiting mode. When I stop it with CTRL+C, I get a lot of

[Main thread Dec 31 15:22] In write_gwnum, unexpected len == 0 failure

messages. Version 29.8 does not print this error.

However, while trying that out I also segfaulted version 29.8 a few times, so maybe the race condition is not in the new code after all.



And all this because I wanted to run some more benchmarks :lol:

nordi 2020-12-31 16:01

I finally got the benchmarks done. For

ECM2=N/A,1,2,1619,-1,10000000,100000000,1,

which uses FMA3 FFT length 96 on my Ryzen 3950X with 16 cores, I had these results:

16 workers:
[LIST]
[*]version 29.8: 44 seconds
[*]version 30.4b3: 46 seconds
[/LIST]
32 workers:
[LIST]
[*]version 29.8: 62 seconds
[*]version 30.4b3: 65 seconds
[/LIST]
So the new version is slightly slower for stage 1 in this setup. Also, hyperthreading gives ~40% faster stage 1 in both versions.

masser 2020-12-31 20:02

[QUOTE=petrw1;567792]
Woot Woot[/QUOTE]

Indeed. Version 30.4 will shave [B]months[/B] off of my effort to have less than 2000 unfactored exponents in the 14.0M range. Thanks, George!

It took me a little while to get good apples-to-apples comparisons. Note that with v. 30.4 I have flexibility to improve both the factoring odds and the runtimes. Here are my benchmarks:
[B]
First Machine[/B]: i5-4690s, with 32 GB 1600 MHz DDR4 RAM (only 7GB allocated to mprime)

[B]PM1[/B]
Exponent, B1, B2, runtime_30.3(min:sec), runtime_30.4(min:sec)
72713617, 104771, 2619275, 70:32, 49:50
95675581, 114357, 3316353, 111:55, 78:35
102001051, 37123, 928075, 39:46, 28:04

[B]ECM[/B]
Exponent, NumCurves, B1, B2, runtime_30.3(min:sec), runtime_30.4(min:sec)
4312787, 5, 50000, 6650000, 70:50, 46:25
5094979, 6, 50000, 6350000, 81:19, 61:00

[B]14.0M factoring tasks[/B] (times are hour:min:sec)
ECM: 6 t25 (B1=50k,B2=100B1+1) curves; with 30.3: 5:17:13; with 30.4: 4:05:26
P-1: B1=5M, B2=135M; with 30.3: 7:47:37; with 30.4: 4:40:50


[B]Second Machine[/B]: i7-6700, with 8 GB 2400 MHz DDR4 RAM (only 3GB allocated to mprime)

[B]PM1[/B]
Exponent, B1, B2, runtime_30.3(min:sec), runtime_30.4(min:sec)
50077721, 280000, 280000, 25:47, 26:20 <---- notice this is a stage one only run
72713617, 104771, 2304962, 42:32, 32:57
95675581, 114357, 2973282, 68:48, 53:48
102001051, 37123, 816706, 24:51, 19:53

[B]ECM[/B]
Exponent, NumCurves, B1, B2, runtime_30.3(min:sec), runtime_30.4(min:sec)
4312787, 5, 50000, 5250000, 42:03, 27:31
5094979, 6, 50000, 5450000, 53:01, 41:48

[B]14.0M factoring tasks[/B] (times are hour:min:sec)
ECM: 6 t25 (B1=50k,B2=100B1+1) curves; with 30.3: 3:21:26; with 30.4: 2:37:30
P-1: B1=7M, B2=210M; with 30.3: 6:59:36; with 30.4: 5:05:28

Prime95 2020-12-31 23:09

[QUOTE=nordi;567846]I finally got the benchmarks done. For
ECM2=N/A,1,2,1619,-1,10000000,100000000,1,
which uses FMA3 FFT length 96 on my Ryzen 3950X with 16 cores, I had these results:
<snip>
So the new version is slightly slower for step 1 in this setup.[/QUOTE]

In 30.4 try reducing PracSearch. Here is the explanation from the next build's undoc.txt:

In ECM stage 1, the program examines several different Lucas-chains looking for the shortest.
For ECM on very small numbers, it may be beneficial to reduce the search effort as the
work saved is pretty small. For ECM on larger numbers, it might pay to increase the search
effort. I have not studied the optimal search effort, so the current default of 7 is a
complete guess. To change the search effort, add this to prime.txt:
PracSearch=n (default is 7)
Values from 1 to 50 are supported.
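
For example (value mine; as the text says, the optimal effort hasn't been studied), a line like this in prime.txt would reduce the chain-search effort for the tiny M1277-class ECM runs being benchmarked here:

[CODE]PracSearch=3[/CODE]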

petrw1 2021-01-01 03:59

Looks like 30.4 Missed a P1 Factor
 
Interestingly this is the same exponent I noted in the RoundOff error a few posts back.
As well, my Prime95 appeared to have crashed last night. It was not running this AM.
But when I restarted it, it started at P2 0.00% and completed without crashing, so I assumed all would be well, until I noticed it did NOT report a factor.

This page says that, with the bounds used below, the factor should have been found.
[url]https://www.mersenne.ca/exponent/43012451[/url]

I'm going to run it again with the same parms and then once more with specific B1/B2.

[CODE][Main thread Dec 31 10:12] Mersenne number primality test program version 30.4
[Main thread Dec 31 10:12] Optimizing for CPU architecture: Core i3/i5/i7, L2 cache size: 4x256 KB, L3 cache size: 6 MB
[Main thread Dec 31 10:12] Starting workers.
[Comm thread Dec 31 17:35] Sending result to server: UID: petrw1/Rocky, M43012451 completed P-1, B1=815000, B2=81500000, Wi4: 5C397974
[Comm thread Dec 31 17:35]
[Comm thread Dec 31 17:35] PrimeNet error 40: No assignment
[Comm thread Dec 31 17:35] P-1 result for M43012451 was not needed
[Comm thread Dec 31 17:35] Done communicating with server.[/CODE]

[CODE][Dec 31 10:12] Worker starting
[Dec 31 10:12] Setting affinity to run worker on CPU core #1
[Dec 31 10:12]
[Dec 31 10:12] P-1 on M43012451 with B1=815000, B2=TBD
[Dec 31 10:12] Using AVX FFT length 2240K, Pass1=448, Pass2=5K, clm=4, 2 threads
[Dec 31 10:12] Setting affinity to run helper thread 1 on CPU core #2
[Dec 31 10:12] Available memory is 2000MB.
[Dec 31 10:12] D: 210, relative primes: 108, stage 2 primes: 2462241, pair%=59.97
[Dec 31 10:12] Using 1994MB of memory.
[Dec 31 10:13] Stage 2 init complete. 1332 transforms. Time: 39860.484 ms.
[Dec 31 10:13] M43012451 stage 2 is 0.000% complete.
[Dec 31 17:35] M43012451 stage 2 complete. 4286150 transforms. Time: 26552061.456 ms.
[Dec 31 17:35] Stage 2 GCD complete. Time: 12513.374 ms.
[Dec 31 17:35] M43012451 completed P-1, B1=815000, B2=81500000, Wi4: 5C397974
[/CODE]

petrw1 2021-01-01 16:36

Worked this time....
 
[QUOTE=petrw1;567921]Interestingly this is the same exponent I noted in the RoundOff error a few posts back.

I'm going to run it again with the same parms and then once more with specific B1/B2.
Same parms rerun found the factor this time,
Second rerun with specific B1/B2 was ignored.
[/QUOTE]

What I didn't notice when I posted that error yesterday: it lists B2=81500000=100xB1.
But when that same run had the roundoff error and then crashed overnight it had a B2=37490000=46xB1.
Seems after the crash it changed the B2 itself and in a confused state missed the factor.

On the rerun

[CODE][Main thread Dec 31 22:04] Mersenne number primality test program version 30.4
[Main thread Dec 31 22:04] Optimizing for CPU architecture: Core i3/i5/i7, L2 cache size: 4x256 KB, L3 cache size: 6 MB
[Comm thread Jan 1 06:07] Sending result to server: UID: petrw1/Rocky, M43012451 has a factor: 772533645156306046237663 (P-1, B1=815000, B2=33415000)
[Comm thread Jan 1 06:07]
[Comm thread Jan 1 06:07] PrimeNet error 40: No assignment
[Comm thread Jan 1 06:07] Factoring result for M43012451 was not needed
[Comm thread Jan 1 06:07] Done communicating with server.
[/CODE]

[CODE][Dec 31 22:04] Worker starting
[Dec 31 22:04] Setting affinity to run worker on CPU core #1
[Dec 31 22:04]
[Dec 31 22:04] P-1 on M43012451 with B1=815000, B2=TBD
[Dec 31 22:04] Using AVX FFT length 2240K, Pass1=448, Pass2=5K, clm=4, 2 threads
[Dec 31 22:04] Setting affinity to run helper thread 1 on CPU core #2
[Jan 1 00:45] M43012451 stage 1 complete. 2351934 transforms. Time: 9604623.566 ms.
[Jan 1 00:45] Stage 1 GCD complete. Time: 12391.663 ms.
[Jan 1 00:45] Available memory is 2000MB.
[Jan 1 00:45] With trial factoring done to 2^74, optimal B2 is 41*B1 = 33415000. If no prior P-1, chance of a new factor is 4.61%
[Jan 1 00:45] D: 210, relative primes: 108, stage 2 primes: 1990631, pair%=69.23
[Jan 1 00:45] Using 1994MB of memory.
[Jan 1 00:45] Stage 2 init complete. 1296 transforms. Time: 12671.548 ms.
[Jan 1 06:07] M43012451 stage 2 complete. 3181698 transforms. Time: 19318016.339 ms.
[Jan 1 06:07] Stage 2 GCD complete. Time: 11868.497 ms.
[Jan 1 06:07] P-1 found a factor in stage #2, B1=815000, B2=33415000.
[Jan 1 06:07] M43012451 has a factor: 772533645156306046237663 (P-1, B1=815000, B2=33415000)
[Jan 1 06:07]
[Jan 1 06:07] P-1 on M43012451 with B1=815000, B2=17300000
[Jan 1 06:07] Setting affinity to run helper thread 1 on CPU core #2
[Jan 1 06:07] Using AVX FFT length 2240K, Pass1=448, Pass2=5K, clm=4, 2 threads
[Jan 1 06:07] M43012451 already tested to B1=815000 and B2=33415000.[/CODE]

petrw1 2021-01-01 17:30

BTW
 
I wouldn't rule out hardware problems on my side.
This PC has a reputation of freezing or crashing a few times a year.

masser 2021-01-01 17:38

I confirmed that mprime finds that factor with a clean run on one of my systems:

[CODE][Fri Jan 1 10:21:21 2021]
P-1 found a factor in stage #2, B1=1000000, B2=18000000.
M43012451 has a factor: 772533645156306046237663 (P-1, B1=1000000, B2=18000000)[/CODE]

Prime95 2021-01-01 20:42

[QUOTE=masser;567959]I confirmed that mprime finds that factor with a clean run on one of my systems:[/QUOTE]

Careful reading of the original bug report shows several "non-standard" things that must be investigated.
1) Uses AVX 2240K FFT. That's near an FFT limit and AVX is likely less tested.
2) Early in stage 2 there was a restart due to a memory change.
3) The roundoff error and backtracking.
I need to look at all 3 to see if I can replicate any issues.

The next build is nearly ready. I'd like to investigate this possible bug first.
Nordi's crash on startup and crash due to excessive memory allocation are the only other bugs I've not been able to figure out. I think nordi should mail me his 32-core machine :)

petrw1 2021-01-01 22:18

2021-01-01 Status Update ... Great progress
 
Since last update (November 08)
30 more ranges cleared:
3.1, 3.6, 4.8, 5.1, 5.6, 5.7, 5.8, 6.3, 9.3, 16.6, 20.4, 21.1, 29.1, 30.2, 30.3, 31.6,
32.5, 33.0, 34.6, 34.9, 35.5, 38.0, 38.2, 38.6, 38.9, 39.3, 44.6, 46.3, 47.8, 49.4

And 1 bonus range: 58.7.

TOTALS to date:
225 total ranges cleared or 45.27%
3 Ranges with less than 20 to go.
1,486 more factored (27,075)....49.02% total factored.

So, confidently expecting every higher range to be cleared by natural GIMPS work, I can now report:
Top end: 49.6
Bottom end: 3.2

Only 8 ranges remaining in the 40Millions.
I will be doing aggressive P-1 on these until about mid 2021.
Then they will require TF to 75; and a few to 76 bits.
Or, with a little coordination it would be fine to TF these to 75 any time.

Only 20 ranges remaining in the 30Millions.
A few of you are helping with P1 and TF there.
More P-1 help would be useful here.

Continuing to get lots of great help. THANKS

Thanks again for everyone contributing.

Prime95 2021-01-02 03:26

30.4 build 4
 
Build 4 is ready. A few bug fixes, a little less memory usage in ECM stage 2 -- maybe that will help nordi. No luck finding a cause for petrw1's errant P-1 run.

Win64: [url]https://www.dropbox.com/s/epf2eqdn00pu5un/p95v304b4.win64.zip?dl=0[/url]
Linux64: [url]https://www.dropbox.com/s/fh4dq51w7fuvseo/p95v304b4.linux64.tar.gz?dl=0[/url]

I hope this will be the last build announced in this thread. Thanks for everyone's help in locating several bugs.

petrw1 2021-01-02 04:35

[QUOTE=Prime95;568018]Build 4 is ready. A few bug fixes, a little less memory usage in ECM stage 2 -- maybe that will help nordi. No luck finding a cause for petrw1's errant P-1 run.

Win64: [url]https://www.dropbox.com/s/epf2eqdn00pu5un/p95v304b4.win64.zip?dl=0[/url]
Linux64: [url]https://www.dropbox.com/s/fh4dq51w7fuvseo/p95v304b4.linux64.tar.gz?dl=0[/url]

I hope this will be the last build announced in this thread. Thanks for everyone's help in locating several bugs.[/QUOTE]

I just installed the previous build.
To upgrade, can I just replace prime95.exe, or do I need any other files?

Prime95 2021-01-02 05:25

Replacing the .exe is fine.

petrw1 2021-01-02 06:11

Strange happenings ... but seems OK now
 
1 Attachment(s)
[QUOTE=Prime95;568025]Replace the .exe is fine[/QUOTE]

I upgraded 1 PC from b3 to b4. I replaced all the files.

Before it would continue, first Windows and then AVG (my antivirus software) asked if I was sure I trusted it, which I said I did: "Run Anyway".
In my rush I may have double-clicked it a few times to start it and get the window to pop up.

ANYWAY...

Then I spent 5 minutes waiting for it to respond, then waited 30 minutes for Task Manager to respond to tell me what was going on.
When TM finally responded I saw 3 Prime95 sessions running and 3 more seemingly idling.
After another 10 minutes I was able to Stop/Exit the one I could see.
That left 2 more plus the 3 idlers in Task Manager.
I could force-stop the 2 more active ones, but the other 3 didn't go away.

I rebooted and all is well now.
Even though all 3 sessions were using 20-40% CPU there is no evidence of other work files; could all 3 have been processing the same P-1 without messing each other up?

Tomorrow I'll try to start it again while it is still running and see if I get more than 1 again.

petrw1 2021-01-02 18:17

[QUOTE=petrw1;568028]
Tomorrow I'll try to start it again while it is still running and see if I get more than 1 again.[/QUOTE]

Seems to be okay now

Prime95 2021-01-02 19:04

[QUOTE=petrw1;568091]Seems to be okay now[/QUOTE]

Back when I knew how to write Windows programs (95/98/me/XP), prime95 was written such that running a second prime95 would simply make the first prime95 active in case the program was minimized and then exit. It sounds like that code no longer works.

lycorn 2021-01-02 19:59

@George:
I've just installed and tried build 4 on a win64 machine with 4 cores and 16 GB RAM.
The following applies to [B]ECM testing[/B] only. I have not yet tried P-1.

I made 10GB available to Prime95.
With build 3, the 4 workers (1 core each) would share the 10 GB among them, using up the available memory (which was set to 10 GB as well).
Now I notice that the workers are using a lot less memory, even though they were allowed to use more.
An example:
I am ECMing 4 exponents in the 547xxx range. With only 1 worker running stage 2, the program says it will use 4912 MB of memory, but looking at Task Manager one sees that the memory used starts going up to the expected number (a little more than 5GB, which is reasonable given there are other workers running) but then goes down to around 3.6 GB. When a second worker enters stage 2, it also indicates it will be using 4912 MB; the figure for memory used climbs to just over 8 GB and then goes down to just under 7 GB and stabilizes there. When the third one joins, it will use just 416 MB, the ratio B2/B1 is 117 instead of 154 for the first two workers, and stage 2 takes less than half the time.
The total memory used is around 7.5 GB, well under the allowed limit.
I don't know whether or not this is the expected behaviour, but it seems to me the memory is underused, so I thought it better to let you know, as this might have an impact on the probability of finding factors.

petrw1 2021-01-02 21:54

[QUOTE=Prime95;568096]Back when I knew how to write Windows programs (95/98/me/XP), prime95 was written such that running a second prime95 would simply make the first prime95 active in case the program was minimized and then exit. It sounds like that code no longer works.[/QUOTE]

More likely your code is fine but Windows Alerts/AVG got in the way.

Prime95 2021-01-02 22:09

[QUOTE=lycorn;568099]I don´t know whether or not this is the expected behaviour, but it seems to me the memory is underused[/QUOTE]

This is new in build 4 and expected. It only affects ECM. The explanation is pretty technical.

Stage 2 steps from B1 to B2 in increments of D. Each of these increments requires a very, very expensive modular inverse. However, N modular inverses can be pooled together using 3N multiplications and 1 modular inverse. This pooling requires 2N temporaries (the N multiples of D that will be processed one at a time and N temporaries that can be freed right after the modular inverse). Those N freed temporaries will be needed again if another pooled modular inverse is required which is why prime95 won't let other threads reserve that memory.

Build 3 did the exact same thing. What's different is that the gwnum library in build 3 cached freed gwnums to make any future gwallocs fast. In build 4, only a handful of freed gwnums are cached, making the remainder available to other threads and programs. This was done in hopes of helping nordi.

I have some ideas for making future prime95's use memory a little better. There are pooling algorithms that use 3.33N multiplies and 1.5N temporaries OR 3.5N multiplies and 1.25N temporaries. I can check for the last pooled modular inverse and make the freed memory available for other threads.
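
For reference, the pooling described above sounds like the standard batch-inversion trick; here is a sketch with plain Python integers standing in for gwnums (prime95's real implementation surely differs). The prefix-product list is exactly the set of N temporaries that can be freed once the single inversion is done:

[CODE]
# Sketch of batch ("pooled") modular inversion with plain Python ints
# standing in for gwnums: N inverses from ~3N mults + 1 inversion.
def batch_inverse(vals, n):
    prefix = [vals[0] % n]              # N-1 mults; these N prefix
    for v in vals[1:]:                  # products are the extra temps
        prefix.append(prefix[-1] * v % n)
    inv = pow(prefix[-1], -1, n)        # the single expensive inversion
    out = [0] * len(vals)
    for i in range(len(vals) - 1, 0, -1):  # 2 mults per element
        out[i] = inv * prefix[i - 1] % n   # = vals[i]^-1 mod n
        inv = inv * vals[i] % n            # = prefix[i-1]^-1 mod n
    out[0] = inv
    return out

# 3*4 = 5*9 = 7*8 = 1 (mod 11)
assert batch_inverse([3, 5, 7], 11) == [4, 9, 8]
[/CODE]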

lycorn 2021-01-02 22:37

[QUOTE=Prime95;568104] Those N freed temporaries will be needed again if another pooled modular inverse is required which is why prime95 won't let other threads reserve that memory.
[/QUOTE]

Thanks for the explanation. If I got it right, there are chunks of memory that are actually free, and perceived as such by Task Manager, but that are "set aside" by Prime95 on a just in case basis. Does that explain the difference between the amount of memory allowed to be used and the amount reported by TM (10 GB and 7.5 GB respectively)?

If I remember correctly, nordi's problem was that the program was using more than its allowed share of memory, and the system would eventually crash under some circumstances. Now the opposite is apparently happening: the program is using less memory than it might. I personally never had that problem with build 3. The program would use exactly the maximum amount of memory allowed (at least that was the indication in TM).

Prime95 2021-01-02 23:04

[QUOTE=lycorn;568106]Thanks for the explanation. If I got it right, there are chunks of memory that are actually free, and perceived as such by Task Manager, but that are "set aside" by Prime95 on a just in case basis. Does that explain the difference between the amount of memory allowed to be used and the amount reported by TM (10 GB and 7.5 GB respectively)?[/quote]

Yes, the 2.5GB would be needed if each worker in stage 2 ECM needed to do another pooled modular inverse at the same time.

[quote]If I remember correctly, nordi´s problem was that the program was using more than its allowed share of memory, and the system would eventually crash under some circumstances. Now the opposite is apparently happening: the program is using less memory than it might do. I personally never had that problem with build 3. The program would use exactly the maximum amount of memory allowed (at least that was the indication in TM).[/QUOTE]

Yes, I was unable to replicate nordi's case where prime95 was allocating 100GB rather than 50GB maximum nordi had set. This change (I can't call it a fix) should at least prevent the crash he was seeing, because it is unlikely that every ECM stage 2 will be doing a pooled modular inverse at the same time.

petrw1 2021-01-03 01:07

2080Ti GPU slowed about 3% after 30.4 Upgrade
 
Any ideas how/why this might have happened?

Running mfaktc.

masser 2021-01-03 01:15

[QUOTE=petrw1;568123]Any ideas how/why this might have happened?

Running mfaktc.[/QUOTE]

Heat? Could the CPU/RAM/mobo running hotter now lead to the GPU throttling itself a little?

James Heinrich 2021-01-03 01:19

I would suspect more along the lines of either CPU or RAM being more heavily utilized in v30.4 causing a small amount of extra delay in what little off-GPU resources mfaktc needs to use.

petrw1 2021-01-03 02:06

You both seem to be correct
 
GPU-Z had the GPU running a couple degrees hotter than usual (85C).
And some Therm throttling.
I blew out the dust and opened the side door.

It's running 83C now and a little faster, but still not quite what it was before 30.4.
Maybe 1% down (that's pretty close).

Prime95 2021-01-03 05:51

Woohoo!

I've conquered nordi's memory problem even though I may not fully understand it. I was able to replicate it on my 8-core Skylake box. The problem is with ECM on tiny numbers. Prime95 was not properly estimating the space consumed in allocating a gwnum. In 30.4b4 the estimate was 640 bytes. Careful examination of the code shows that 680 bytes plus any malloc overhead is required. My testing seems to indicate malloc overhead is much worse in Linux than in Windows.

Furthermore, the (up to) 250MB prime pairing bit array was not included in prime95's memory reservation system.

To fix nordi's issue, I've limited prime95 to 100000 temporaries per worker. This should have no impact on performance. Prime95 was allocating over 1M temporaries to reduce modular inverses which are lightning fast on M1277.

In implementing the fix, I noticed a bug in resuming an ECM run in stage 2 when there is less memory available. This could well explain petrw1's missed factor when prime95 did a restart with new memory settings.

PhilF 2021-01-03 14:46

Does GmpEcmHook=1 still work for running stage 2 on GMP-ECM?

Is it still worthwhile to do so on exponents such as M1277?

Prime95 2021-01-03 17:09

[QUOTE=PhilF;568189]Does GmpEcmHook=1 still work for running stage 2 on GMP-ECM?[/quote]

It ought to.

[quote]Is it still worthwhile to do so on exponents such as M1277?[/QUOTE]

Most definitely.

axn 2021-01-03 18:20

[QUOTE=Prime95;568150]In implementing the fix, I noticed a bug in resuming an ECM run in stage 2 when there is less memory available. This could well explain petrw1's missed factor when prime95 did a restart with new memory settings.[/QUOTE]
Does the bug affect P-1 as well? Because that was a P-1.

Prime95 2021-01-03 19:27

[QUOTE=axn;568208]Does the bug affect P-1 as well? Because that was a P-1.[/QUOTE]

Yes. Until the next build is ready, avoid restarting P-1 or ECM in stage 2 with less memory.

petrw1 2021-01-04 01:14

Odd Comm message
 
[CODE][Comm thread Jan 3 15:39] Updating computer information on the server
[Comm thread Jan 3 15:39] Done communicating with server.
[Comm thread Jan 3 18:32] Sending result to server: UID: petrw1/Speck, M41791891 completed P-1, B1=1000000, B2=30000000, Wi4: 38F4559E
[Comm thread Jan 3 18:32]
[Comm thread Jan 3 18:32] PnErrorResult value missing. Full response was:
[Comm thread Jan 3 18:32] [Jan 3 18:32] Visit http://mersenneforum.org for help.
[Comm thread Jan 3 18:32] Will try contacting server again in 70 minutes.[/CODE]

Result does show up in my results at 18:32

Prime95 2021-01-04 02:37

30.4 build 5 released. Please post new comments about version 30.4 (any build) to this thread: [url]https://www.mersenneforum.org/showthread.php?p=568252#post568252[/url]

We now return this thread to its normal function of finding lots of new factors. Thanks again for everyone's help.

firejuggler 2021-01-08 13:02

Taking 27.1M for a spin.
Started 5 days ago,

I got some good results last night:
[code]
[Fri Jan 8 04:03:01 2021]
P-1 found a factor in stage #2, B1=1350000, B2=89100000.
UID: firejuggler/Maison, M27152453 has a factor: 214211625581311913643047 (P-1, B1=1350000, B2=89100000)
[Fri Jan 8 05:29:00 2021]
P-1 found a factor in stage #1, B1=1350000.
UID: firejuggler/Maison, M27153151 has a factor: 4694191350737025517407343 (P-1, B1=1350000)
[Fri Jan 8 06:41:42 2021]
P-1 found a factor in stage #1, B1=1350000.
UID: firejuggler/Maison, M27153359 has a factor: 26637694127747536479168769 (P-1, B1=1350000)
[/code]



3 factors found, on the same computer, on the same night.

petrw1 2021-01-08 15:13

[QUOTE=firejuggler;568732]Taking 27.1M for a spin.
Started 5 days ago,

I got some good results last night:
[code]
[Fri Jan 8 04:03:01 2021]
P-1 found a factor in stage #2, B1=1350000, B2=89100000.
UID: firejuggler/Maison, M27152453 has a factor: 214211625581311913643047 (P-1, B1=1350000, B2=89100000)
[Fri Jan 8 05:29:00 2021]
P-1 found a factor in stage #1, B1=1350000.
UID: firejuggler/Maison, M27153151 has a factor: 4694191350737025517407343 (P-1, B1=1350000)
[Fri Jan 8 06:41:42 2021]
P-1 found a factor in stage #1, B1=1350000.
UID: firejuggler/Maison, M27153359 has a factor: 26637694127747536479168769 (P-1, B1=1350000)
[/code]



3 factors found, on the same computer, on the same night.[/QUOTE]

:bow:

seatsea 2021-01-12 20:07

Hey all!

Newbie to GIMPS here; I've been spending probably way too much time reading up and messing around with the tools over the last few days. I ended up finding this thread and started testing things at M16.34.
I'm currently running mfakto and going from bit level 69 to 70 on that range; is that appropriate?

In any case I've just found a first factor!

[code]found 1 factor for M16341833 from 2^69 to 2^70 (partially tested) [mfakto 0.15pre7-MGW cl_barrett15_71_gs_2]
tf(): total time spent: 2m 8.875s (1880.09 GHz-days / day)[/code]

petrw1 2021-01-12 23:30

[QUOTE=seatsea;569103]Hey all!

Newbie to GIMPS here; I've been spending probably way too much time reading up and messing around with the tools over the last few days. I ended up finding this thread and started testing things at M16.34.
I'm currently running mfakto and going from bit level 69 to 70 on that range; is that appropriate?

In any case I've just found a first factor!

[code]found 1 factor for M16341833 from 2^69 to 2^70 (partially tested) [mfakto 0.15pre7-MGW cl_barrett15_71_gs_2]
tf(): total time spent: 2m 8.875s (1880.09 GHz-days / day)[/code][/QUOTE]

Yes, that's a good range.
Any range with over 1999 unfactored, under 50,000,000, and not assigned to anyone else is among the best choices for this project.

Thanks for your help.

petrw1 2021-01-30 01:43

Is there an easy way to look for TF candidates....
 
Not sure if this is the right place to ask where those who know will see it and answer.

Anyway, as most are aware in this sub-project I am factoring to get 100K ranges under 2000 unfactored.

Yes, I know I am already well below the DC wavefront and some see no benefit in this work, but alas, I proceed.

As I understand it (correct me if I'm wrong), once a PRP-(xx??) test is done and verified/certified, it can indicate whether the remaining cofactor is a probable prime.
I take this to mean it is a waste of time (or a bigger waste of time) to bother looking for factors of these exponents.

Am I making sense?
Anyway, if I am on the right track, I would like to know if there is an easy way to select a list of exponents to factor (TF or P-1 or ECM) whose cofactor is NOT already PRP.

Thanks.

firejuggler 2021-01-30 01:56

1 Attachment(s)
The line is
PRPDC=N/A,1,2,10014791,-1,99,0,3,5,"50474546641"
and will most likely end with a 'not prime'.
10M seems to be the wavefront for cofactor checking.
Each check at that range takes me around 1h30.

axn 2021-01-30 02:10

[QUOTE=petrw1;570472]As I understand it (correct me if I'm wrong), once a PRP-(xx??) test is done and verified/certified, it can indicate whether the remaining cofactor is a probable prime.
I take this to mean it is a waste of time (or a bigger waste of time) to bother looking for factors of these exponents.

Am I making sense?
[/QUOTE]

I am not sure I understand you correctly. Could you explain further with an example?

James Heinrich 2021-01-30 02:58

[QUOTE=petrw1;570472]I would like to know if there is an easy way to select a list of exponents to factor (TF or P1 or ECM) that are NOT PRP.[/QUOTE]There are [url=https://www.mersenne.ca/prp.php?show=1]350 exponents[/url] that are fully-factored (where the last factor is either certainly or probably prime).
All the other exponents (almost?) certainly have factors waiting to be discovered.

masser 2021-01-30 03:45

If an exponent does not correspond to a Mersenne prime, then that exponent corresponds to a composite Mersenne number. If it's composite, it has a factor.

When a factor, f, is found for a Mersenne number, Mp = 2^p-1, we can check if the cofactor, (2^p-1)/f is a probable prime. The PRP-cofactor effort is the search for these large PRP cofactors. That search has essentially two stages: 1) Find a factor for a composite Mp and 2) Test the cofactor for probable-primality. If stage 2 has a negative result (not PRP), repeat steps 1) and 2) to find the next factor, cofactor pair.
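
To make step 2) concrete, here is a toy Python example (real cofactor tests run on multi-million-digit numbers inside prime95, of course), using the classic M11 = 2047 = 23 * 89:

[CODE]
# Toy version of step 2): M11 = 2^11 - 1 = 2047 = 23 * 89.
p, f = 11, 23
M = (1 << p) - 1
assert M % f == 0
c = M // f                    # the cofactor, here 89
# Fermat PRP test, base 3: a probable prime gives 3^(c-1) = 1 (mod c)
print(pow(3, c - 1, c) == 1)  # True -- this cofactor is (in fact) prime
[/CODE]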

The under-2000 search is helping the PRP-cofactor effort by finding the first factor for A LOT of known composite Mersenne numbers.

Does that make sense?

petrw1 2021-01-30 05:30

[QUOTE=James Heinrich;570479]There are [url=https://www.mersenne.ca/prp.php?show=1]350 exponents[/url] that are fully-factored (where the last factor is either certainly or probably prime).
All the other exponents (almost?) certainly have factors waiting to be discovered.[/QUOTE]

Hmmm, I expected there would be a lot more.
An extremely low success rate.

Thanks.

tha 2021-01-31 20:36

I am continuing my work in the 15M range for about a week or so and will then move to the 21M range.

I got the 15M range down from 21480 by 160, now doing P-1.

LaurV 2021-02-01 02:33

[QUOTE=petrw1;570472]As I understand it (correct me if I'm wrong), once a PRP-(xx??) test is done and verified/certified, it can indicate whether the remaining cofactor is a probable prime.
I take this to mean it is a waste of time (or a bigger waste of time) to bother looking for factors of these exponents.
Am I making sense?
[/QUOTE]
You do make sense. There are about 400 exponents for which we know that the Mersenne cofactor is PRP. For these, it makes no sense to try splitting the cofactor further; it would be a waste of time. There is an infinitesimal chance the cofactor is a pseudoprime (i.e. composite, but behaving as a prime for most tests we can do), and you may become more famous by splitting such a pseudoprime than by finding a Mersenne prime, but our "gut feelings" tell us that the cofactor is prime. So you should not waste time on them. You can find a list of such exponents on James' page, [URL="https://www.mersenne.ca/prp.php"]here[/URL].

masser 2021-02-05 03:52

The 14.0M range: 143 down; 100 to go
 
We've now been factoring the [URL="https://www.mersenne.ca/status/tf/0/0/5/1400"]14.0M [/URL] range for 10 months: 143 factors have been found, with a mix of TF, P-1 and a little bit of ECM. 100 factors to go for the under 2000 goal, so we have passed the halfway point.

It will become harder to find factors, so I'm happy to report that we have gotten some help lately from others on the forum. Many thanks!

VBCurtis, are you still working on the 14.01M subrange? I might restart work there soon, but don't want to step on your toes.

VBCurtis 2021-02-05 04:51

I've paused P-1 for a few weeks to use that core on another project; I plan to start back up mid-month. If you'd like to start at 14.015M, I'll finish 14.010-14.015 for P-1.

I'm still doing ECM on another machine from 14.00M; only at 14001607 today, but I'm doing 10 curves at B1=250k and the server is crediting me with 48 curves per exponent.

masser 2021-02-07 20:46

[QUOTE=VBCurtis;570899]I've paused P-1 for a few weeks to use that core on another project; I plan to start back up mid-month. If you'd like to start at 14.015M, I'll finish 14.010-14.015 for P-1.[/QUOTE]

Sounds good; I'll use one of my slow GPUs to TF 14.015-14.02M from 71 to 72 bits. It's probably the most productive work for that GPU now.

[QUOTE=VBCurtis;570899]I'm still doing ECM on another machine from 14.00M; only at 14001607 today, but I'm doing 10 curves at B1=250k and the server is crediting me with 48 curves per exponent.[/QUOTE]

I have "finished" working the 14.05M range, too. Most of the remaining candidates there only have 7 t25 curves completed. 14.09M will be "finished" in about a week, also.

petrw1 2021-02-26 04:43

Only 10 months until Christmas
 
... now that I have your attention :cmd:

Feb 25 Update:
20 more ranges cleared:
3.2, 4.2, 4.3, 4.5, 6.7, 10.2,
22.9, 23.7, 25.2, 26.1, 29.3, 29.9,
30.1, 32.1, 36.2, 36.5, 37.6, 38.8, 41.7, 43.4

TOTALS to date:
245 total ranges cleared, or 49.30% (4 more to halfway).
3 Ranges with less than 20 to go.
1,694 more factored (28,739)....52.04% total factored.

My current activity/status:
There are only 6 ranges remaining in 4xM.
- 43.0 is being deep TF'd; it should be done in a couple days.
- I have about 1 more month of deep P-1 to do in 42.6, 48.4 and 49.6.
Then unfortunately these 3 ranges will still have close to 50 left to factor.
I'LL NEED DEEP TF GPU HELP HERE.
- Then for 40.1 and 43.3 I plan to do a little more deep P-1 and try to get them closer to 30 remaining.
That said, if anyone wants to TF them before I get there go for it.

This has me moving into the 3xM ranges early April.
There are only 14 ranges remaining there thanks to some huge help while I was chugging away in 5xM and 4xM.
It appears 38.7 will be cleared via TF.
For the last 13 ranges I plan to spend the rest of 2021 doing deep P-1 to get most ranges to under 20 remaining
(a few closer to 30) at which time TF can complete them.

I've got my GPUs starting to TF 2xM to 73 bits.
I COULD USE HELP HERE.
Several other people are also working in the 2xM ranges; some TF and some P-1

2xM will also require a LOT of DEEP P-1; more than just where B1=B2 currently.
But in these lower ranges P-1 is quite fast and efficient.
I COULD USE HELP HERE TOO.

Thanks again for everyone contributing.

petrw1 2021-03-10 16:03

Half done!!!!!
 
That is, half of the ranges are cleared.

I started tracking on 2017/07/24 when there were 498 ranges to go.
As of today 249 have been cleared.
The elapsed time is 1,325 days; 3.6 years.

I fully understand that many of these 249 were "low hanging fruit".
But certainly not all; some were very labor intensive ranges too.

This may not mean we are half done as far as time goes.
But then it depends on how much help we get;
and how much faster algorithms or hardware gets.

TF is more efficient for higher ranges; P-1 or ECM for lower ranges.

5xM is complete
4xM has only 4 ranges to go.
3xM has only 13 ranges to go.
2xM has 85.
1xM has close to 100.
0xM has about 50 (all ranges below 3.4M are done)

:bow wave:

VBCurtis 2021-03-10 16:26

[QUOTE=petrw1;573344]That is, half of the ranges are cleared.

I started tracking on 2017/07/24 when there were 498 ranges to go.
As of today 299 have been cleared.[/QUOTE]

Half of 498 is 249. We're well past half the ranges, unless there's a typo in the part I quoted.

petrw1 2021-03-10 16:39

[QUOTE=VBCurtis;573345]Half of 498 is 249. We're well past half the ranges, unless there's a typo in the part I quoted.[/QUOTE]

Yes, oops 249.

masser 2021-03-10 17:02

[QUOTE=petrw1;573344]
I started tracking on 2017/07/24 when there were 498 ranges to go.
As of today 249 have been cleared.
The elapsed time is 1,325 days; 3.6 years.
[/QUOTE]

:bow:

:chris2be8:

:party:

I think the effort has found over 6000 factors in the past year in the ranges of interest. Have we crossed 30,000 total factors found yet?

petrw1 2021-03-10 17:12

[QUOTE=masser;573350]:bow:

:chris2be8:

:party:

I think the effort has found over 6000 factors in the past year in the ranges of interest. Have we crossed 30,000 total factors found yet?[/QUOTE]

Almost.
On Feb 25 we were at: 28,739
As of today we are just over 29,000.

ATH 2021-03-10 18:52

[QUOTE=petrw1;573352]As of today we are just [SIZE="7"]over [/SIZE][SIZE="1"]2[/SIZE][SIZE="7"]9,000[/SIZE].[/QUOTE]

Wow.

masser 2021-03-10 18:56

1 Attachment(s)
Finally a reference I grok!

firejuggler 2021-03-11 12:37

So far, I have run 364 P-1 tests in the 27.1-27.2M range and found 18 factors.

I still have about 430 tests to run (B1 run only so far), at about 4h30 per test, to finish my 27.1M range. That's about 80 days.

petrw1 2021-03-11 14:40

[QUOTE=firejuggler;573399]So far, I have run 364 P-1 tests in the 27.1-27.2M range and found 18 factors.

I still have about 430 tests to run (B1 run only so far), at about 4h30 per test, to finish my 27.1M range. That's about 80 days.[/QUOTE]

Good work.
With the bounds you are using 18 factors is about what I'd expect.
When you are done we'll clear that range with TF.

James Heinrich 2021-03-11 15:58

Looking through my P-1 results for the last few years, I've done:[code]
0xM: NF = 14465, F = 2
1xM: NF = 13490, F = 325
2xM: NF = 623, F = 17
3xM: NF = 726, F = 21[/code]

petrw1 2021-03-11 17:16

[QUOTE=James Heinrich;573413]Looking through my P-1 results for the last few years, I've done:[code]
0xM: NF = 14465, F = 2
1xM: NF = 13490, F = 325
2xM: NF = 623, F = 17
3xM: NF = 726, F = 21[/code][/QUOTE]

Other than 0xM, which I suspect is because there was SOOO much ECM already done, the rest of your results are reasonable.

Since Fall of 2017 related to this project:
(I use quite aggressive bounds; generally aimed at 3%+ success rate.)
[CODE]
Range F-P1 TOTAL Percent
0xM 62 1583 3.92%
1xM 11 189 5.82%
4xM 1010 33690 3.00%
5xM 247 8272 2.99%
[/CODE]

petrw1 2021-03-20 15:29

I just completed 150,000 assignments in this project.
 
102,624 TF for 1,054 factors
44,281 P1 for 1,344 factors
3,095 ECM for 29 factors
TOTAL: 2,427 factors (for me ... thankfully the rest of you have found many more!!!)

Just over 4.64M GhzDays

[U][B]Where am I going next:[/B][/U]
I am just under 2 weeks from doing all the P-1 I can reasonably do in the 4xM ranges.
48.4 and 49.6 are done
42.6 has an expected completion of April 4th.
Then some DEEP TF would be greatly appreciated.

Then I am moving into P-1 in the 13 remaining 3xM ranges (WOW!!! others almost finished it before I got there!!!).
I should have that done before the end of the year.
I'm going to P-1 39.6 first which will put the upper bound of this project at 35.3.

My GPUs are working at getting 2xM TF'd to 73 bits.

2xM is the "nastiest" range remaining.
TF beyond 74 bits is starting to get "expensive" down this low so they will need more P-1 (or ECM) work.
In the worst ranges (currently >150 ToGo) I calculate we will need to P-1 the majority of the exponents with bounds as high as 1.5M/30M.

Rock On!!!

masser 2021-03-20 18:46

[QUOTE=petrw1;574199]
2xM is the "nastiest" range remaining.
TF beyond 74 bits is starting to get "expensive" down this low so they will need more P-1 (or ECM) work.
In the worst ranges (currently >150 ToGo) I calculate we will need to P-1 the majority of the exponents with bounds as high as 1.5M/30M.

Rock On!!![/QUOTE]


Nice Job!

On 15.5M (>200 ToGo), we may need B1/B2 = 6M/250M. Chomp chomp. :buddy:

petrw1 2021-03-20 19:34

[QUOTE=masser;574229]Nice Job!

On 15.5M (>200 ToGo), we may need B1/B2 = 6M/250M. Chomp chomp. :buddy:[/QUOTE]

Looks pretty good to me. This is how I compute it:

206 to go.
TF from 69 to 73 bits should statistically give over 100 factors ... let's hope for 106.
To get 100 P-1 factors, the remaining 2100 exponents need to yield 1 factor for every 21 exponents, a 4.76% success rate.
To achieve that, we need an additional 4.76% chance of success on top of the P-1 already done.
Because much of the existing P-1 was done to fairly high bounds, we need to go that much higher yet. What you suggested is about right.
Those bounds require close to 14 GhzDays per assignment; over 2100 exponents = 29,400!!!!
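
For what it's worth, a back-of-envelope check of the "over 100 factors" figure in Python, using the usual heuristic that TF from 2^b to 2^(b+1) factors roughly 1 in b of the surviving candidates (2206 is just the 206-over-2000 count; my arithmetic, not an official estimate):

[CODE]
# Rough expected TF yield using the ~1/b heuristic: TF from 2^b to
# 2^(b+1) factors about 1 in b of the surviving candidates.
n = 2206.0                # unfactored exponents in 15.5M (206 over 2000)
found = 0.0
for b in range(69, 73):   # the four bit levels 69->70 ... 72->73
    found += n / b
    n -= n / b
print(round(found))       # ~123, consistent with "over 100"
[/CODE]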

This might be a range that needs another bit or 2 of TF.

Maybe ECM could help here.
Or maybe someone will get a Quantum computer. :)

axn 2021-03-26 04:11

3.8M done
 
After about 3.5 months of P-1 (covering about 80% of the range), and some supplemental TF, 3.8M is done. This was the second-hardest range available in the 3xM ranges, after 3.6M. After the current batch of P-1 (to be completed in about 24 hrs), I will be moving on to the 3.4M range. Hopefully, at this rate, the entirety of the 3xM range will be completed before year end.

petrw1 2021-03-26 04:34

[QUOTE=axn;574561]After about 3.5 months of P-1 (covering about 80% of the range), and some supplemental TF, 3.8M is done. This was the second hardest range available in the 3M range after 3.6M. After the current batch of P-1 (to be completed in about 24 hrs), I will be moving on to 3.4M range. Hopefully, at this rate, the entirety of 3m range will be completed before the year end.[/QUOTE]

Well done.
:chalsall:

masser 2021-03-28 01:57

Annual 14.0M check-in
 
2 Attachment(s)
One year update on 14.0M; about 2/3rds of the way there... May be done by end of 2021. Hopefully within another 12 months.

petrw1 2021-03-28 04:36

[QUOTE=masser;574647]One year update on 14.0M; about 2/3rds of the way there... May be done by end of 2021. Hopefully within another 12 months.[/QUOTE]

:nick::spot::batalov::anonymous:

ATH 2021-03-28 10:58

ETA for the main 20M unfactored project based on numbers of factors found last 30 days and last 365 days:
July 2022 - February 2023

On Oct 1st 2020 the same ETAs were: January 2022 - March 2023

axn 2021-03-28 11:49

[QUOTE=ATH;574665]ETA for the main 20M unfactored project based on numbers of factors found last 30 days and last 365 days:
July 2022 - February 2023

On Oct 1st 2020 the same ETAs were: January 2022 - March 2023[/QUOTE]

This will slow down further. The main source of the factors is TF, the bulk of which is being done by SRBase. They are going thru the range 100M-1B, 1 bit at a time (currently at mid 500M range and 72-73). The factor finding rate will increase till they complete 1B to 73, and then will drop off big time when they start 100M at 73-74.

You will get a much more accurate estimate, if you try to model this TF wave more accurately.

EDIT:-

Some crude modeling:

72-73 will complete in the next 14 weeks and find 113k factors
73-74 will complete in another 99 weeks and find 228k factors
74-75 will complete in another 224 weeks and find 237k factors

This should complete the range (with other factors coming in from elsewhere). That is 6.5 years. If SRBase doubles their compute power, this could be done in 3 years.

petrw1 2021-03-28 14:31

[QUOTE=axn;574667]This should complete the range (with other factors coming in from elsewhere). That is 6.5 years. [/QUOTE]

Cool, that is pretty close to how long I think it might take us to complete the low end; under 50M now.

ATH 2021-03-28 15:06

2 Attachment(s)
[QUOTE=axn;574667]This will slow down further. The main source of the factors is TF, the bulk of which is being done by SRBase. They are going thru the range 100M-1B, 1 bit at a time (currently at mid 500M range and 72-73). The factor finding rate will increase till they complete 1B to 73, and then will drop off big time when they start 100M at 73-74.

You will get a much more accurate estimate, if you try to model this TF wave more accurately.[/QUOTE]

Here are some graphs of the number of factors found, which show the waves you are talking about, which started in April 2020.

Each point on the graph is the number of factors found during the 30 days up to that date. The huge spike in 2009 is after PrimeNet v5 was released (in Oct 2008?) and the range 79.3M - 1000M opened up.

petrw1 2021-03-28 15:31

[QUOTE=ATH;574681]Here are some graphs of the numbers factors found which shows the waves you are talking about which started April 2020.

Each point on the graph is the number of factors found during the last 30 days up to that date. The huge spike in 2009 is after primenet v5 was released (in Oct 2008?) and the range 79.3M - 1000M opened up.[/QUOTE]

I believe the mini spikes in 2020 are as srbase moves from 100M to 999M in each bit range.

axn 2021-03-29 01:24

[QUOTE=petrw1;574682]I believe the mini spikes in 2020 are as srbase moves from 100M to 999M in each bit range.[/QUOTE]

Yep. The first one is the 70-71 pass, the second the 71-72 pass, and the third one (yet to peak) the 72-73 pass. These have the classic sawtooth pattern -- slowly rising, then dropping suddenly. Each sawtooth will have twice the width of the previous and probably half the height, from now on.

alpertron 2021-03-29 12:14

The speed of srbase depends also on how many users donate GPU time to the trial factoring project.

According to [url]https://srbase.my-firewall.org/sr5/server_status.php[/url] , there are 121 active users, double the number of users from when the project was working on the 72-bit range.

axn 2021-03-29 12:50

Yep. It is impossible to model that, but I used the throughput from the last 4 weeks for my projection.

pinhodecarlos 2021-03-29 17:36

Expect more users during the Formula BOINC challenge (dates yet to be known), plus hopefully it gets chosen as one of the BOINC decathlon disciplines.

lycorn 2021-04-02 10:28

I'm back at work in the 16M range, TFing from 69 to 70 bits.
It's a pity we can't reserve TF work for these ranges using the Manual GPU Reservation form, in order to reduce the chance of toe-stepping, but it seems actually impossible: the server gives a message indicating no assignments are available...
More precisely, I'm currently working the 16.58 and 16.59 ranges (~300 exponents). Will keep you posted on the progress.

petrw1 2021-04-04 01:48

I have an offline PC doing P-1 on 35.3M. It could be a few weeks before I upload the results.

lycorn 2021-04-05 12:19

Have now taken the whole 16.5xM range (just over 1000 exponents to go). Factors are slowly popping up: 12 so far.

petrw1 2021-04-06 04:24

I have completed aggressive P-1 for the 4xM ranges ... TF anyone?
 
There are only 3 ranges remaining in the 4xMillions but they are stubborn.

42.6: 47 remaining
48.4: 49
49.6: 49

I have done aggressive P-1 on these 3 ranges.
Every exponent is P-1'd to at least a 3.5% factor rate; many to a 5.25% factor rate.
I estimate that continuing P-1 would require about 500 GhzDays per P-1 factor.

Therefore, I am suggesting that the preferred next step is TF with a nice GPU farm.
They are currently factored to 74 bits.
To complete these ranges will require full TF to 76 bits, then about half the exponents to 77 bits; a total of about 2.5M GhzDays of TF.

Thanks for everyone's help.

firejuggler 2021-04-19 09:42

I am nearly done with the 27.1M range (still got a dozen exponents left). As per petrw1's suggestion I'll take the 23.5M range next.
As for stats, I have done around 586 tests and found 39 factors, which is about a 6.65% rate.

Prime95 2021-04-21 00:47

New tool
 
Don't get too excited, but....

Prime95 30.6 has P+1 factoring available. Maybe this will help a tiny bit with some of the more stubborn ranges. See [url]https://www.mersenneforum.org/showpost.php?p=576305&postcount=150[/url]

petrw1 2021-04-21 14:27

[QUOTE=Prime95;576307]Don't get too excited, but....

Prime95 30.6 has P+1 factoring available. Maybe this will help a tiny bit with some of the more stubborn ranges. See [url]https://www.mersenneforum.org/showpost.php?p=576305&postcount=150[/url][/QUOTE]

Wow that didn't take long.

I read the posts last week discussing it and asking for it.

Thanks

chalsall 2021-04-21 14:31

[QUOTE=petrw1;576350]Wow that didn't take long.[/QUOTE]

Indeed. Thanks George!

I don't have many human cycles at the moment, but I do have a few CPU cycles to burn... Could anyone give me an example of a P+1 worktodo line that would be appropriate for the <2K sub-sub-sub project?

Also, how can we tell what's been "worked", so we don't duplicate other's efforts?

petrw1 2021-04-21 14:52

More on P+1
 
George: Your post talks about choosing B1 based on the current ECM B1.
For my case, do you have any recommendations for choosing the P+1 B1 based on how much P-1 has already been done?

I am tempted to assume that P+1 will find different factors than P-1, which makes me hopeful that even smallish B1/B2 will have reasonable success. Am I totally out to lunch?

Thanks

axn 2021-04-21 15:32

[QUOTE=chalsall;576352]I don't have many human cycles at the moment, but I do have a few CPU cycles to burn... Could anyone give me an example of a P+1 worktodo line that would be appropriate for the <2K sub-sub-sub project?[/quote]
How many assignments and what range?

[QUOTE=chalsall;576352]Also, how can we tell what's been "worked", so we don't duplicate other's efforts?[/QUOTE]
Since this is a brand new work type, nothing has been worked yet. But all reported results should show up under the exponent's history (in theory). I don't know if this information will be readily available anywhere else.

petrw1 2021-04-21 16:09

[QUOTE=chalsall;576352]Indeed. Thanks George!

I don't have many human cycles at the moment, but I do have a few CPU cycles to burn... Could anyone give me an example of a P+1 worktodo line that would be appropriate for the <2K sub-sub-sub project?
[/QUOTE]

From George's post:

[CODE]P+1 factoring. A worktodo.txt entry looks like this:
Pplus1=k,b,n,c,B1,B2,nth_run[,how_far_factored][,"known_factors"]
Unlike P-1, the fact that factors of Mersenne numbers are 1 mod 2p is of no value.
Thus, P-1 is vastly more effective at finding factors. A P+1 run is about as
valuable as running one ECM curve. P+1 stage 1 is 50% slower than P-1 stage 1
but several times faster than ECM stage 1. P+1 stage 2 is a little faster than
P-1 stage 2 which in turn is a little faster than ECM stage 2.
Unlike P-1, P+1 has only a 50% chance of finding a factor if factor+1 is B1/B2 smooth.
Thus, it makes sense to do 1 or 2 (maybe 3) runs. That is what the nth_run argument is for.
There are two special starting values for P+1 that have a slightly higher chance of
finding a factor. These special starting values correspond to nth_run=1 and nth_run=2.
Like P-1, if how_far_factored is specified, prime95 will ignore B2 and calculate the
best B2 value for the given B1.[/CODE]

and:

[CODE]My thoughts are to choose B1 well above the current B1's being handed out by the server for ECM. For the few exponents I tried in the 4.7M area, ECM is presently being done at B1=50K, I chose P+1 with B1=1M.[/CODE]
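
Putting those two quotes together, a worktodo line might look like the following (my own hypothetical sketch: the exponent is a placeholder to fill in, the leading N/A assignment-ID field is assumed to work as in the Pminus1 lines earlier in this thread, nth_run=1 picks the first special starting value, and the trailing 74 (how_far_factored) makes prime95 ignore the given B2 and pick its own):

[CODE]Pplus1=N/A,1,2,<exponent>,-1,1000000,30000000,1,74[/CODE]

A second attempt on the same exponent would repeat the line with nth_run=2 to use the other special starting value.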

chalsall 2021-04-21 16:11

[QUOTE=axn;576370]Ho many assignments and what range?[/QUOTE]

Let's run the experiment in the 31.5M range. If you could give me, let's say, 300 assignments, we'd get a reasonable sample set. Please make them reasonably aggressive (read: good probability of a positive, but not stupidly expensive).

[QUOTE=axn;576370]Since this is brand new work type, nothing has been worked as of yet. But all reported results should show up under the exponent's history (in theory). I don't know if this information will be readily available anywhere else.[/QUOTE]

Yeah... I was more wondering about an example of an already completed run, so I know what my spiders need to make friends with.

I guess for this use case, we'll just coordinate here (or another sub-thread).

Thanks for your help getting an initial test batch defined.

petrw1 2021-04-21 16:16

[QUOTE=chalsall;576380]Let's run the experiment in the 31.5M range.[/QUOTE]

At the present time -Anon- is TF'ing 31.5M.
Though he may pause if he reads this thread.

