mersenneforum.org Prime95 v30.4/30.5/30.6

2021-03-13, 20:15   #122
Happy5214

"Alexander"
Nov 2008
The Alamo City

3×7×29 Posts

Quote:
 Originally Posted by Viliam Furik I think you would get a credit.
I know. I meant which one. If it weren't for that winter storm that ravaged Texas, I'd have probably reached 27th place on the all-time PRP-CF-DC rankings. If it counted toward that, I might be able to justify the extra 3 or so hours of work on my laptop to run a type-5.

2021-03-13, 20:56   #123
Viliam Furik

"Viliam Furík"
Jul 2018
Martin, Slovakia

2⁴×29 Posts

In that case, I am not sure... Both seem logical, for different reasons, but my guess would be DC since it already has some results.

I've checked my results from CF type 5 after a newly found factor (there were already 4, I found the 5th). PrimeNet registered them as a double-check because there were already some results with fewer factors. So most probably DC.
2021-03-18, 10:19   #124
tha

Dec 2002

3×271 Posts

I still use 30.4 and found a reproducible way to cause a segmentation fault resulting in a core dump. Don't know if 30.5 fixes this; I can try, but not this week.

- Add this line to worktodo.txt:
Code:
Pminus1=1,2,9221683,-1,1080000,21600000,69
- Start mprime -m
- Press ^C whilst in stage 2, well after the init stage.
- Choose option 5: quit mprime.
- Change the line in worktodo.txt to:
Code:
Pminus1=1,2,9221683,-1,2000000,21600000,69
- Restart mprime -m

Result:
Code:
[Work thread Mar 18 08:45] M9221683 stage 1 complete. 3931780 transforms. Time: 873.316 sec.
[Work thread Mar 18 08:45] With trial factoring done to 2^69, optimal B2 is 81*B1 = 162000000.
[Work thread Mar 18 08:45] If no prior P-1, chance of a new factor is 8.42%
Segmentation fault (core dumped)
henk@Z170:~/mersenne$ ./mprime -m

Restarting mprime -m leads to the same result at the same point in execution. Renaming the save file works around the issue:
Code:
mv m9221683 copy_of_m9221683
2021-03-18, 15:47   #125
tha

Dec 2002

3×271 Posts

Quote:
 Originally Posted by tha I still use 30.4 and found a reproducible way to cause a segmentation fault resulting in a core dump. Don't know if 30.5 fixes this, I can try, but not this week.
Confirm on 30.5

2021-03-18, 21:37   #126
Prime95
P90 years forever!

Aug 2002
Yeehaw, FL

7492₁₀ Posts

Quote:
 Originally Posted by tha Confirm on 30.5
Will fix in 30.5 build 2

2021-03-21, 05:58   #127
Prime95
P90 years forever!

Aug 2002
Yeehaw, FL

2²·1,873 Posts

30.5 build 2 available. It fixes the one reported bug.
2021-03-23, 14:05   #128
kruoli

"Oliver"
Sep 2017
Porta Westfalica, DE

7×71 Posts

When getting P-1 assignments from PrimeNet, is it expected not to trigger the special B2 selection? Example: M103163903. With Pfactor I'm getting B1=819,000, B2=39,053,000. When manually editing worktodo.txt to the new Pminus1 format with the B1 from above, I got B2=51*B1=41,769,000. The values are close but not identical; I have not yet tried other amounts of RAM and would assume that the B2s can differ much more with other settings.

It occurred to me that stage 2 takes more than double the time of stage 1 in my case, given the bounds above. The system is an AMD 3800X, 32 GB of 3,600 MHz RAM, 16 GB allocated to Prime95, version 30.5b2, only one worker using eight cores. Is this also intentional? I always thought optimal P-1 (timewise) was to have equal time spent on both stages.

I understand that it is impossible to estimate this in code for all the different hardware that is out there, but would it be possible to have a parameter for manipulating the B2? E.g. Stage2EffortFactor: 1 would be Prime95's full automatic selection, 0.5 would result in a B2 such that stage 2 takes around half the time. Having that value, one could increase B1 and lower B2 (in my personal case) such that the most efficient work is done per time unit. So that value could also influence the optimal B1 and B2 selection. Of course, that only makes sense when my assumption is correct that there should be equal time spent on both stages.

Last fiddled with by kruoli on 2021-03-23 at 14:24. Reason: Corrected numbers.
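As a quick sanity check of the numbers in the post above (a hedged sketch; the bounds are copied from the post, not recomputed via Prime95's actual selection code):

```python
# B1 and the two B2 values reported above for M103163903.
b1 = 819_000
b2_pfactor = 39_053_000         # B2 chosen by the Pfactor assignment
b2_pminus1 = 51 * b1            # B2 = 51*B1 from the Pminus1= line

print(b2_pminus1)               # 41769000, matching the post
print(b2_pminus1 / b2_pfactor)  # the two choices differ by roughly 7%
```

So the discrepancy kruoli observes is real but small; whether it grows with other RAM settings is the open question.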
2021-03-23, 18:05   #129
Prime95
P90 years forever!

Aug 2002
Yeehaw, FL

1110101000100₂ Posts

Quote:
 Originally Posted by kruoli When getting P-1 assignments from PrimeNet, is it expected not to trigger the special B2 selection? Example: M103163903. With Pfactor I'm getting B1=819,000, B2=39,053,000. When manually editing worktodo.txt to the new Pminus1 format with the B1 from above, I got B2=51*B1=41,769,000. The values are close but not identical; I have not yet tried other amounts of RAM and would assume that the B2s can differ much more with other settings.
Pfactor uses slightly different optimization criteria than "Pminus1=". Pfactor is optimizing for minimizing the total time spent doing P-1, LL, and DC. That is, it is maximizing the LL/DC CPU savings per unit of P-1 work invested. "Pminus1=" is maximizing the number of factors found per unit of P-1 work invested.
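The two criteria George describes can be written as toy objective functions (an illustrative sketch only; the function names and the simple linear cost model are my assumptions, not Prime95's actual code):

```python
def pfactor_gain(prob_factor, p1_cost, test_cost, tests_saved=2.0):
    # Pfactor: maximize LL/DC CPU time saved per unit of P-1 work.
    # A found factor eliminates a first-time test plus its double-check.
    return (prob_factor * tests_saved * test_cost) / p1_cost

def pminus1_gain(prob_factor, p1_cost):
    # Pminus1=: maximize factors found per unit of P-1 work.
    return prob_factor / p1_cost
```

Because the first objective also weighs the cost of the primality tests a factor would save, while the second weighs only the P-1 time itself, the two optimizations can settle on slightly different B2 values for the same B1.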

Quote:
 It occurred to me that stage 2 takes more than double the time of stage 1 in my case, given the bounds above. The system is an AMD 3800X, 32 GB of 3,600 MHz RAM, 16 GB allocated to Prime95, version 30.5b2, only one worker using eight cores. Is this also intentional? I always thought optimal P-1 (timewise) was to have equal time spent on both stages. I understand that it is impossible to estimate this in code for all the different hardware that is out there, but would it be possible to have a parameter for manipulating the B2? E.g. Stage2EffortFactor: 1 would be Prime95's full automatic selection, 0.5 would result in a B2 such that stage 2 takes around half the time. Having that value, one could increase B1 and lower B2 (in my personal case) such that the most efficient work is done per time unit. So that value could also influence the optimal B1 and B2 selection. Of course, that only makes sense when my assumption is correct that there should be equal time spent on both stages.
The equal-time rule was a general rule of thumb that may have worked back in the day. Prime95 carefully counts every stage 1 and stage 2 transform in making its decisions.

2021-04-02, 06:43   #130
Falkentyne

Mar 2011

10000₂ Posts

Quote:
 Originally Posted by Prime95 30.5 build 2 available. It fixes the one reported bug.
Thank you for all the support and updating of Prime95! I think it's been 20 years now since I first used it...was it on a Pentium 3 Coppermine or something? My god ...

2021-04-05, 14:28   #131
axn

Jun 2003

4,969 Posts

George,

Would it be possible to reduce the size of P-1 stage 2 checkpoint files? Using this with Colab / Google Drive, it takes a very long time to stop/restart during stage 2 - it writes 100-200 MB of save file. I am guessing it is somehow saving the stage 2 prime bitmap or something? I think it would be faster to recompute the state, rather than load it from disk (with Google Drive).
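A back-of-envelope estimate suggests the bulk of the save file is likely not the prime bitmap but the stage 2 residue buffers (the exponent and the buffer count below are my assumptions for illustration, not values read out of Prime95):

```python
# Assume a wavefront-size exponent of roughly 100M bits (hypothetical).
exponent = 103_163_903

# One P-1 residue is about `exponent` bits of data.
residue_mb = exponent / 8 / 1e6          # ~12.9 MB per residue

# Stage 2 works with several precomputed residue buffers; saving
# 8-16 of them would already account for a 100-200 MB file.
print(8 * residue_mb, 16 * residue_mb)   # ~103 MB to ~206 MB

# By contrast, a bitmap of odd numbers up to a B2 around 40M is tiny.
bitmap_mb = 40_000_000 / 2 / 8 / 1e6     # ~2.5 MB
```

If residue buffers dominate, recomputing state on restart (as axn suggests) trades CPU time for I/O, which only pays off when the storage backend is as slow as Drive.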
2021-04-05, 14:32   #132
James Heinrich

"James Heinrich"
May 2004
ex-Northern Ontario

2·41² Posts

Quote:
 Originally Posted by axn I think it would be faster to recompute the state, rather than load it from disk (with google drive).
Conversely, on a local system it works quickly. Any such changes should probably be an optional code path for special use cases and not affect the majority of users who don't read/write from Drive.