#122
"Alexander"
Nov 2008
The Alamo City
2²·5²·7 Posts
I know. I meant which one. If it weren't for that winter storm that ravaged Texas, I'd probably have reached 27th place on the all-time PRP-CF-DC rankings. If it counted toward that, I might be able to justify the extra three or so hours of work on my laptop to run a type 5.
#123
"Viliam Furík"
Jul 2018
Martin, Slovakia
615₁₀ Posts
In that case, I am not sure... Both seem logical, for different reasons, but my guess would be DC since it already has some results.
I've checked my results from CF type 5 after a newly found factor (there were already 4 factors; I found the 5th). PrimeNet registered them as double-checks because there were already results with fewer known factors. So most probably DC.
#124
Dec 2002
1100101111₂ Posts
I still use 30.4 and have found a reproducible way to cause a segmentation fault resulting in a core dump. I don't know whether 30.5 fixes this; I can try, but not this week.
- Add this line to worktodo.txt:
Code:
Pminus1=1,2,9221683,-1,1080000,21600000,69
- Press ^C whilst in stage 2, well after the init stage.
- Choose option 5: quit mprime.
- Change the line in worktodo.txt to:
Code:
Pminus1=1,2,9221683,-1,2000000,21600000,69
Result:
Code:
[Work thread Mar 18 08:45] M9221683 stage 1 complete. 3931780 transforms. Time: 873.316 sec.
[Work thread Mar 18 08:45] With trial factoring done to 2^69, optimal B2 is 81*B1 = 162000000.
[Work thread Mar 18 08:45] If no prior P-1, chance of a new factor is 8.42%
Segmentation fault (core dumped)
henk@Z170:~/mersenne$ ./mprime -m
Renaming the save file solves the issue:
Code:
mv m9221683 copy_of_m9221683
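For readers unfamiliar with the entries above: the fields of a Pminus1 line can be read as k, b, n, c, B1, B2, and trial-factoring depth, so the crashing entry describes P-1 on M9221683 (k·bⁿ+c = 1·2^9221683−1) with B1=1,080,000, B2=21,600,000, TF done to 2^69. A minimal sketch of that reading (my interpretation of the entries in this thread, not an official format spec):

```python
def parse_pminus1(line):
    """Split a worktodo.txt Pminus1 entry into named fields.

    Field order (k, b, n, c, B1, B2, TF bits) is my reading of the
    entries quoted in this thread, not an official specification.
    """
    fields = line.split("=", 1)[1].split(",")
    k, b, n, c, b1, b2, tf = (int(f) for f in fields)
    return {"k": k, "b": b, "n": n, "c": c, "B1": b1, "B2": b2, "tf_bits": tf}

entry = parse_pminus1("Pminus1=1,2,9221683,-1,1080000,21600000,69")
print(entry["n"], entry["B1"], entry["B2"])  # → 9221683 1080000 21600000
```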
#125
Dec 2002
5·163 Posts
#126
P90 years forever!
Aug 2002
Yeehaw, FL
5·11·137 Posts
#127
P90 years forever!
Aug 2002
Yeehaw, FL
5×11×137 Posts
30.5 build 2 is available. It fixes the one reported bug.
#128
"Oliver"
Sep 2017
Porta Westfalica, DE
218₁₆ Posts
When getting P-1 assignments from PrimeNet, is it expected that they do not trigger the special B2 selection? Example: M103163903. With Pfactor I'm getting B1=819,000, B2=39,053,000. When I manually edited worktodo.txt to the new Pminus1 format with the B1 from above, I got B2=51*B1=41,769,000. The values are close but not identical; I have not yet tried other amounts of RAM, and I would assume the B2's can differ much more with other settings.
It occurred to me that stage 2 takes more than double the time of stage 1 in my case, given the bounds above. The system is an AMD 3800X with 32 GB of 3,600 MHz RAM, 16 GB allocated to Prime95, version 30.5b2, only one worker using eight cores. Is this also intentional? I always thought optimal P-1 (time-wise) was to spend equal time on both stages.
I understand that it is impossible to estimate this in code for all the different hardware out there, but would it be possible to have a parameter for manipulating B2? E.g. Stage2EffortFactor: 1 would be Prime95's fully automatic selection, and 0.5 would result in a B2 such that stage 2 takes around half the time. With that value, one could increase B1 and lower B2 (in my personal case) so that the most efficient work is done per unit of time, and the value could then also influence the optimal B1 and B2 selection. Of course, this only makes sense if my assumption is correct that equal time should be spent on both stages.
Last fiddled with by kruoli on 2021-03-23 at 14:24. Reason: Corrected numbers.
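To make the proposal above concrete, here is a minimal sketch of how a Stage2EffortFactor knob might scale the automatically chosen B2. The parameter name is the poster's suggestion, not an existing Prime95 option, and the linear scaling of the stage 2 range is a simplification (stage 2 time is not exactly linear in B2 − B1):

```python
def scaled_b2(auto_b2, b1, effort_factor):
    """Scale the automatically chosen B2 toward B1.

    effort_factor = 1.0 keeps the automatic choice; 0.5 halves the
    stage 2 range (B2 - B1). Hypothetical sketch of the proposed
    Stage2EffortFactor option, not actual Prime95 behavior.
    """
    return int(b1 + effort_factor * (auto_b2 - b1))

# With the bounds from the post above (B1=819,000, B2=41,769,000):
print(scaled_b2(41_769_000, 819_000, 0.5))  # → 21294000
```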
#129
P90 years forever!
Aug 2002
Yeehaw, FL
5·11·137 Posts
#130
Mar 2011
2⁴ Posts
#131
Jun 2003
2×3×7×11² Posts
George, would it be possible to reduce the size of the P-1 stage 2 checkpoint files? Using this with Colab / Google Drive, stopping and restarting during stage 2 takes a very long time; it writes a 100-200 MB save file. I am guessing it is somehow saving the stage 2 prime bitmap or something? I think it would be faster to recompute the state than to load it from disk (with Google Drive).
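For scale, a rough back-of-envelope estimate of how large a simple odd-only prime bitmap for a stage 2 range would be, using the B1/B2 values quoted earlier in the thread. The one-bit-per-odd-number layout is an assumption for illustration, not Prime95's actual save format; if the bitmap really is this small, the bulk of the 100-200 MB file is presumably something else, such as stage 2 residue buffers:

```python
def bitmap_mb(b1, b2):
    """Megabytes for a bitmap with one bit per odd number in (B1, B2].

    Assumed layout for a rough size estimate only; not Prime95's
    actual checkpoint format.
    """
    odd_count = (b2 - b1) // 2
    return odd_count / 8 / 1e6

# Bounds from the M103163903 example earlier in the thread:
print(round(bitmap_mb(819_000, 39_053_000), 1))  # → 2.4
```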
#132
"James Heinrich"
May 2004
ex-Northern Ontario
D5D₁₆ Posts
Conversely, on a local system it works quickly. Any such change should probably be an optional code path for special use cases and not affect the majority of users who don't read/write from Drive.