#1
"Juan Tutors"
Mar 2004
569 Posts
I've been working on a Mersenne number in the 100M-digit range which has had P-1 done with very small bounds, specifically B1=100000, B2=1000000. I got curious and found that there are actually a lot of exponents in this range that have had P-1 done to these small bounds:
https://www.mersenne.org/report_exponent/?exp_lo=332370583&exp_hi=332375333&full=1
In these kinds of cases, should P95 be programmed to re-do P-1 when large amounts of RAM are reserved?
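For readers unfamiliar with why the bounds matter: stage 1 of P-1 computes base^E mod N, where E is the product of all prime powers up to B1, and reveals any factor q for which q-1 is B1-smooth; larger bounds catch more factors. A minimal sketch on a toy composite (not a real 100M-digit Mersenne number; the function names are mine, and real P-1 also has a stage 2 for one prime between B1 and B2):

```python
from math import gcd, isqrt

def primes_upto(n):
    """Simple sieve of Eratosthenes."""
    sieve = bytearray([1]) * (n + 1)
    sieve[0:2] = b"\x00\x00"
    for i in range(2, isqrt(n) + 1):
        if sieve[i]:
            sieve[i * i :: i] = b"\x00" * len(sieve[i * i :: i])
    return [i for i in range(2, n + 1) if sieve[i]]

def pm1_stage1(N, B1, base=3):
    """P-1 stage 1: gcd(base^E - 1, N), E = product of prime powers <= B1."""
    x = base
    for p in primes_upto(B1):
        e = 1
        while p ** (e + 1) <= B1:  # highest power of p not exceeding B1
            e += 1
        x = pow(x, p ** e, N)
    return gcd(x - 1, N)

# 2341 - 1 = 2^2 * 3^2 * 5 * 13 is 13-smooth, so B1 = 13 finds 2341;
# 2027 - 1 = 2 * 1013 is not smooth, so 2027 stays hidden either way.
N = 2341 * 2027
print(pm1_stage1(N, 13))  # -> 2341
print(pm1_stage1(N, 11))  # -> 1 (13 is no longer included in E)
```

With B1 too small, the smooth part of q-1 is missed entirely, which is exactly the worry with B1=100000 on 100M-digit candidates.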
#2
P90 years forever!
Aug 2002
Yeehaw, FL
2²·5·397 Posts
#3
"Juan Tutors"
Mar 2004
569 Posts
#4
P90 years forever!
Aug 2002
Yeehaw, FL
2²·5·397 Posts
A little server work is required. P-1 bounds of 200K or more are now needed for prime95 to skip P-1.
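For anyone who wants to force a deeper run by hand rather than wait for the server, a worktodo.txt entry can request P-1 to explicit bounds. A sketch, assuming the standard Pminus1 entry format (the exponent is from the range linked in post #1; the bounds are purely illustrative, not a recommendation):

```
; worktodo.txt -- Pminus1=k,b,n,c,B1,B2 requests P-1 on k*b^n+c
Pminus1=1,2,332370583,-1,800000,30000000
```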
#5
"David Kirkby"
Jan 2021
Althorne, Essex, UK
701₈ Posts
I don't know if it's considered impolite to approach a user and ask if they want to do a specific task. But user Tha has done 24219 GHz-days this year, with 89% of that on P-1 factoring, so it seems a reasonable assumption that P-1 factoring is their preferred work type.
Related to this, is there a law of diminishing returns in allocating a lot of RAM to P-1 factoring? I'm using 4 workers for maximum throughput, and can give P-1 factoring 360 GB of RAM. This leaves a few possibilities.
I'm currently finding that exponents around 105 million are usually trial factored to 2^76, and P-1 factoring will use a maximum of about 300 GB of RAM. But some exponents, even around 105 million, have been trial factored beyond 2^76 by people with GPUs, so they are likely to need P-1 factoring done to larger bounds, and even one such P-1 task could probably use all my RAM. What would be the best RAM allocation strategy in my circumstances?
Last fiddled with by drkirkby on 2021-07-19 at 08:21
#6
Jun 2003
12412₈ Posts
Quote:
It would be good if you can prevent more than one worker from entering stage 2 simultaneously, but if it is too much hassle, dividing the memory evenly between the workers will work fine. You're probably in the top 0.1 percentile in terms of RAM allocation. A typical dedicated P-1'er might give 8-16 GB of RAM. Compared to that, even 90 GB is ginormous.
Last fiddled with by axn on 2021-07-19 at 08:21
#7
"David Kirkby"
Jan 2021
Althorne, Essex, UK
449 Posts
Quote:
I think I will be able to prevent more than one worker from being in stage 2 of P-1 factoring at the same time, but I'm not 100% sure yet. I managed it okay when running two workers, but it might be a bit more tricky with 4 workers. I think it will be possible though. Currently at least, I'm only finding I need to do P-1 factoring on about 10% of the category 0 or 1 exponents, and given the factoring takes only about 5% of the time of the main PRP test, the chances of two randomly executing tests being in the 2nd stage of P-1 at the same time are slim (around 0.5%), and with a bit of care, that probability can be reduced further. (On slower machines, where I get category 4 exponents, the computer has to do P-1 factoring almost every time, but only one exponent gets tested at a time, so it's not an issue.)
Last fiddled with by drkirkby on 2021-07-19 at 13:29
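Those percentages can be turned into a quick estimate. A back-of-envelope sketch under the (strong) assumption that the workers progress independently; it suggests the ~0.5% figure is the fraction of time one worker spends in P-1 at all, with a simultaneous pair being rarer still:

```python
from math import comb, isclose

# Figures quoted in the post above.
frac_needing_pm1 = 0.10  # ~10% of category 0/1 exponents still need P-1
pm1_time_frac = 0.05     # P-1 takes ~5% of the time of the main PRP test

# Fraction of wall-clock time a single worker spends in P-1 at all
# (stage 2 is only part of that, so the true figure is smaller).
duty = frac_needing_pm1 * pm1_time_frac  # ~0.005, i.e. the ~0.5% quoted

# Chance that a specific pair of workers are in P-1 at the same instant.
pair_overlap = duty ** 2

# With 4 workers there are comb(4, 2) = 6 pairs that could collide.
workers = 4
any_pair = comb(workers, 2) * pair_overlap

print(f"duty={duty:.4f}  pair={pair_overlap:.1e}  any pair={any_pair:.1e}")
```

On these numbers a stage-2 collision is well under 0.01% of the time, so "slim" is, if anything, an understatement.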
#8
"Juan Tutors"
Mar 2004
569₁₀ Posts
Isn't P95 programmed to give each worker half the RAM in cases where they are both running stage 2? Couldn't you just let it run and let it adjust for you?
#9
Jun 2003
2×2,693 Posts
It is. However, there is a catch. When one worker starts stage 2, it will grab all available memory. When another one enters stage 2, the first worker will stop and restart with reduced memory so that the second worker can proceed. That stop-and-restart alone will wipe out any potential gains you get from the increased memory. You're better off explicitly allocating half the memory to both workers in the first place.
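A minimal local.txt sketch of that explicit split for the 4-worker, 360 GB machine discussed above (values in MB and purely illustrative; the per-worker override syntax is my assumption, so check your prime95 version's undoc.txt before copying):

```
; local.txt -- give each worker a fixed share instead of letting
; the first stage 2 grab everything
Memory=368640            ; global cap, ~360 GB

[Worker #1]
Memory=92160             ; ~90 GB each with 4 workers
[Worker #2]
Memory=92160
[Worker #3]
Memory=92160
[Worker #4]
Memory=92160
```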
#10
If I May
"Chris Halsall"
Sep 2002
Barbados
294F₁₆ Posts
Quote:
George, does this explain why there are several candidates listed between 94M and 103M that need a P-1, but don't appear to be getting assigned? I've been doing some cleanup over the last year or so of candidates that had an FC run without a P-1 job done. But I can't figure out the criteria for these candidates. Any hints as to how I can determine what Primenet is using for that column? Thanks.
#11
Bemusing Prompter
"Danny"
Dec 2002
California
2³×3×103 Posts
Similar Threads

| Thread | Thread Starter | Forum | Replies | Last Post |
|---|---|---|---|---|
| P-1 on small exponents | markr | PrimeNet | 18 | 2009-08-23 17:23 |
| Large small factor | Zeta-Flux | Factoring | 96 | 2007-05-14 16:59 |
| Problems with Large FFT but not Small FFT's? | RichTJ99 | Hardware | 2 | 2006-02-08 23:38 |
| Small range with high density of factors | hbock | Lone Mersenne Hunters | 1 | 2004-03-07 19:51 |
| Small win32 program, range of time to do a TF | dsouza123 | Programming | 1 | 2003-10-09 16:04 |