mersenneforum.org Large range of exponents with small P-1 bounds

 2021-07-13, 01:19 #1 JuanTutors     "Juan Tutors" Mar 2004 22F16 Posts Large range of exponents with small P-1 bounds I've been working on a Mersenne number in the 100M digit range which has had P-1 done with very small bounds, specifically B1=100000, B2=1000000. I got curious and found that there are actually a lot of exponents in this range that have had P-1 done to these small bounds. https://www.mersenne.org/report_exponent/?exp_lo=332370583&exp_hi=332375333&full=1 In these kinds of cases, should P95 be programmed to re-do P-1 when large amounts of RAM are reserved?
2021-07-13, 03:20   #2
Prime95
P90 years forever!

Aug 2002
Yeehaw, FL

1E0816 Posts

Quote:
 Originally Posted by JuanTutors In these kinds of cases, should P95 be programmed to re-do P-1 when large amounts of RAM are reserved?
Yes.

2021-07-13, 03:28   #3
JuanTutors

"Juan Tutors"
Mar 2004

10578 Posts

Quote:
 Originally Posted by Prime95 Yes.
Does that mean that that feature is in the pipeline? Note the link I included. There are a bunch of examples in that range.

 2021-07-13, 05:48 #4 Prime95 P90 years forever!     Aug 2002 Yeehaw, FL 1E0816 Posts A little server work is required. P-1 bounds of 200K or more are now required to have prime95 skip P-1.
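The server-side rule George describes can be sketched as a one-line predicate. This is an illustrative toy, not prime95 or PrimeNet source; the 200K threshold is the figure from the post, and the real logic may consider B2 and other factors as well.

```python
# Toy version of the rule: prior P-1 runs with B1 below 200K no longer
# count as "done", so prime95 will redo P-1 rather than skip it.
MIN_B1 = 200_000  # threshold quoted in the post; actual server logic may differ

def needs_new_p1(prior_b1: int) -> bool:
    """Return True if P-1 should be redone under the 200K-bound rule."""
    return prior_b1 < MIN_B1

# The exponents JuanTutors linked were run with B1=100000, B2=1000000:
print(needs_new_p1(100_000))  # True: bounds too small, redo P-1
print(needs_new_p1(500_000))  # False: prime95 skips P-1
```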
 2021-07-19, 08:12 #5 drkirkby   "David Kirkby" Jan 2021 Althorne, Essex, UK 44810 Posts I don't know if it's considered impolite to approach a user and see if they want to do a specific task. But user Tha has done 24219 GHz-days this year, with 89% of that on P-1 factoring, so it might be a reasonable assumption that P-1 factoring is their preferred work type.

Related to this, are there laws of diminishing returns in allocating a lot of RAM to P-1 factoring? I'm using 4 workers for maximum throughput, and can give P-1 factoring 360 GB RAM. This leaves a few possibilities:

- Let each of the 4 workers use a maximum of 360/4 = 90 GB RAM, which ensures no worker will ever stall waiting for RAM, but will always slow the P-1 factoring, as it has less RAM than it would like.
- Don't limit the RAM at all, which means the P-1 factoring will take place as quickly as possible, but other workers could be stalled waiting for RAM.
- I'm currently limiting each worker to half the RAM, allowing two workers to do P-1 factoring at the same time, on the assumption that the probability of more than 2 workers doing P-1 factoring simultaneously is remote.

I'm currently finding that exponents around 105 million are usually trial factored to 2^76, and P-1 factoring will use a maximum of about 300 GB RAM. But some exponents, even around 105 million, have been trial factored beyond 2^76 by people with GPUs, so are likely to have P-1 factoring done to larger bounds, and so even one P-1 factoring task could probably use all my RAM. What would seem the best RAM allocation strategy in my circumstances?

Last fiddled with by drkirkby on 2021-07-19 at 08:21
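For reference, the first option (a fixed per-worker cap) is expressed in prime95's local.txt roughly like the fragment below. This is an illustrative sketch, not taken from the post; memory values are in MB, and the exact syntax and section names should be checked against your prime95 version's readme.

```
; Illustrative local.txt fragment: cap each of 4 workers at 90 GB
; so no worker ever stalls waiting for RAM.
Memory=368640          ; total RAM prime95 may use, in MB (360 GB)

[Worker #1]
Memory=92160           ; 90 GB cap for this worker
[Worker #2]
Memory=92160
[Worker #3]
Memory=92160
[Worker #4]
Memory=92160
```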
2021-07-19, 08:20   #6
axn

Jun 2003

22·3·433 Posts

Quote:
 Originally Posted by drkirkby Related to this, are there laws of diminishing returns in allocating a lot of RAM to P-1 factoring?
Yes, there is.

It would be good if you can prevent more than one worker from entering stage 2 simultaneously, but if it is too much hassle, dividing the memory evenly between workers will work fine. You're probably in the top 0.1 percentile in terms of RAM allocation. Typical dedicated P-1'er might give 8-16 GB RAM. Compared to that, even 90GB is ginormous.

Last fiddled with by axn on 2021-07-19 at 08:21

2021-07-19, 12:30   #7
drkirkby

"David Kirkby"
Jan 2021
Althorne, Essex, UK

7008 Posts

Quote:
 Originally Posted by drkirkby Related to this, are there laws of diminishing returns in allocating a lot of RAM to P-1 factoring?
Quote:
 Originally Posted by axn Yes, there is. It would be good if you can prevent more than one worker from entering stage 2 simultaneously, but if it is too much hassle, dividing the memory evenly between workers will work fine. You're probably in the top 0.1 percentile in terms of RAM allocation. Typical dedicated P-1'er might give 8-16 GB RAM. Compared to that, even 90GB is ginormous.
Thank you. I rather suspected there would be a law of diminishing returns.

I think I will be able to prevent more than one worker from being in stage 2 of P-1 factoring at the same time, but I'm not 100% sure yet. I managed it okay when running two workers, but it might be a bit more tricky with 4 workers. I think it will be possible though.

Currently at least, I'm only finding I need to do P-1 factoring on about 10% of the category 0 or 1 exponents, and given the factoring takes only about 5% of the time of the main PRP test, the chances of two randomly executing tests being in the 2nd stage of P-1 at the same time are slim (around 0.5%), and with a bit of care, that probability can be reduced further. (On slower machines, where I get category 4 exponents, the computer has to do P-1 factoring almost every time, but only one exponent gets tested at a time, so it's not an issue.)

Last fiddled with by drkirkby on 2021-07-19 at 13:29
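The back-of-envelope estimate above can be reproduced as follows. The inputs are the figures quoted in the post (10% of exponents needing P-1, P-1 taking ~5% of a PRP test); the model, independent workers sampled at a random instant, is an assumption. Note the ~0.5% figure matches the per-worker probability; the chance of an actual overlap between workers is smaller still.

```python
# Back-of-envelope estimate of two workers being in P-1 simultaneously,
# using the figures quoted in the post (the independence model is assumed).
from math import comb

p1_fraction = 0.10   # ~10% of category 0/1 exponents still need P-1
p1_time     = 0.05   # P-1 takes ~5% of the time of the main PRP test
p_worker = p1_fraction * p1_time   # chance a worker is in P-1 right now: 0.005

# Probability that at least 2 of 4 independent workers are doing P-1
# at the same instant (binomial tail):
p_overlap = sum(comb(4, k) * p_worker**k * (1 - p_worker)**(4 - k)
                for k in range(2, 5))
print(f"per worker: {p_worker:.4f}, overlap of >=2 workers: {p_overlap:.2e}")
```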

 2021-07-19, 12:50 #8 JuanTutors     "Juan Tutors" Mar 2004 13×43 Posts Isn't P95 programmed to give each worker half the RAM in cases where they are both running stage 2? Couldn't you just let it run and let it adjust for you?
2021-07-19, 13:10   #9
axn

Jun 2003

22×3×433 Posts

Quote:
 Originally Posted by JuanTutors Isn't P95 programmed to give each worker half the ram in cases where they are both running stage 2? Couldn't you just let it run and let it adjust for you?
It is. However, there is a catch. When one worker starts stage 2, it will grab all available memory. When another one enters stage 2, the first worker will stop & restart with reduced memory so that the second worker can proceed. That stop & restart alone will wipe out any potential gains you get from increased memory. You're better off explicitly allocating half the memory to each worker in the first place.
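The explicit even split axn recommends looks roughly like this in local.txt. This fragment is illustrative only (the 32 GB total is an invented example, and syntax may vary by prime95 version): each worker gets a fixed half up front, so no stage-2 stop & restart is ever needed to rebalance memory.

```
; Illustrative local.txt fragment: fixed half-shares for two workers,
; avoiding the stage-2 stop & restart described above.
[Worker #1]
Memory=16384    ; 16 GB, fixed
[Worker #2]
Memory=16384    ; 16 GB, fixed
```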

2021-07-19, 15:04   #10
chalsall
If I May

"Chris Halsall"
Sep 2002

26·157 Posts

Quote:
 Originally Posted by Prime95 A little server work is required. P-1 bounds of 200K or more are now required to have prime95 skip P-1.
I'm not quite sure why this is in the GPU to 72 sub-forum, but...

George, does this explain why there are several candidates listed between 94M and 103M that need a P-1, but don't appear to be getting assigned?

I've been doing some cleanup over the last year or so of candidates that had an FC run without a P-1 job done. But I can't figure out the criteria for these candidates.

Any hints as to how I can determine what Primenet is using for that column? Thanks.

2021-07-19, 16:07   #11
ixfd64
Bemusing Prompter

"Danny"
Dec 2002
California

23×3×101 Posts

Quote:
 Originally Posted by chalsall I'm not quite sure why this is in the GPU to 72 sub-forum, but...
[mod note] I've moved it to the PrimeNet forum.
