Thread: P-1 memory
2018-05-14, 12:11   #2
kriesel
Quote:
Originally Posted by ET_ View Post
I am testing an iMac with 32 GB of RAM, a 4-core Ivy Bridge CPU, and Windows 10 installed.

I gave 8 GB of RAM to mprime to run P-1 work on M87,656,xxx: mprime takes the 8 GB and starts stage 2 with E=6 (I suppose that selects Brent–Suyama extensions). Stage 2 runs in one single pass of 192 relative primes.

Giving more RAM (say 12 or 24 GB) to P-1 would result in:
a) A quicker stage 2?
b) A slower stage 2 with a better chance of finding factors?

I suppose b) is the correct answer, but I'm asking here to be sure.

I also assume the same holds for ECM (especially for large Fermat numbers).
My guess is generally a). Lots of memory allows stage 2 to be completed in fewer passes by increasing the number of relative primes processed per pass. I'm used to seeing CUDAPm1 run with a total of 480 or 960 relative primes. There, diminishing returns set in between 4 GB and 8 GB for similar exponents; I'm not sure why.

But since you're already running stage 2 in a single pass covering all 192 relative primes, as stated, the answer would seem to be c) no difference. Not to worry: the additional memory will become useful at higher exponents, or you could run multiple P-1 instances.
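The memory/pass tradeoff above can be sketched with a toy model (this is illustrative only; the function name and the simple ceiling formula are my own, not mprime's or CUDAPm1's actual bound-selection logic):

```python
from math import ceil

def stage2_passes(total_rel_primes, rel_primes_per_pass):
    """Passes needed for stage 2 to cover all relative primes,
    when available memory limits how many fit in one pass."""
    return ceil(total_rel_primes / rel_primes_per_pass)

# The case in the question: 192 relative primes, and enough RAM to
# hold all 192 at once -- already a single pass, so more RAM can't help.
print(stage2_passes(192, 192))   # -> 1

# A CUDAPm1-style run with 480 relative primes but RAM for only 120
# per pass: more memory per pass would cut the pass count here.
print(stage2_passes(480, 120))   # -> 4
print(stage2_passes(480, 240))   # -> 2
```

Once the pass count reaches 1, additional memory no longer reduces stage 2 work, which is the "c) no difference" case.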

Be wary of changing the memory allowance midstream. That could cause d) a longer total run time, because it prompts the computation to start over. From the prime95 whatsnew.txt file: "P-1 will restart any time the memory settings change. This is done so that the optimal P-1 bounds can be computed with the new memory settings."
Attached: gpu ram and nrp.png