2022-01-09, 01:50  #12  
P90 years forever!
Aug 2002
Yeehaw, FL
1111101010110_{2} Posts 
Quote:
Longer term, P+1 and to a lesser degree ECM should see huge B2 increases too. 

2022-01-09, 02:48  #13  
Jun 2003
5403_{10} Posts 
Quote:
Quote:
1. Mathematically, the optimal thing to do is to let the program calculate the optimal B2 (for the given B1 & RAM).
2. Therefore, forcing low-RAM machines to do the same B2 as high-RAM machines is counterproductive.
3. Given #1 & #2, we should force low-RAM machines to do a higher-than-normal B1, and let the program calculate the optimal B2.

TL;DR: Recommend B1 based on exponent/FFT and RAM. Let the program calculate B2; don't give any direct recommendation for it.

2022-01-09, 03:52  #14  
1976 Toyota Corona years forever!
"Wayne"
Nov 2006
Saskatchewan, Canada
3^{2}×7×83 Posts 
Sorry/Please. This is still a discussion point ...
DISCLAIMER:
Per the title, we are still very early in the speculating/planning phases of a potential follow-up project. Nothing that follows is to be taken as suggested or recommended, but opinions, thoughts, etc. are more than welcome. My focus is still completing the original Under2K project; I estimate it will take about half of 2022.
Quote:
It can do a low 20M P-1 with 700K/164M in about 70 minutes.

...so maybe 100 minutes for 1M/234M. And a 3-year-old 8-core 7820X with 32 GB (24.5 GB for P-1) can do a low 20M P-1 with 1M/328M in about 24 minutes.

Using the "Double the exponent, B1 drops by a factor of 2.2" rule gives a table like this, with 2 different starting values for 20M:
Code:
Exponent        B1              B1
7813       256,000,000   1,097,517,471
15625      249,435,789     498,871,578
31250      113,379,904     226,759,808
62500       51,536,320     103,072,640
125000      23,425,600      46,851,200
250000      10,648,000      21,296,000
500000       4,840,000       9,680,000
1000000      2,200,000       4,400,000
2000000      1,000,000       2,000,000

If we choose a "benchmark / standard" PC, we need to know what B2 multiplier it will choose based on the exponent. For example, my 7820X (24.5 GB) has these values. I chose a B1 value a little higher than the current B1. I chose TF bits of 74 for all exponents, not the actual level... just in case it affects the B2 calculation. This table shows B2 kind of doubles as the exponent halves. The first multiplier is the one actually used; the second was initially chosen, as in the following excerpt:
Code:
[Work thread Jan 8 19:14] Inversion of stage 1 result complete. 5 transforms, 1 modular inverse. Time: 0.071 sec.
[Work thread Jan 8 19:14] With trial factoring done to 2^74, optimal B2 is 13030*B1 = 84695000000.
[Work thread Jan 8 19:14] If no prior P-1, chance of a new factor is 10.3%
[Work thread Jan 8 19:14] Switching to AVX512 FFT length 40K, Pass1=128, Pass2=320, clm=1, 8 threads
...
[Work thread Jan 8 19:14] With trial factoring done to 2^74, optimal B2 is 9585*B1 = 62302500000.
[Work thread Jan 8 19:14] If no prior P-1, chance of a new factor is 9.99%
[Work thread Jan 8 19:14] Using 24837MB of memory.  D: 150150, 14400x63835 polynomial multiplication.
Code:
Exponent      B1          B2         Multipliers
625057      6500000   62306500000    9586  13030
1256201     1500000    5449500000    3633   6065
2502391     1500000    3001500000    2001   3044
5001049     3700000    5561100000    1503   1660
10022263    1500000    1081500000     721    851
20852933     800000     260668980     326    366

Hmm, this is a lot of blah blah blah... but I can, for example, use the tables above to suggest:

If you have at least 16(?) GB of RAM available for P-1, let Prime95 choose B2:
Pminus1=N/A,1,2,10022263,1,2200000,0,73

If you have less RAM I would suggest the following ... but accept your iron/your choice. (This is 500x B1 ... a little less than the chart above ... but the PC used for that chart had 24.5 GB RAM for P-1.)
Pminus1=N/A,1,2,10022263,1,2200000,1100000000

'Nuff said for now.
Wayne
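As a quick consistency check of the two tables above, here's a small Python sketch. All constants are copied from the tables; the function and variable names are mine, not anything from Prime95. It regenerates the second B1 column via the "halve the exponent → B1 grows by 2.2" rule, then recomputes the "actually used" B2 multipliers as B2/B1:

```python
def b1_table(base_exp=2_000_000, base_b1=2_000_000, factor=2.2, steps=8):
    """Walk the 'double exponent => B1 drops by 2.2' rule backwards:
    halving the exponent multiplies B1 by 2.2. Defaults reproduce the
    second B1 column of the first table above."""
    rows = []
    exp, b1 = base_exp, float(base_b1)
    for _ in range(steps + 1):
        rows.append((exp, round(b1)))
        exp = (exp + 1) // 2  # ceiling halve, so 15625 -> 7813 as in the table
        b1 *= factor
    return rows

for exp, b1 in b1_table():
    print(f"{exp:>8} {b1:>15,}")

# (exponent, B1, B2) rows copied from the second table above
bench_rows = [
    (625057,    6_500_000, 62_306_500_000),
    (1256201,   1_500_000,  5_449_500_000),
    (2502391,   1_500_000,  3_001_500_000),
    (5001049,   3_700_000,  5_561_100_000),
    (10022263,  1_500_000,  1_081_500_000),
    (20852933,    800_000,    260_668_980),
]
used = [round(b2 / b1) for _, b1, b2 in bench_rows]
print(used)  # the "actually used" multiplier column
```

With base_b1=1,000,000 the same rule also reproduces the first B1 column, except its 7813 entry (256,000,000), which looks like a separately chosen starting value rather than an application of the rule.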

2022-01-09, 04:03  #15  
1976 Toyota Corona years forever!
"Wayne"
Nov 2006
Saskatchewan, Canada
3^{2}·7·83 Posts 
Quote:
Maybe I misunderstand, but I don't think we want to process a given exponent more than once. GhzD is certainly an option to consider, along with time. Though I have noticed that 30.8 gives very high GhzD for very low exponents and very large B2; that suggests that if we randomly chose 25 GhzD, that would allow decent bounds for a larger exponent but not large enough bounds for the very small. Haven't I read somewhere on this forum that the infamous RDS showed that P-1 is always(?) more efficient at finding factors per GhzD than ECM???

Thanks for your thoughts

2022-01-09, 12:09  #16 
"Vincent"
Apr 2010
Over the rainbow
5×571 Posts 
Some data:
On an i7-8700, using 9 GB of RAM and 3 cores:
Code:
Sending result to server: UID: firejuggler/Maison, M8524427 completed P-1, B1=1000000, B2=333779160, Wi4: CF5F1AA7, AID: 955....
PrimeNet success code with additional info: CPU credit is 8.0907 GHz-days.
Code:
PrimeNet success code with additional info: CPU credit is 9.9472 GHz-days.
Sending result to server: UID: firejuggler/Maison, M8561477 completed P-1, B1=1200000, B2=410879040, Wi4: C5F098E4, AID: DA3...
Code:
PrimeNet success code with additional info: CPU credit is 13.3236 GHz-days.
Sending result to server: UID: firejuggler/Maison, M8577563 completed P-1, B1=1560000, B2=551166330, Wi4: FD07A4FD, AID: 2A9ABA...
2022-01-09, 12:17  #17 
"GIMFS"
Sep 2002
Oeiras, Portugal
3030_{8} Posts 
The third trial took nearly twice as long as the first one, for an increase in the probability of success from 6.8% to 7.7%. Not really exciting...
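The ratio of the GHz-days credits quoted in the previous post tells a similar story. The numbers below are copied from the results and probabilities above; this is just arithmetic, not a model of Prime95's credit formula:

```python
# GHz-days credits for the three runs in the previous post
credits = [8.0907, 9.9472, 13.3236]
ratio = credits[2] / credits[0]
gain = 7.7 - 6.8  # percentage-point gain in success probability quoted above
print(f"3rd run earned {ratio:.2f}x the credit of the 1st, "
      f"for +{gain:.1f} points of success probability")
```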

2022-01-09, 12:22  #18 
"Jacob"
Sep 2006
Brussels, Belgium
2·3·5·61 Posts 
If a reprogramming of the GHz-days credit is done, it could be the time to add another correction: deduct the credit earned by previous P-1 attempts from the credit given (it would prevent abuse of the credit system, for what it is worth ;)
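A minimal sketch of that correction, assuming the server knows the credit earned by earlier attempts on the same exponent (the function name and structure are mine, not anything in PrimeNet):

```python
def net_credit(full_credit, prior_credits):
    """Credit awarded for a new P-1 run after deducting what earlier
    attempts on the same exponent already earned; floored at zero."""
    return max(0.0, full_credit - sum(prior_credits))

# e.g. a rerun worth 13.32 GHz-days where an earlier attempt already
# earned 8.09 would net only the difference
print(net_credit(13.32, [8.09]))
```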

2022-01-09, 13:05  #19  
"Vincent"
Apr 2010
Over the rainbow
5×571 Posts 
Quote:
Maybe we need something like a chart, depending on the amount of ECM already done? For the very low exponents, the payout in GhzD for a relatively short run is frightening: I got 86k GHz-days for a 10-hour job (and a 0.2xxxx% chance to find a factor due to prior ECM, which in retrospect wasn't needed at all).

2022-01-09, 17:32  #20  
Dec 2002
2^{2}·3·71 Posts 
Quote:
So maybe we should view this as: how much time (in years) do we want between a P-1 run on an exponent and the next run on it with better hardware (or new software based on different math)?

2022-01-09, 18:27  #21  
1976 Toyota Corona years forever!
"Wayne"
Nov 2006
Saskatchewan, Canada
3^{2}·7·83 Posts 
Quote:


2022-01-09, 22:34  #22 
1976 Toyota Corona years forever!
"Wayne"
Nov 2006
Saskatchewan, Canada
1010001101101_{2} Posts 
