2021-07-22, 02:27  #12 
"TF79LL86GIMPS96gpu17"
Mar 2017
US midwest
13452_{8} Posts 
When standalone P-1 is performed, how is the server to predict whether the first primality test that will be assigned and performed later will be LL, PRP without proof or with a bad proof, or PRP with a good proof that verifies as correct?

2021-07-22, 02:41  #13 
Romulan Interpreter
"name field"
Jun 2011
Thailand
23120_{8} Posts 
My two cents: the value should stay 2. A little more P-1 won't hurt anybody, and it may be beneficial in the long term.

2021-07-22, 04:27  #14 
"University student"
May 2021
Beijing, China
127 Posts 
If one is interested in P-1 factors, he or she could of course use 2-primality-tests-saved bounds. However, some people just want to test as many exponents as possible using PRP with proof, and for them the 1-test-saved bounds are OK. Of course, we could settle in the middle, using 1.2-tests-saved bounds, since some PRP tests end with bad proofs or stall out.
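The trade-off between the 1-, 1.2-, and 2-tests-saved settings can be sketched numerically. This is a rough model, and the shares of LL tests, unproven PRPs, and bad proofs below are hypothetical placeholders, not server statistics:

```python
# Sketch: expected number of primality tests a P-1 factor saves, under
# assumed (hypothetical) shares of first-test types and proof failure rates.

def expected_tests_saved(share_ll=0.05, share_prp_no_proof=0.05,
                         bad_proof_rate=0.02):
    """Expected tests saved per P-1 factor found.

    A PRP with a good proof needs no double-check (1 test saved); an LL,
    or a PRP with no proof or a bad proof, also needs a verification run
    (2 tests saved). All rates here are illustrative assumptions.
    """
    share_prp_good_proof = 1.0 - share_ll - share_prp_no_proof
    needs_two = (share_ll + share_prp_no_proof
                 + share_prp_good_proof * bad_proof_rate)
    return 2.0 * needs_two + 1.0 * (1.0 - needs_two)

print(round(expected_tests_saved(), 3))  # ≈ 1.118 with these assumptions
```

With these made-up rates the effective value lands near 1.1, which is why a compromise like 1.2 is plausible; the right number depends entirely on the real mix of test types.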
Personally, I suggest doing more TF work at the current PRP wavefront. Adding up the throughput of the top 500 producers, we have done 64 million GHz-days on PRP tests in the last year, but over 147 million GHz-days on TF (over twice as much work!). For this reason, we could TF a bit higher, say to 2^77 (or even 2^78), just as we have done for 95M exponents. 
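Whether one more TF bit level pays off can be sketched with the usual GIMPS heuristic that a factor lies in bit level b with probability about 1/b, and that finding one removes the need for the PRP test entirely. The cost figures in this sketch are illustrative, not the server's actual values:

```python
# Sketch: is one more TF bit level worth it before a PRP test?
# Heuristic: a factor exists in bit level b with probability ~1/b.
# Cost figures (GHz-days) below are hypothetical examples.

def worth_another_level(bit_level, tf_cost_ghzdays, prp_cost_ghzdays):
    """True if the expected PRP work saved exceeds the cost of this TF level."""
    p_factor = 1.0 / bit_level
    return p_factor * prp_cost_ghzdays > tf_cost_ghzdays

# e.g. taking a wavefront exponent from 76 to 77 bits (hypothetical costs)
print(worth_another_level(77, 5.0, 450.0))   # True: 450/77 ≈ 5.8 > 5.0
```

Because TF cost roughly doubles per level while 1/b barely changes, each extra level is about twice as hard to justify, so the break-even depth moves up only slowly as PRP tests get longer.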
2021-07-22, 04:51  #15 
6809 > 6502
"""""""""""""""""""
Aug 2003
101×103 Posts
23614_{8} Posts 
Quite a bit of the TF work is being done away from the area of FTCs (first-time checks). SRBase is moving through exponents bit level by bit level and not staying below 120M. Also, user TJAOI is doing a lot of work on low exponents (well below the FTC range, and at lower bit levels). So these two should not be counted toward the TF total. Also, with PRP and certs there is less work needed to test and confirm exponents. This changes the calculus of what makes sense with respect to TF vs. primality testing. Those running GPU72 closely watch the front of the Cat 4 FTC wavefront, the Cat 3 FTC wavefront, the TF firepower currently available for working ahead of the FTCs, and what sort of work the various users prefer.
Last fiddled with by Uncwilly on 2021-07-22 at 04:52 
2021-07-22, 05:26  #16 
Romulan Interpreter
"name field"
Jun 2011
Thailand
2^{4}×613 Posts 
Also... we are comparing apples with watermelons: the TF credit unit and the PRP credit unit are very different. One good GPU can spit out 3000-6000 GHz-days for every day it does TF, but only 300-800 GHz-days for every day it does LL/PRP/P-1. This is a remnant from the time when CPUs were used to TF, and the credit values were calculated to be approximately equal per unit of time spent by the CPU on each work type. GPUs joining the fight completely changed the equation: now you can get 5 to 10 times more "credit" if you use your GPU for TF than for PRP, and there are people still motivated by that, especially young gamers whose gaming cards are not good at FP64 flops (needed for PRP) but are excellent at FP32 flops (good for gaming and TF).

Unfortunately (or more exactly, fortunately) this was never fixed, because the rebalancing is not easy, and it would upset some people. On the other hand, giving a lot more TF credit per unit of time may be beneficial, because that is the ONLY incentive given for TF. Some people with gaming GPUs (which are anyhow better at TF, and worse at LL/PRP) will join and do TF to advance fast in the tops (two average gaming cards can put you in the tops in a few weeks), therefore helping the project, which is always in need of "more TF". TF has no other incentive (unlike PRP, where you can find a prime and take some money) besides altruism ("we want to help the project"), idiocy ("we want to find factors, or to get a lot of credits, although we know neither is of any use"), or entertainment ("yeah, it is fun! hihi", and make donkey face). So, let TF give more credit, that's OK. I personally will jump to grab some of it!
Last fiddled with by LaurV on 2021-07-22 at 05:30 
2021-07-22, 05:39  #17 
"University student"
May 2021
Beijing, China
127 Posts 
Sometimes it's 30 times more, depending on the GPU model. My GPU (a GTX 1650) earns approximately 900 GHz-days per day doing TF but less than 30 GHz-days doing PRP. As a result, I factor every exponent I test to at least 2^77, sometimes to 2^79.

2021-07-22, 06:39  #18  
Mar 2014
34_{16} Posts 
Quote:
Perhaps 2 is reasonable if a person requests a standalone P-1 assignment. It seems less reasonable when one receives a PRP assignment for a number that has not yet had P-1 done on it. (I have been getting quite a few of these for the past year or so.) And isn't the intention for all future world-record-sized testing to be PRP, not LL? So the expected number of tests saved is something like 1.03. If extra factors are found, great; it just seems the default ought to be to minimize the time needed to resolve each exponent.
Last fiddled with by Siegmund on 2021-07-22 at 06:40 

2021-07-22, 06:47  #19 
Romulan Interpreter
"name field"
Jun 2011
Thailand
2^{4}×613 Posts 
Yep, that's exactly what I mean. I do the same. But in the end, what should drive you (the general you, not you personally) should be the speed at which you eliminate exponent candidates. If you can run 150 TF assignments and find two factors per day, but it would take you more than half a day to run one PRP test on the same hardware (and I mean at the front level, not picking low-hanging, large-exponent, low-bit-level TF assignments), then your hardware should do TF. You help the project more that way.
Last fiddled with by LaurV on 2021-07-22 at 06:52 
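The comparison in the post above can be put into rough numbers. The figures (150 TF assignments yielding two factors per day, and a PRP test taking "more than half a day") are the illustrative ones from the post, not benchmarks:

```python
# Rough throughput comparison: exponents eliminated per day on the same GPU,
# using the illustrative figures from the post.

factors_found_per_day = 2      # from ~150 wavefront TF assignments per day
prp_day_fraction = 0.6         # one PRP test takes "more than half a day"

eliminated_by_tf = factors_found_per_day   # each factor retires one exponent
eliminated_by_prp = 1 / prp_day_fraction   # PRP tests completed per day

print(eliminated_by_tf, round(eliminated_by_prp, 2))  # 2 vs ~1.67 per day
```

On these assumptions TF retires slightly more exponents per day, and a found factor also settles the exponent permanently, with no double-check or cert needed later.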
2021-07-22, 06:51  #20  
"TF79LL86GIMPS96gpu17"
Mar 2017
US midwest
2×5×593 Posts 
Quote:
Radeon VII with mfakto v0.15pre6 was ~1300 GHz-days/day on Windows; there are benchmark results supporting up to 486 GHz-days/day in GpuOwl; that's 2.67:1. And it is noted for its incomparable PRP performance: at $700 original list price it IIRC beat, by a factor of 2, the $2500 used Tesla P100. Some of the Teslas may have sufficiently strong DP to show low ratios also. CPUs I've checked were 0.7 to 1.3. Hence the general rule: TF on GPUs; PRP, P-1, or LL on CPUs. Except on Radeon VII and other recent AMD GPUs, and maybe Teslas.
Last fiddled with by kriesel on 2021-07-22 at 07:02 

2021-07-23, 07:49  #21  
"David Kirkby"
Jan 2021
Althorne, Essex, UK
1C0_{16} Posts 
Quote:
https://www.mersenneforum.org/showpo...7&postcount=54 which I intend to do more thoroughly when 100% sober, I'm not convinced that any value of tests_saved can really be said to maximise the throughput unless you test the P-1 timing on your computer(s). I tested the runtime of the P-1 test on my Dell PC under the same circumstances
Given that the optimal bounds for P-1 are based on the calculated computational effort (GHz-days), the tests_saved will not be optimal if the actual runtime of the test (in minutes) does not reflect the credit in GHz-days. It changes where the optimal point is. Clearly that optimal point could depend on things such as
IMHO it is a shame so much effort (GHz-days) is being put into testing exponents well away from the wavefront. They are not really helpful in finding primes.
Last fiddled with by drkirkby on 2021-07-23 at 08:20 
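drkirkby's point, that optimal P-1 bounds shift when local wall-clock time does not match the GHz-days credit, can be sketched as a simple rescaling. This is a rough heuristic, and every number in it is a hypothetical placeholder, not a measured or server value:

```python
# Sketch: rescaling tests_saved when local P-1 wall-clock time doesn't
# match its GHz-days credit. All numbers are hypothetical placeholders.

def adjusted_tests_saved(nominal, p1_credit_ghzd, p1_hours,
                         prp_credit_ghzd, prp_hours):
    """Scale tests_saved by how cheap P-1 is locally relative to PRP.

    If P-1 takes fewer wall-clock hours per credited GHz-day than PRP on
    this machine, P-1 is effectively under-priced there, so larger P-1
    bounds (a bigger tests_saved) are justified, and vice versa.
    """
    p1_rate = p1_hours / p1_credit_ghzd     # hours per GHz-day, P-1
    prp_rate = prp_hours / prp_credit_ghzd  # hours per GHz-day, PRP
    return nominal * prp_rate / p1_rate

# e.g. P-1 credited 5 GHz-days but runs in 4 h; PRP credited 50 GHz-days, 50 h
print(round(adjusted_tests_saved(1.0, 5, 4, 50, 50), 2))  # 1.25
```

In other words, the only way to know the right setting for a given machine is to time both work types on it, exactly as the post suggests.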

2021-07-23, 08:42  #22  
Jun 2003
2·3^{2}·17^{2} Posts 
Quote:
However, the optimality calculations done by the program itself _do_ take into account the specifics of the new algo. TL;DR: don't trust the GHz-day numbers.
Last fiddled with by axn on 2021-07-23 at 08:42 
