2020-03-07, 17:42   #12
kriesel
 
 
"TF79LL86GIMPS96gpu17"
Mar 2017
US midwest

3×1,283 Posts

Quote:
Originally Posted by EugenioBruno
Is my interpretation of that sentence correct: "the people doing LL are going fast, so we're starting to fall behind in TFs that might avoid those LL checks from being done"?

This is confusing to me. What is a DC wavefront in relation to TFs? If my understanding is correct, TFs are done *before* any LL (or DC) test is done. I don't understand what TFs have to do with DCs at all.
First, to everyone: please stop referring to LL as if PRP primality testing doesn't exist. PRP is the preferred primality test for new first-test assignments.

DC can be either LL or PRP. There is a several-year backlog of LL DC, and it is growing. There is also a need for more PRP DC on which to base an estimate of the in-the-wild total error rate of PRP.

Before GPU TF, the default factoring depth for a given exponent was lower than it is now. In those cases one can make a case for additional bit levels of factoring, which may eliminate the need for a double check. Before the first primality test, finding a factor saves both the first primality test and the double-check primality test (and the occasional third, fourth, or later check). After the first primality test, any further trial factoring is weighed against only the double check and the occasional third or higher check.
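To make that tradeoff concrete, here is a minimal Python sketch of the expected-value comparison, using the usual rough heuristic that the chance of 2^p - 1 having a factor between 2^b and 2^(b+1) is about 1/b. The function name and the numeric inputs are my own placeholders, not project values.

[CODE]
def extra_tf_level_worthwhile(tf_cost, primality_cost, bit_level, tests_saved):
    """Is trial factoring from 2^bit_level to 2^(bit_level + 1) worth it?

    tf_cost, primality_cost : effort in the same units (e.g. GHz-days)
    tests_saved             : ~2 before the first test (first test + DC),
                              ~1 after the first test (just the DC)
    """
    chance_of_factor = 1.0 / bit_level            # rough heuristic
    expected_saving = chance_of_factor * tests_saved * primality_cost
    return expected_saving > tf_cost

# Made-up placeholder costs, only to show the shape of the comparison:
print(extra_tf_level_worthwhile(tf_cost=1.5, primality_cost=80.0,
                                bit_level=74, tests_saved=2))   # before first test
print(extra_tf_level_worthwhile(tf_cost=1.5, primality_cost=80.0,
                                bit_level=74, tests_saved=1))   # after first test
[/CODE]

The only point of the sketch is that the breakeven depth is lower once the first test is already done, which is why post-first-test TF has to be weighed against the double check alone.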

When the state of the art of trial factoring software or hardware efficiency changes, the optimal tradeoff between trial factoring and double-checking effort shifts.
There is a several-year backlog of LL double checks, and some backlog of PRP double checks.
The introduction of GPU TF raised the ideal TF bit level by about 4.
The RTX 20xx and GTX 16xx series are sufficiently more efficient at TF that they raise the ideal TF bit level by about one more.
To put it another way, the ratio of TF to primality-test or P-1 GHz-days/day ratings is typically about 0.7 to 1.4 on CPUs. On GPUs it's roughly 10 to 40, with ~16 being pretty common. (A rough arithmetic sketch of what that ratio means for TF depth follows below.)
So, as a result of all that history, a lot of DC candidates are still at the TF level that was determined optimal and applied many years ago, and which has since become suboptimal.
These are being revisited and trial factored further. See for example this detailed exponent report for one I have reserved for DC: https://www.mersenne.org/report_expo...exp_hi=&full=1
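Here is the promised sketch of that bit-level arithmetic. It is my own back-of-the-envelope illustration in Python, not an official GIMPS formula: since each additional TF bit level costs roughly twice the previous one, a TF:primality throughput ratio that is ~16 times higher justifies going about log2(16) = 4 bit levels deeper.

[CODE]
import math

def extra_bit_levels(new_ratio, old_ratio):
    """Additional TF bit levels justified when the TF : primality
    GHz-days/day throughput ratio improves, given that each extra
    bit level roughly doubles the TF cost."""
    return math.log2(new_ratio / old_ratio)

print(extra_bit_levels(16.0, 1.0))    # ~4 levels: typical GPU vs. typical CPU
print(extra_bit_levels(40.0, 1.4))    # ~4.8 levels: fast GPU vs. efficient CPU
[/CODE]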
Quote:
Also, my question about DC/LL percentage was in reference to my CPU worker.

Thanks! As you can see, I can contribute with cycles. Brain, not so much. :P
If you run about 20% DC and 80% first-test by CPU time, that is equivalent to completing about one 53M DC and one 103M first test in the same time frame, so it is consistent with the current ~8-year backlog. I encourage you to contribute more DC than that, to help keep the backlog from growing further. I've suggested that the project as a whole could refrain from issuing first-test assignments for one or two months of the year, to address the growing backlog (sort of spring and fall cleaning). That idea is not very popular, although it would not reduce the rate of completed first tests by much (roughly 8 or 17% for one or two months respectively), yet would increase the rate of completed double checks a lot (60% or more).
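A rough sketch of where the 20%/80% split comes from, assuming the common approximation that one LL/PRP test costs on the order of p^2 · log p · log log p (about p FFT-based squarings whose cost grows a bit faster than linearly with p); the exact cost model here is my own approximation:

[CODE]
import math

def test_cost(p):
    """Relative cost of one LL/PRP test of 2^p - 1 (arbitrary units),
    assuming ~p squarings at ~p log p log log p work each."""
    return p * p * math.log(p) * math.log(math.log(p))

dc = test_cost(53e6)        # a double check near today's DC wavefront
first = test_cost(103e6)    # a first test near today's first-test wavefront
print(f"share of time the 53M DC needs: {dc / (dc + first):.0%}")   # about 20%
[/CODE]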

Be patient with yourself. There's a big learning curve. One step at a time.
Contribute how you can and in the ways you enjoy.

Your choice of a GTX 1650 is a good one. Its TF throughput per watt is about as good as can be found at a reasonable price.