[QUOTE=swl551;326123]Chris, should we continue processing this range or go back to our regular work?[/QUOTE]
I'd suggest that people shouldn't move too much firepower from LLTF to DCTF, but those who regularly do DCTF (or LMH (hint... hint... :wink:)) should continue in the 31M range to 70 for the time being. Once we hear back from James we'll have a definitive answer, but it appears that 31M to 70 does make sense. 10 factors found in 939 attempts (1.065%) so far.
Meh, my cpu's can do a DC faster than my GPU finding a factor in this range.
[QUOTE=kracker;326159]Meh, my cpu's can do a DC faster than my GPU finding a factor in this range.[/QUOTE]
Yes, that will be true for some GPU/CPU combinations. If 31M -> 70 doesn't make sense for you, move to somewhere else where it does. |
[QUOTE=Prime95;325832]As noted earlier, this chart assumes no P-1 has been done. It would be great if James could change the chart to show the DC breakeven point when P-1 has been performed[/QUOTE]I have updated my chart page to both be a little easier to read, and to address the above concerns:
[url]http://www.mersenne.ca/cudalucas.php?model=13[/url] The chart is now interactive: you can mouse-over any point and get the breakeven point at 1M granularity without having a big table of numbers. The data is now calculated based on the number of seconds to clear an exponent of that range:[list][*]TF 1st LL: time to run TF to this bit level multiplied by the probability of finding a factor (assuming no P-1 done)[*]TF 2nd LL: time to run TF to this bit level multiplied by the probability of finding a factor, with the factor probability reduced on the assumption that P-1 was already done.[*]LL 1st test: time to run two LL tests[*]LL 2nd test: time to run one LL test (ignoring the chance of non-matching residues)[/list]Note the last point: I don't take into account the fact that some percentage of L-L tests won't match residues, necessitating a 3rd (or more) L-L test. If someone can let me know the current average rate of mismatched L-L tests, I can factor that in.
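The expected-value logic in the list above can be sketched roughly as follows. This is a minimal illustration only: the 1/b factor-probability heuristic and the P-1 adjustment factor are back-of-envelope assumptions, not the actual mersenne.ca data or formulas.

```python
# Rough sketch of the TF-vs-DC breakeven logic described above.
# The probability model and the P-1 adjustment factor are placeholders.

def factor_probability(bit_level, pminus1_done=False):
    """Approximate chance a factor lies just below 2^bit_level.

    Uses the common ~1/b heuristic for the chance of a factor between
    2^(b-1) and 2^b; the 0.65 P-1 adjustment is an assumed value.
    """
    p = 1.0 / bit_level
    if pminus1_done:
        p *= 0.65  # assumed: P-1 already removed some smooth factors
    return p

def tf_worthwhile(tf_seconds, bit_level, dc_seconds, pminus1_done=True):
    """TF to another bit level pays off for a double-check candidate if
    the expected DC time saved exceeds the TF time spent."""
    return factor_probability(bit_level, pminus1_done) * dc_seconds > tf_seconds

# Example with made-up timings: a 100-second TF pass to 2^70 against a
# 100,000-second DC is worthwhile; a 2000-second pass is not.
print(tf_worthwhile(100, 70, 100_000))   # True
print(tf_worthwhile(2000, 70, 100_000))  # False
```

The actual chart feeds measured per-GPU TF timings and per-CPU LL timings into the same kind of comparison.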
[QUOTE=James Heinrich;326167]The chart is now interactive: you can mouse-over any point and get the breakeven point at 1M granularity without having a big table of numbers.[/QUOTE]
[B][I][U]Very[/U][/I][/B] nice work!!! Thanks a lot! :smile: So, (assuming I'm reading this correctly) taking 31M to 70 is definitely profitable for higher-end cards. And 71 is slightly profitable for the very high-end cards. Because we're not doing as much LLTFing as we should at the moment, I'd suggest we just take 31M to 70, not 71.
[QUOTE=chalsall;326170]So, (assuming I'm reading this correctly) taking 31M to 70 is definitely profitable for higher-end cards. And 71 is slightly profitable for the very high-end cards.[/QUOTE]At 31M for DC the cutoff point is TF to 2[sup]70.775[/sup] on CC 2.0, 2[sup]70.676[/sup] on CC 2.1, 2[sup]70.448[/sup] on CC 3.0. So definitely to 2[sup]70[/sup], to 2[sup]71[/sup] is debatable for CC 2.0 GPUs but isn't really worth it for CC 3.0. But as you said, better to spend the effort on LLTF rather than borderline DCTF.
[b]edit:[/b] but I don't want to read too much into TF cutoff points until I have data on the percentage of LL tests needing a triple-check.
[LEFT]That's some awesomesauce +1 right there, James.
[/LEFT]
[QUOTE=James Heinrich;326167]I have updated my chart page to both be a little easier to read, and to address the above concerns:
[url]http://www.mersenne.ca/cudalucas.php?model=13[/url][/QUOTE] Very nice! Now we can make informed decisions :smile: As to triple-checking rates, 2% is probably a good first estimate. I think there are some posts in the Data subforum doing more rigorous analysis.
Worker's Progress for last X is working great, :smile:
[COLOR=Gray][URL="https://www.gpu72.com/reports/workers/dc/week/"][SIZE=1]:P[/SIZE][/URL][/COLOR]
[QUOTE=James Heinrich;326173]At 31M for DC the cutoff point is TF to 2[sup]70.775[/sup] on CC 2.0...[/QUOTE][QUOTE=Prime95;326219]As to triple-checking rates, 2% is probably a good first estimate.[/QUOTE]After adding in a correction factor on the assumption that 2% of exponents will require a triple-check, the breakeven numbers vary but slightly. For example 31M DCTF breakeven point on CC 2.0 shifts from 2[sup]70.775[/sup] to 2[sup]70.803[/sup].
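The size of that shift follows from TF cost roughly doubling with each bit level. A quick sketch (the 2% triple-check rate is from above; the doubling-cost reasoning is my own back-of-envelope check, not James's actual calculation):

```python
import math

# A double-check is expected to cost one LL test, plus a fraction of an
# extra test for the ~2% of exponents whose residues mismatch.
TRIPLE_CHECK_RATE = 0.02

def expected_dc_cost(ll_seconds, triple_rate=TRIPLE_CHECK_RATE):
    return ll_seconds * (1 + triple_rate)

# TF cost roughly doubles per bit level, so if the expected DC payoff
# rises by a factor of (1 + rate), the breakeven bit level rises by
# about log2(1 + rate) bits.
shift = math.log2(1 + TRIPLE_CHECK_RATE)
print(round(shift, 4))  # ~0.0286, close to the observed 70.775 -> 70.803
```

That the predicted ~0.029-bit shift matches the reported 0.028-bit change suggests the correction was applied as a simple multiplier on the expected DC cost.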
Chalsall, what kind of sample size are you looking for in the DCTF 31M-to-70 project? That is, are we pretty much good to get back to regular LLTF work, should we go another day, or aim for 5k or so? Seems like we churned out about 1200 yesterday.