I remember there was a link to James' analysis on optimal GPU TF bit levels somewhere. Is [URL="http://www.mersenne.ca/cudalucas.php"]this[/URL] it? And why then do the colors change at 150 and 300? Shouldn't they change at 100 and 200? Or taking LL errors into account, 105 and 210?
|
Assuming you mean the red/green table at the top (from your link, one has to click on any card they like, and then they get the table), then yes, they DO break at 100 and 200. Or... hmm... well... at the closest possible values to those...
Everything seems OK to me. |
[QUOTE=garo;313465]why then do the colors change at 150 and 300? Shouldn't they change at 100 and 200? Or taking LL errors into account, 105 and 210?[/QUOTE]Looking at a more specific example (doesn't matter much which GPU, I picked [url=http://www.mersenne.ca/cudalucas.php?model=12&granularity=1]GTX 580 @ 1M granularity[/url]).
The cutoff points [I]are[/I] 100% and 200%, but the chance of exactly "100" and/or "200" showing up on the graph is slim (out of 56 rows in the above example, only 46M and 74M come up exactly (after rounding) on "100", and even then they don't hit "200" (actually 204 and 202)). So, to guarantee that one box is always coloured appropriately, I'm mapping anything in (0.75 < x < 1.50) as "100%", and (1.50 < x < 3.00) as "200%". Due to the doubling nature of each successive column, only one column in any row will match the "100%" or "200%" colour range (although it's possible that it will miss a match entirely, such as on 49M: 150/303). The whole issue is the rasterization of a curve: I could antialias the colours in the table, or present the whole thing as an image graph, and it would appear less stepped. |
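The bucketing James describes can be sketched as a tiny function. This is a hypothetical reimplementation for illustration only, not the site's actual code; the function name and structure are my own:

```python
def colour_bucket(ratio):
    """Map a cost ratio (relative to one LL test) to a colour band.

    Sketch of the mapping described above: anything in (0.75, 1.50) is
    shown as the "100%" box, and anything in (1.50, 3.00) as the "200%"
    box.  Because each successive column doubles the cost, at most one
    column per row can fall inside each band -- though a row can miss
    both, as with the 49M row's 150/303 pair (ratios 1.50 and 3.03).
    """
    if 0.75 < ratio < 1.50:
        return "100%"
    if 1.50 < ratio < 3.00:
        return "200%"
    return None  # outside both bands; box stays uncoloured


print(colour_bucket(1.04))  # falls in the "100%" band
print(colour_bucket(2.04))  # falls in the "200%" band
print(colour_bucket(1.50))  # exact boundary matches neither band
```

With strict inequalities on both sides, the boundary values themselves (1.50 and 3.00) match neither band, which is consistent with the 49M row missing a match.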
I see your point about needing to decide on some threshold. I was reading the graph vertically to determine appropriate cutoffs for each bit level. It probably makes more sense to read it horizontally to determine the optimal bit level for each range.
|
[QUOTE=garo;313600]I was reading the graph vertically to determine appropriate cutoffs for each bit level.[/QUOTE]Look at the far-right column, which is the cutoff for 2 LL tests.
|
[QUOTE=James Heinrich;313611]Look at the far-right column, which is the cutoff for 2 LL tests.[/QUOTE]
Oh! Silly me. Thanks for that. |
Quoting Firefox:
[SIZE=1]www.gpu72.com uses an invalid security certificate. The certificate is not trusted because it is self-signed. The certificate is only valid for Parallels Panel. The certificate expired on 13.08.2012 12:17. The current time is 05.10.2012 09:10.[/SIZE] OTOH, gpu72.com returns "Page not found!" on all requests anyway ATM... |
[QUOTE=ckdo;313694]OTOH, gpu72.com returns "Page not found!" on all requests anyway ATM...[/QUOTE]It is fine here right now.
|
Any stats on the 72-73 range for factors? How does the currently discovered ratio map to theoretical predictions?
For me, I'm finding 72-73 pretty bare (factor-wise). -- Craig |
[QUOTE=nucleon;313781]Any stats on the 72-73 range for factors? How does the currently discovered ratio map to theoretical predictions?
For me, I'm finding 72-73 pretty bare (factor-wise). -- Craig[/QUOTE] The best I can provide is [URL="https://www.gpu72.com/reports/factor_percentage/"]https://www.gpu72.com/reports/factor_percentage/[/URL]. Overall, 72 -> 73 is providing a 1.285% success rate. Slightly better than expected. |
[QUOTE=chalsall;313782]The best I can provide is [URL]https://www.gpu72.com/reports/factor_percentage/[/URL].
Overall, 72 -> 73 is providing a 1.285% success rate. Slightly better than expected.[/QUOTE] I seem to run 8-10 factors behind prediction: currently 91.717 predicted, 82 found. On the other hand, when I was doing DCTF I had 7.567 predicted and 16 found. There are bound to be ebbs and flows, aside from the required effort increasing at each level; taking longer per level makes the found factors seem sparser. |
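For context on "predicted" counts: the usual GIMPS back-of-the-envelope heuristic is that a Mersenne number has a factor between 2^b and 2^(b+1) with probability roughly 1/b. A rough expectation follows directly; this is only a ballpark sketch, not GPU72's actual prediction formula, and the candidate count below is a made-up example:

```python
def expected_factors(num_candidates, bit_level):
    """Rough expected number of factors when trial-factoring
    num_candidates exponents from 2^bit_level to 2^(bit_level + 1),
    using the standard ~1/bit_level probability heuristic.
    GPU72's own predictions are more refined; this is only a
    sanity check on the order of magnitude.
    """
    return num_candidates / bit_level


# At the 72 -> 73 level the heuristic gives about 1.39% per candidate:
print(round(100 / 72, 3))
# So e.g. 7200 candidates taken from 2^72 to 2^73 would be expected
# to yield on the order of 100 factors:
print(expected_factors(7200, 72))
```

Actual per-range yields wander around this expectation (as the 82-found vs. 91.7-predicted and 16-found vs. 7.6-predicted examples above show), since each factor is an independent low-probability event.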