Lol! You actually had me going for a second there. James' graph is a curve. The bottom end maps to 1.25, the top end maps to 1.3. Yes, there is a point on the curve where the 1 bitlevel increment maps to 1.26. We can call that the troll point on the graph. Good try.
[QUOTE=chalsall;336948]We should have been going to 75 from 57M onward; we simply didn't have the firepower to do so.[/QUOTE]
I'm eyeballing that graph. It looks like 75 is warranted for ~65M and above, and 74 at ~54M. Not so? [QUOTE=Aramis Wyler;336959]Lol! You actually had me going for a second there. James' graph is a curve. The bottom end maps to 1.25, the top end maps to 1.3. Yes, there is a point on the curve where the 1 bitlevel increment maps to 1.26. We can call that the troll point on the graph. Good try.[/QUOTE] 1.26 is a rule of thumb. And it is a pretty good ROT. There is a sound mathematical basis for that ROT. I'll take your word that the actual ratio is in the 1.25-1.3 range -- I'd consider that excellent conformance to the ROT value.
[QUOTE=axn;336960]I'm eyeballing that graph. It looks like 75 is warranted for ~65M an above, and 74 at ~54M. Not so?[/QUOTE]
I oversimplified by rounding. But, to reach a "perfect" cut-off, we'd either do (as an example) 57M to 74.5065 "bits" (which mfaktx [I]could[/I] do, but Primenet doesn't handle non-integer bit levels), or else we'd take approximately half the candidates to 74, and the other half to 75.
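For illustration, here is a sketch of where such fractional depths come from, using only the 1.26 rule of thumb discussed in this thread. The reference point (74 bits at ~54M) is an assumption taken from the eyeballed graph above, so this will not exactly reproduce the 74.5065 figure, which comes from James's actual cost data:

```python
import math

# Assumed reference point from the eyeballed graph: 74 bits at ~54M.
REF_EXPO, REF_BITS = 54e6, 74.0
ROT = 2 ** (1 / 3)  # ~1.26: exponent growth per extra bit level

def optimal_bits(expo):
    # Fractional optimal TF depth implied by the rule of thumb:
    # one extra bit level per factor-of-1.26 increase in the exponent.
    return REF_BITS + math.log(expo / REF_EXPO, ROT)

print(round(optimal_bits(57e6), 3))  # ~74.23 under these assumptions
print(round(optimal_bits(65e6), 3))  # ~74.8
```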
Maybe it is a reasonable rule of thumb, but we're not exactly working the numbers out in our heads as we walk down the streets. Why would we dogmatically set a line using a rule of thumb for a site that is doing teraflops of calculations? We have the processing power to work out an actual rule.
Besides, he uses that number to prove we're overfactoring the current range, not to prove that we have in fact underfactored what came before relative to our current firepower, or that the bit depth we're working on should vary according to the firepower we have at any given time.
[QUOTE=chalsall;336962]I oversimplified by rounding. But, to reach a "perfect" cut-off, we'd either do (as an example) 57M to 74.5065 "bits" (which mfaktx [I]could[/I] do, but Primenet doesn't handle non-integer bit levels), or else we'd take approximately half the candidates to 74, and the other half to 75.[/QUOTE]
Perfection is a bit more complicated to obtain than James' graph. Because of P-1, one should TF somewhat less than James' recommendation (because a factor found by TF won't always save 2 LL tests; sometimes it only saves a single P-1 run). IIRC, James' chart does account for an ~2% LL error rate. Can we also tweak it to account for P-1? ~5% of the time, a TF factor would also have been found by the subsequent P-1 run, so it only saves ~3% of an LL test. Attention P-1ers: are the 5% and 3% accurate estimates? Can we add a footnote to James' page explaining the breakeven formula -- I know I will eventually forget this conversation :)
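As a back-of-envelope check, the P-1 adjustment described above can be put into numbers. The ~5% and ~3% figures are the rough estimates from this post, not measured constants:

```python
# George's rough estimates, not measured constants:
P_PM1_WOULD_FIND = 0.05   # ~5%: P-1 would have found the factor anyway
PM1_COST_IN_LL = 0.03     # a P-1 run costs ~3% of one LL test
LL_TESTS_SAVED = 2.0      # a factor normally saves a first test plus a DC

# Expected LL-test-equivalents saved per factor found by TF.
expected_saving = ((1 - P_PM1_WOULD_FIND) * LL_TESTS_SAVED
                   + P_PM1_WOULD_FIND * PM1_COST_IN_LL)
print(round(expected_saving, 4))  # ~1.9015 LL tests per factor
```

So under these assumed figures, a TF factor is worth about 1.90 LL tests rather than 2, which is why the optimal cross-over shifts only slightly.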
[QUOTE=Aramis Wyler;336963]Maybe it is a reasonable rule of thumb, but we're not exactly working the numbers out in our heads as we walk down the streets.[/QUOTE]Speak for yourself. That is exactly the sort of thing I do. It gives me an overall view of the situation that is easy to grasp, and that won't be far from a narrower, short-term, unjustifiably precise approach.
The ROT arises as follows:
1) The time to TF an exponent from 2[SUP]73[/SUP] to 2[SUP]74[/SUP] is proportional to 1/expo.
2) The time to TF from 73 to 74 bits is double that from 72 to 73 for a given expo.
3) The time for an LL test is proportional to expo[SUP]2[/SUP].
So if the exponent increases by a factor of 2[SUP]1/3[/SUP] (=1.26) and the TF depth increases by one bit level, the times for TF and LL both increase by the same factor, 2[SUP]2/3[/SUP].
I am full of neat tricks like this, which is where I get my confidence that I am right when I make firm assertions. Another example, which seems to be going over people's heads ATM: a 10% advance in the wavefronts per year results in one new prime per 4 years on average (as currently observed to be the case) and requires a modest 1.1[SUP]3[/SUP] ≈ 4/3 increase in computing per year.

David
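The scaling claim above can be checked numerically under the stated model (TF time for one bit level ~ 2^bits/expo, LL time ~ expo^2, log factors ignored; the 57M starting point is purely illustrative):

```python
# Sanity check of the 1.26 rule of thumb under the stated cost model.
r = 2 ** (1 / 3)  # ~1.26, the exponent ratio per extra bit level

def tf_time(expo, bits):
    # Cost of trial factoring expo from 2**(bits-1) to 2**bits.
    return 2 ** bits / expo

def ll_time(expo):
    return expo ** 2

e1, b1 = 57e6, 74          # illustrative starting point
e2, b2 = e1 * r, b1 + 1    # 1.26x the exponent, one more bit level

tf_ratio = tf_time(e2, b2) / tf_time(e1, b1)
ll_ratio = ll_time(e2) / ll_time(e1)
print(tf_ratio, ll_ratio)  # both ~1.5874, i.e. 2**(2/3)
```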
[QUOTE=davieddy;336972]3) The time for an LL test is proportional to expo[SUP]2[/SUP].[/quote]
Very close:

FFT = n log n
LL = n FFTs = n[sup]2[/sup] log n

[quote]Another example which seems to be going over people's heads ATM, is that a 10% advance in the wavefronts per year results in one new prime per 4 years on average[/QUOTE] This is irrelevant to GIMPS. We don't have a schedule; we find primes at whatever rate our compute power and luck allow.
[QUOTE=Prime95;336969]Perfection is a bit more complicated to obtain than James' graph. Because of P-1, one should TF somewhat less than James' recommendation (because a factor found by TF won't always save 2 LL tests, sometimes it only saves a single P-1 run). [/QUOTE]
Good point (as always). [QUOTE=Prime95;336969]Attention P-1ers, are the 5% and 3% accurate estimates?[/QUOTE] I can't speak to the latter, [URL="https://www.gpu72.com/reports/factoring_cost/p-1/"]but as to the 5%:[/URL] [CODE]TF: 72 -- 1,297 / 31,646 == 4.098% TF: 73 -- 1,415 / 35,945 == 3.937% TF: 74 -- 39 / 1,263 == 3.088% (small sample set currently)[/CODE] I know James has a much more complete dataset.
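The percentages in that [CODE] block follow directly from the quoted gpu72 counts; a quick reproduction from the raw numbers:

```python
# Raw counts quoted from the gpu72 report above: bit level -> (found, total).
data = {72: (1297, 31646), 73: (1415, 35945), 74: (39, 1263)}
rates = {bits: found / total for bits, (found, total) in data.items()}
for bits, rate in rates.items():
    print(f"TF: {bits} -- {rate:.3%}")
```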
[QUOTE=Prime95;336969]Because of P-1, one should TF somewhat less than James' recommendation (because a factor found by TF won't always save 2 LL tests, sometimes it only saves a single P-1 run).[/QUOTE]
Thank you for weighing in; it's always a help to have the big guns clarify things. :) I'm curious about the statement, though -- with the graph having one line for first-time tests and a second for DCs, I thought it accounted for the amount saved.
[QUOTE=chalsall;336949]Frankly, this is obviously futile. You're now on my "ignore" list.[/QUOTE][url=http://www.youtube.com/watch?v=S5P63qGTm_g][B]QUEEN BITCH[/B][/url]
[QUOTE=Aramis Wyler;336977]I'm curious about the statement though - with the graph having one line for first times and a second for DCs, I thought it accounted for the amount saved.[/QUOTE]
What George is pointing out is that James is [I][U]not[/U][/I] [URL="http://www.mersenneforum.org/showpost.php?p=326167&postcount=1797"]taking into account[/URL] that something like ~4% of the time, finding a factor will not save 2 LLs (for the first-time LL wave), since a following P-1 run (almost always done) would find the factor anyway. Knowing James, he'll take this into account sometime soon -- it will only slightly affect the optimal cross-over point, though.