[QUOTE=VBCurtis;427989]I ask GMP-ECM, by comparing the curve counts at a specific B1 value required for a t55 and a t60.
At B1 = 260M, a t55 is 8656 curves while a t60 is 47888 curves. So, when one completes the 47888 curves for a t60, one has done 47888/8656 = 5.5ish t55. At B1 = 110M, a t55 is 20479 curves, while a t60 is 128305 curves. So, when one completes 20479 curves for a t55, one has completed 20479/128305 of a t60, or just under 1/6th.

To estimate t53 and such, I use a geometric interpolation: if ECM effort doubles every two digits, 5 digits higher would be 2.5 doublings. 2^2.5 is around 5.7, right in between the two ratios mentioned above. So, a rough guide for estimating t53 is half a t55, t57 is 2*t55, t59 is 2*t57 is 4*t55, and t60 is sqrt2*t59 is 4sqrt2*t55 is 5.7*t55. Again, the 5.7 is a rounded version of the first calculation I did.

The ratio changes for various B1 values, but not by much. For instance, I use B1 = 150M to run my t55s; a t55 is 14356 curves, while a t60 is 86058. On my machine, 14356 curves at 150M run faster than 20479 curves at 110M, while also completing a teensy bit more of a t60 than the set of curves at 110M (i.e. there's a bit more chance of a "bonus" factor above t55).[/QUOTE]

Okay, nice. I've settled on 2^(5/2) as the ratio and adjusted my code accordingly: [pastebin]rbyh9SKT[/pastebin]

It gives the following results:

[code]In [1]: run ecmnfs.py 60 325000
With roughly 10.70% odds of a factor existing, you should do 34311 curves at 60 digits before switching to NFS

In [2]: run ecmnfs.py 65 325000
Not even one curve should be done, forget about ECM at 65 digits.
In [3]: run ecmnfs.py 60 330000
With roughly 10.70% odds of a factor existing, you should do 35739 curves at 60 digits before switching to NFS

In [4]: run ecmnfs.py 60 335000
With roughly 10.70% odds of a factor existing, you should do 37167 curves at 60 digits before switching to NFS

In [5]: run ecmnfs.py 60 340000
With roughly 10.70% odds of a factor existing, you should do 38595 curves at 60 digits before switching to NFS

In [6]: run ecmnfs.py 60 345000
With roughly 10.70% odds of a factor existing, you should do 40023 curves at 60 digits before switching to NFS

In [7]: run ecmnfs.py 60 350000
With roughly 10.70% odds of a factor existing, you should do 41450 curves at 60 digits before switching to NFS

In [8]: run ecmnfs.py 60 320000
With roughly 10.70% odds of a factor existing, you should do 32883 curves at 60 digits before switching to NFS

In [9]: run ecmnfs.py 60 315000
With roughly 10.70% odds of a factor existing, you should do 31456 curves at 60 digits before switching to NFS

In [10]: run ecmnfs.py 60 310000
With roughly 10.70% odds of a factor existing, you should do 30028 curves at 60 digits before switching to NFS

In [11]: run ecmnfs.py 60 305000
With roughly 10.70% odds of a factor existing, you should do 28600 curves at 60 digits before switching to NFS

In [12]: run ecmnfs.py 60 300000
With roughly 10.70% odds of a factor existing, you should do 27172 curves at 60 digits before switching to NFS[/code]

Full code here: [url]https://gist.github.com/dubslow/916cb29e30277380c9f2[/url]
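For what it's worth, the geometric interpolation quoted above can be sketched in a few lines of Python (the curve counts are the GMP-ECM figures from the quote; the 2^(5/2) rule is the rounded interpolation, not the script's full model):

```python
# Estimate the relative cost of t-levels, assuming ECM effort roughly
# doubles every two digits (so +5 digits is ~2^2.5x the work).
def t_level_ratio(digits_from, digits_to):
    return 2 ** ((digits_to - digits_from) / 2)

# GMP-ECM curve counts quoted above:
#   B1 = 260M: t55 = 8656 curves,  t60 = 47888 curves
#   B1 = 110M: t55 = 20479 curves, t60 = 128305 curves
print(47888 / 8656)            # ~5.53 (t60/t55 ratio at B1 = 260M)
print(128305 / 20479)          # ~6.27 (t60/t55 ratio at B1 = 110M)
print(t_level_ratio(55, 60))   # 2^2.5 ~ 5.66, between the two
```

The interpolated ratio landing between the two measured ratios is exactly the sanity check made in the quote.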
This code is a big advance over the decimal-ratio rules of thumb. Thanks! Great work.
The experts (RDS et al) used to tell us plebes that the optimal ECM ratio decreased as the composite size went up; your applet demonstrates that concretely. :tu:
[QUOTE=Dubslow;428045]By my analysis, we should do no ECM at t65, and in fact the utility of a full t60 is in question, though I guess we'll likely wind up doing a full t60, or perhaps change the last few K of the t60 into higher level curves. I would argue that at the higher end at least, that XYYXF table is skewed pretty heavily into the too-much-ECM territory.
Yes, the gatekeepers are who we ultimately need to please.[/QUOTE]

Ok, so ECM stops after a full t60, unless the gatekeeper says otherwise. I agree the table in my previous post is skewed towards too much ECM; I just linked it for comparison purposes.
I'll run some ECM with higher B1.
3 Attachment(s)
I've found a bug, namely that [c]while x1 - x0 > 0.5[/c] really should have been [c]while abs(x1 - x0) > 0.5[/c]. It doesn't affect the current result, but in situations where the solution was greater than the median curve count, the code didn't work.
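The failure mode is the usual one for bisection-style loops. A minimal sketch (not the actual ecmnfs.py code) of why the unsigned test matters:

```python
# Minimal bisection sketch illustrating the bug fix. With the signed
# termination test `x1 - x0 > tol`, the loop exits immediately whenever
# the endpoints arrive in the "wrong" order (x1 < x0), returning garbage;
# that is what happened when the optimum lay above the median. abs() fixes it.
def bisect(f, x0, x1, tol=0.5):
    while abs(x1 - x0) > tol:      # was: x1 - x0 > tol  (the bug)
        mid = (x0 + x1) / 2
        if f(x0) * f(mid) <= 0:    # sign change in [x0, mid]
            x1 = mid
        else:                      # sign change in [mid, x1]
            x0 = mid
    return (x0 + x1) / 2

# Root of f(x) = x - 100, bracket given in either order:
print(bisect(lambda x: x - 100, 0, 300))   # ~100
print(bisect(lambda x: x - 100, 300, 0))   # ~100 (buggy version bails out here)
```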
[url]https://gist.github.com/dubslow/916cb29e30277380c9f2[/url]

As a bonus, see the attached figures. Note that these should not be used for anything other than C195s, because the hours_per_curve estimate varies with composite size.

[code]In [85]: run ecmnfs.py 60 325000
With ~10.70% odds of a factor existing and ~5.95% net odds of success, you should do 34141 curves at 60 digits before switching to NFS[/code]

Edit: See the second set of figures to digest the futility of curves at 65 digits.

Edit2: And the 55 digit graphs, for good measure. I'm pretty sure I'm on to something here... :smile:
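For intuition about the "net odds of success" figure, here is a sketch using the standard exponential model for ECM (each curve at the t60 level succeeds with probability 1/N, with N = 47888 the B1 = 260M t60 curve count). This is an assumption for illustration; it lands near, but not exactly on, the 5.95% the script reports, so the script evidently uses a finer model:

```python
import math

def net_odds(p_exists, curves_run, curves_per_t_level):
    # P(find a factor) = P(factor exists) * P(at least one curve succeeds),
    # modeling success per curve as 1/N, i.e. Poisson with rate curves/N.
    return p_exists * (1 - math.exp(-curves_run / curves_per_t_level))

# ~10.70% odds a factor exists; 34141 curves against the 47888-curve
# t60 at B1 = 260M:
print(net_odds(0.1070, 34141, 47888))  # ~0.055, near the 5.95% above
```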
I finished 17715 curves @11e7 ([url]http://www.rechenkraft.net/yoyo/y_status_ecm.php[/url]) and will continue until I reach 18000 curves.

What is the conclusion now: how many curves are required @ 26e7?
40K or 42K or whatever is more than enough, take your pick.
Let's say 21K curves @ 260e6 = 1/2 t60, pending approval by NFS@Home. That's still a bit more than optimal, though of course "optimal" has a wide margin of error.
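The curve accounting behind a figure like this can be sketched as follows (curve counts are the GMP-ECM t60 figures quoted earlier in the thread; the already-completed 11e7 curves contribute a partial t60, which is presumably why 21K is below half of 47888):

```python
# Fraction of a t60 completed by curves run at mixed B1 values, using
# the GMP-ECM t60 curve counts quoted earlier in the thread.
T60_CURVES = {110_000_000: 128305, 260_000_000: 47888}

def t60_fraction(work):
    # work: dict mapping B1 -> number of curves run at that B1
    return sum(curves / T60_CURVES[b1] for b1, curves in work.items())

# 18000 curves at 11e7 already done, plus 21000 proposed at 260e6:
print(t60_fraction({110_000_000: 18000}))                      # ~0.14
print(t60_fraction({110_000_000: 18000, 260_000_000: 21000}))  # ~0.58
```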
[QUOTE=Dubslow;428598]Let's say 21K curves @ 260e6 = 1/2 t60, pending approval by NFS@Home. That's still a bit more than optimal, though of course "optimal" has a wide margin of error.[/QUOTE]
[URL="http://www.rechenkraft.net/yoyo/y_status_ecm.php"]Nearly done[/URL].
I finished 2000 curves @ B1=1e9. For the thread-hour accounting: each curve took ~8000 s and 2.5 GB of memory on a Xeon E5-2620.
What is the status of this C195? Has enough ECM been run? I know a [url=http://www.mersenneforum.org/showpost.php?p=430247&postcount=544]poly was found[/url]. Anyone still searching for a better poly? Or are we ready for the 15e queue?