When calling the siever directly, 2^A is the size of the sieve region. A=31 defaults to I=16, A=29 to I=15, etc., following A=2*I-1.
I think that A=32 gives a 2^16 by 2^16 region; it is twice the region of A=31 in any case. A=32 is more manageable memory-wise than I=17, so it is an option for low-q sieving to get more yield.

sieve.adjust_strategy selects among different strategies for choosing I and J in a 2^I by 2^J region, given A=I+J. It is described in las -h: [QUOTE]-adjust-strategy strategy used to adapt the sieving range to the q-lattice basis (0 = logI constant, J so that boundary is capped; 1 = logI constant, (a,b) plane norm capped; 2 = logI dynamic, skewed basis; 3 = combine 2 and then 0) ; default=0[/QUOTE] Strategy 3 can sometimes be better, but can also use more memory. It is a potentially useful option for someone with low yield, or for focusing the sieving on fewer q's. I would suggest some experimentation with this may be worthwhile; it may speed up some sizes for some q (which q might be an unanswered research question).

Is it getting to the point where NFS@Home should be looking at switching to the CADO siever? |
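The A/I/J bookkeeping above can be sketched in a few lines of Python. This is just an illustrative helper (the name sieve_region is made up, not part of las): under the default strategy (logI constant), odd A gives a rectangular 2^I by 2^(I-1) region with A = 2I - 1, and even A gives a square 2^I by 2^I region.

```python
# Hypothetical helper illustrating the A = I + J convention described above.
# The sieve region has 2^A points, split into a 2^I by 2^J rectangle.
# Odd A: J = I - 1 (so A = 2I - 1); even A: J = I (a square region).

def sieve_region(A):
    """Return (I, J) for a sieve region of 2^A points."""
    I = (A + 1) // 2   # e.g. A=31 -> I=16, A=29 -> I=15
    J = A - I          # A=31 -> J=15; A=32 -> 16 by 16; A=33 -> 17 by 16
    return I, J

for A in (29, 31, 32, 33):
    I, J = sieve_region(A)
    print(f"A={A}: 2^{I} x 2^{J} region, {2**A:,} points")
```

This also makes the "A=32 is twice A=31" observation obvious: each increment of A doubles the point count.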
This post to the CADO-NFS list seems very slightly relevant.
[quote]It is persisting. ./cado-nfs.py <309 digit RSA number> -t 60 I cannot provide the exact number per my work agreement. Best Regards, -- Justin Granger [/quote] Someone trying a polynomial search on a kilobit composite. |
[QUOTE=R.D. Silverman;531875]Nice. But reading the file requires knowledge of some app specific syntax.[/QUOTE]
No knowledge required, just enough common-sense logic that others in this thread would inevitably grasp :hello: |
[QUOTE=VBCurtis;531861]900 core-years of computation time (800 sieve, 100 matrix) on 2.1 GHz Xeon Golds. They observe this job ran 3x faster than would be expected from an extrapolation from RSA-768, and in fact would have been 25% faster than RSA-768 on identical hardware.
I'd love a more detailed list of parameters! Perhaps a future CADO release will include them in the c240.params default file. :)

For comparison, we ran a C207 Cunningham number, 2,2330L, in about 60 core-years of sieving, which scales very roughly to an estimate of 3840 core-years sieve for a C240 (6 doublings at 5.5 digits per doubling). The CADO group found a *massive* improvement in sieve speed for large problems! 4 times faster, wowee.

Edit: Their job is so fast that RSA-250 is easily within their reach. Which means that C251 from Euclid-Mullen is within reach, theoretically. I mean, imagine if all NFS work over 200 digits is suddenly twice as fast.....[/QUOTE] [QUOTE]...imagine if all NFS work over 200 digits is suddenly twice as fast.....[/QUOTE] I certainly wouldn't mind factoring RSA-200 again, which originally took a little less than 7 months, IF similar parameter upgrades would double the speed. :devil: |
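For the record, the scaling estimate quoted above is simple arithmetic: each ~5.5 decimal digits of size roughly doubles the expected sieve effort. A quick sketch (the function name is mine, and 5.5 digits per doubling is the rough rule of thumb from the post, not a precise law):

```python
# Back-of-the-envelope GNFS sieve-time extrapolation, as used in the post:
# difficulty roughly doubles every ~5.5 decimal digits, so going from the
# C207 job (60 core-years) to 240 digits is (240-207)/5.5 = 6 doublings.

def extrapolate(base_digits, base_core_years, target_digits,
                digits_per_doubling=5.5):
    """Scale a known sieve effort to a larger input by repeated doubling."""
    doublings = (target_digits - base_digits) / digits_per_doubling
    return base_core_years * 2 ** doublings

est = extrapolate(207, 60, 240)
print(f"Estimated sieve effort for a C240: {est:.0f} core-years")
# -> 3840 core-years, vs. the ~800 CADO reported: roughly 4x faster
```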
The link just went down. I'm guessing it's due to high traffic.
|
A copy of the announcement has been saved in the Internet Archive: [url]http://web.archive.org/web/20191203150058/https://lists.gforge.inria.fr/pipermail/cado-nfs-discuss/2019-December/001139.html[/url]
|
Some additional details about the RSA-240 factorization, as well as the discrete log done at the same time can be found at:
[url]https://eprint.iacr.org/2020/697[/url] |
[QUOTE=Branger;553115]Some additional details about the RSA-240 factorization, as well as the discrete log done at the same time, can be found at:
[url]https://eprint.iacr.org/2020/697[/url][/QUOTE] This paper also exhibits the factors of RSA-250!!

Parameters were nearly the same as for RSA-240, except for increasing the sieve region from A=32 to A=33 (a doubling of sieve area, equivalent to using a mythical 17e on GGNFS). Still 2LP on one side, 3LP on the other, with lims of 2^31. Only 8.7G raw relations were needed, 6.1G of them unique!! They cite 2450 core-years of sieving on 2.1 GHz Xeon Golds and 250 core-years for the matrix, at a 405M matrix size. |
[QUOTE=VBCurtis;553137]Parameters were nearly the same as for RSA-240, except for increasing sieve region from A=32 to A=33 (a doubling of sieve area, equivalent to using a mythical 17e on GGNFS).[/QUOTE]
Tried this out - a single instance of las with their parameters uses 38GB of memory :max: |
That makes you wonder how big the LA machine is. Only a few people around here could accommodate a 40M matrix, let alone a 405M one!!
|
Same way Greg does: the supercomputing grids used by the CADO team for these factorizations can handle jobs such as a matrix distributed over many nodes. The paper includes a summary of the number of nodes used for each step of the RSA-240 matrix.
I'm not aware of filtering being split over multiple nodes, so that is the part that needs the largest-memory machine, and that likely fit in 256GB (perhaps 384). |