2019-12-02, 17:01  #1 
May 2018
43_{10} Posts 
RSA-240 and RSA-250 Factored!!

2019-12-02, 18:09  #2 
"Curtis"
Feb 2005
Riverside, CA
5^{3}·37 Posts 
900 core-years of computation time (800 sieve, 100 matrix) on 2.1 GHz Xeon Golds. They observe this job ran 3x faster than would be expected from an extrapolation from RSA-768, and in fact would have been 25% faster on identical hardware than RSA-768 was.
I'd love a more detailed list of parameters! Perhaps a future CADO release will include them in the c240.params default file. :)

For comparison, we ran a C207 Cunningham number 2,2330L in about 60 core-years of sieve, which scales really roughly to an estimate of 3840 core-years of sieve (6 doublings at 5.5 digits per doubling). The CADO group found a *massive* improvement in sieve speed for large problems! 4 times faster, wowee.

Edit: Their job is so fast that RSA-250 is easily within their reach. Which means that C251 from Euclid-Mullen is within reach, theoretically. I mean, imagine if all NFS work over 200 digits is suddenly twice as fast.....

Last fiddled with by VBCurtis on 2019-12-02 at 18:12 
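A quick sanity check of that back-of-the-envelope scaling, assuming the rule of thumb quoted above that sieve effort doubles roughly every 5.5 digits (the function name here is mine, not anything from CADO):

```python
# Hypothetical helper: extrapolate GNFS sieve time between job sizes,
# assuming effort doubles every ~5.5 digits (the rule of thumb above).
def extrapolate_sieve_years(base_digits, base_core_years, target_digits,
                            digits_per_doubling=5.5):
    """Scale sieve core-years from one GNFS job size to another."""
    doublings = (target_digits - base_digits) / digits_per_doubling
    return base_core_years * 2 ** doublings

# C207 took ~60 core-years of sieving; extrapolate to 240 digits:
# (240 - 207) / 5.5 = 6 doublings, so 60 * 2^6 = 3840 core-years.
print(round(extrapolate_sieve_years(207, 60, 240)))  # 3840
```

Against the ~800 core-years actually spent sieving RSA-240, that naive estimate is indeed off by a factor of four or five, which is the improvement being marveled at.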
2019-12-02, 18:40  #3  
Nov 2003
1D24_{16} Posts 
If going after a ~C250, I think 2,1139+ is a better target. It's been waiting for nearly 60 years to be factored. The Euclid-Mullen cofactor is a relative newcomer. Of course I am biased towards Cunningham numbers. I'd love to hear the implementation/parameterization details that resulted in their terrific speed improvement. 

2019-12-02, 19:32  #4 
Sep 2010
So Cal
32_{16} Posts 
New parameter file posted for C240
Here is a link to the new C240 parameter file, posted by Paul Zimmermann about 9 hours ago. https://gforge.inria.fr/scm/browser.php?group_id=2065

2019-12-02, 20:35  #5  
Nov 2003
2^{2}·5·373 Posts 
Last fiddled with by R.D. Silverman on 2019-12-02 at 20:35 

2019-12-02, 22:52  #6 
"Curtis"
Feb 2005
Riverside, CA
5^{3}×37 Posts 
For those curious, but not curious enough to git:
Poly select used some different parameters, notably incr of 110880 (similar to tests Gimarel has run with msieve) and admax (that is, c6 max) of 2e12. CADO-specific params: P 20 million, nq 1296. I bet they'd have better-yet performance with nq of 7776 and admax around 3e11, for the same search time. sizeopteffort was set to 20; I've not seen this set on any previous params file.

Sieve params: factor base bounds of 1.8e9 and 2.1e9. LP bounds of 36/37, mfb 72 and 111 (exactly 2x and 3x LP, so 3LP on one side), lambda values specified at 2.0 and 3.0, respectively. Q from 800 million up. CADO-specific: ncurves0 of *one hundred*; typical previously was 25 or so. ncurves1 of 35; typical previously was 15 or so. tasks.A = 32; this is a new setting not in my ~Sep '19 git version of CADO. This "A" setting, combined with much higher ncurves, appears to be where the new speed is found.

Matrix: target density 200 (170 was the prior standard). This number is about 50 higher than msieve's version of this setting, so this corresponds to an msieve target density of ~150. Not that crazy for a C240.

I checked the params.c90 file, which is where the CADO team explains the meaning of each setting. No mention of "A". However, there is a new setting:
tasks.sieve.adjust_strategy = 0 # this may be 0, 1, 2. 0 is default.
They note that 1 or 2 may require more work than 0, but give more relations. I checked a handful of other params files in today's git clone; no other use of "A".

EDIT: Aha! tasks.I is not set in the new c240 file; I is the equivalent of siever size (e.g. I=16). tasks.A is set instead. I imagine these are related.

Last fiddled with by VBCurtis on 2019-12-02 at 23:03 
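Pulling those values together, the sieve-relevant part of the new file presumably looks something like this. This is a reconstruction from the settings quoted above, not a verbatim copy of params.c240; the exact key names and ordering in the actual file may differ:

```
# Reconstructed sketch, NOT verbatim params.c240
tasks.lim0 = 1800000000          # factor base bound, side 0
tasks.lim1 = 2100000000          # factor base bound, side 1
tasks.lpb0 = 36                  # large-prime bounds, in bits
tasks.lpb1 = 37
tasks.sieve.mfb0 = 72            # 2 * lpb0 -> 2LP on side 0
tasks.sieve.mfb1 = 111           # 3 * lpb1 -> 3LP on side 1
tasks.sieve.lambda0 = 2.0
tasks.sieve.lambda1 = 3.0
tasks.sieve.ncurves0 = 100       # far above the old typical ~25
tasks.sieve.ncurves1 = 35        # old typical ~15
tasks.qmin = 800000000           # special-q from 800M up
tasks.A = 32                     # new setting; replaces tasks.I here
tasks.sieve.adjust_strategy = 0  # may be 0, 1, 2; 0 is default
tasks.filter.target_density = 200
```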
2019-12-02, 23:22  #7  
Nov 2003
2^{2}×5×373 Posts 
I presume the rational side is 2 and the algebraic 3?

What B1/B2 bounds are used for those curves?

Also, what was the sieve area per Q? Was it constant? Or larger for smaller Q? Did they consider trying smaller Q than 800M? How many total Q? What was the average yield per Q? 

2019-12-02, 23:53  #8  
"Curtis"
Feb 2005
Riverside, CA
5^{3}×37 Posts 
35 curves tried on the algebraic side; I believe the logic is that many of the cofactors won't split into 3 factors, so one doesn't try as hard to split the 3LP side. I believe the ECM bounds increase each trial, but the details are not documented anywhere I've seen; perhaps one would need to review the code (or ask the mailing list?) to discover these details. mfb has the same meaning as it does for GGNFS jobs. 

2019-12-03, 00:02  #9 
Nov 2003
16444_{8} Posts 

2019-12-03, 00:06  #10 
Nov 2003
2^{2}·5·373 Posts 

2019-12-03, 00:11  #11 
"Curtis"
Feb 2005
Riverside, CA
1211_{16} Posts 
Sure, but I must leave it to someone more well-versed in NFS to reply.
I believe it's the size in bits of cofactors to be fed to ECM to be split. I believe CADO uses lambda to also control the size of cofactors, as lambda * LP size. This provides finer-grained control over cofactor size; however, I am not sure about this.

I tried today's git on a small job with tasks.I = 12 replaced with tasks.A = 24 (or 32), and got an error that tasks.I was not specified. This means the params.c240 file posted today by PaulZ would not actually run, I think.

I was hoping/speculating that tasks.A was a measure of sieve area, and that CADO might now vary the dimensions of the area intelligently. I = 16 corresponds to 2^16 by 2^15 for the sieve region, akin to 16e GGNFS. Alas, a mystery we await an answer for! 
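If the speculation above is right and tasks.A is simply log2 of the total sieve area, the two settings would line up like this. This is guesswork arithmetic only, not confirmed CADO semantics:

```python
# Speculative arithmetic only: assumes tasks.A = log2(sieve area),
# which is NOT confirmed by anything in the CADO docs discussed above.
from math import log2

def area_from_I(I):
    """Sieve region for siever size I: 2^I wide by 2^(I-1) tall."""
    return 2**I * 2**(I - 1)

# I = 16 gives a 2^16 x 2^15 region, i.e. 2^31 points sieved per special-q.
print(log2(area_from_I(16)))  # 31.0

# Under this reading, tasks.A = 32 would be one doubling of area beyond
# I = 16, landing halfway between I = 16 (2^31) and I = 17 (2^33).
print(log2(area_from_I(17)))  # 33.0
```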