2018-02-05, 03:53   #2
VBCurtis ("Curtis", Riverside, CA)
I've been working on tuning CADO parameters, and have found settings that run 15-30% faster than the 2.3.0 release defaults on numbers from c95 to c125. With these faster settings, I timed CADO 2.3.0 vs factmsieve.py on RSA-120.

CADO took 73,000 CPU-seconds on a 6-core i7, using 12 threads for all stages. Wall-clock time was roughly 10,500 seconds (I neglected to set a timer, so that's accurate to within a couple of minutes). Dividing the two times shows that hyperthreading is worth roughly one extra core's worth of throughput, since CPU time is about 7x wall-clock time on a 6-core machine.
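A quick back-of-the-envelope check of that claim, using only the figures quoted above (this is just arithmetic in Python, not anything CADO reports itself):

[CODE]
# Sanity check of the hyperthreading estimate; all numbers are the ones quoted in this post.
cpu_seconds = 73_000      # total CPU time for RSA-120 across all stages
wall_seconds = 10_500     # approximate wall-clock time (timer not set, +/- a couple minutes)
physical_cores = 6        # 6-core i7, run with 12 threads

effective_cores = cpu_seconds / wall_seconds   # ~6.95 cores of effective parallelism
ht_gain = effective_cores - physical_cores     # ~0.95, i.e. about one extra core

print(f"effective parallelism: {effective_cores:.2f} cores")
print(f"hyperthreading gain:   {ht_gain:.2f} cores over {physical_cores} physical")
[/CODE]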

CADO spent 3500 thread-seconds on poly select, so I allowed msieve the same amount of time in a single-threaded process (I don't presently have a GPU that works with msieve). I then ran factmsieve with 12 threads for sieving and 6 threads for post-processing; 80 minutes of sieving and 20 minutes of post-processing later, the factorization was complete. If we imagine that poly select could conveniently be run 6-threaded (about 10 minutes of wall time), that's roughly 110 minutes for GGNFS versus 175 minutes for CADO.
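Here's that tally spelled out. The 6-threaded poly-select time is the hypothetical figure discussed above; everything else is as measured:

[CODE]
# Reconstructing the GGNFS vs CADO wall-clock comparison from the numbers in this post.
polyselect_thread_seconds = 3_500                        # poly-select budget matched to CADO
polyselect_minutes = polyselect_thread_seconds / 6 / 60  # ~10 min IF it could run 6-threaded
sieve_minutes = 80                                       # factmsieve sieving, 12 threads
postprocess_minutes = 20                                 # factmsieve post-processing, 6 threads

ggnfs_total = polyselect_minutes + sieve_minutes + postprocess_minutes  # ~110 min
cado_total = 10_500 / 60                                                # ~175 min

print(f"GGNFS/factmsieve: {ggnfs_total:.0f} min, CADO 2.3.0: {cado_total:.0f} min")
[/CODE]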

From 97 to 133 digits, a sample of best-result CADO times shows total time doubling roughly every 5.9 digits. I haven't recorded a similar map of GGNFS times, though the rule of thumb has always been that sieve time doubles every 5 digits, so perhaps CADO catches up in speed at higher difficulties.
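Taking both doubling rates at face value, you can extrapolate where the two curves would cross. This is pure speculation made concrete: it assumes both rules keep holding well past the ranges they were observed in, and it starts from the single RSA-120 comparison above:

[CODE]
import math

# Speculative extrapolation only: assumes the 5.9-digit doubling for CADO total time and the
# 5-digit rule of thumb for GGNFS both continue to hold above ~130 digits.
cado_120 = 175          # minutes for RSA-120 with CADO, from the test above
ggnfs_120 = 110         # minutes for RSA-120 with GGNFS/factmsieve, from the test above
cado_doubling = 5.9     # digits per doubling of CADO total time
ggnfs_doubling = 5.0    # digits per doubling of GGNFS sieve time (rule of thumb)

def cado_minutes(digits):
    return cado_120 * 2 ** ((digits - 120) / cado_doubling)

def ggnfs_minutes(digits):
    return ggnfs_120 * 2 ** ((digits - 120) / ggnfs_doubling)

# Solve cado_minutes(d) == ggnfs_minutes(d) for d.
crossover = 120 + math.log2(cado_120 / ggnfs_120) / (1 / ggnfs_doubling - 1 / cado_doubling)
print(f"curves meet near c{crossover:.0f}: "
      f"GGNFS ~{ggnfs_minutes(crossover):.0f} min, CADO ~{cado_minutes(crossover):.0f} min")
# With these inputs the crossover lands in the low c140s.
[/CODE]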

I'm not yet confident that I've found strong parameters for c140 on CADO, but I may repeat the test on RSA-140 once I think I've gotten the best out of CADO.