2022-04-02, 21:38  #12 
"Ed Hall"
Dec 2009
Adirondack Mtns
1010010010101_{2} Posts 
Today is not turning out to be a "better day!" I'm causing duplication of work in another thread, and now I'm finding out that if CADO-NFS is told to stop prior to its filtering, it doesn't give a time for las. I will have to sort out something else. For now, the c160 will not be useful for much. I might just take a break. . .

2022-04-02, 22:45  #13 
"Ed Hall"
Dec 2009
Adirondack Mtns
11×479 Posts 
Maybe I've found a solution. Here's the data for the c160:
Code:
N = 516... <160 digits>
tasks.I = 14
tasks.lim0 = 45000000
tasks.lim1 = 70000000
tasks.lpb0 = 31
tasks.lpb1 = 32
tasks.qmin = 17000000
tasks.filter.target_density = 150.0
tasks.filter.purge.keep = 190
tasks.sieve.lambda0 = 1.84
tasks.sieve.mfb0 = 59
tasks.sieve.mfb1 = 62
tasks.sieve.ncurves0 = 20
tasks.sieve.ncurves1 = 25
tasks.sieve.qrange = 10000

Polynomial Selection (size optimized): Total time: 505021
Polynomial Selection (root optimized): Total time: 26267.6
Lattice Sieving: Total number of relations: 212441669
Lattice Sieving: Total time: 3.16477e+06s (all clients used 4 threads)
Found 149733097 unique, 40170110 duplicate, and 0 bad relations.
cownoise Best MurphyE for polynomial is 1.43789954e-12
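As a quick sanity check on the relation counts above, the duplicate rate for this c160 works out to roughly 21% (a small sketch using only the numbers reported in this post):

```python
# Duplicate-relation rate for the c160 job, from the counts reported above.
unique = 149_733_097
duplicate = 40_170_110
rate = duplicate / (unique + duplicate)
print(f"duplicate rate: {rate:.1%}")  # roughly 21%
```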
2022-04-05, 14:03  #14 
"Ed Hall"
Dec 2009
Adirondack Mtns
12225_{8} Posts 
Here's a c162:
Code:
N = 235... <162 digits>
tasks.I = 14
tasks.lim0 = 45000000
tasks.lim1 = 70000000
tasks.lpb0 = 31
tasks.lpb1 = 32
tasks.qmin = 17000000
tasks.filter.target_density = 150.0
tasks.filter.purge.keep = 190
tasks.sieve.lambda0 = 1.84
tasks.sieve.mfb0 = 59
tasks.sieve.mfb1 = 62
tasks.sieve.ncurves0 = 20
tasks.sieve.ncurves1 = 25
tasks.sieve.qrange = 10000

Polynomial Selection (size optimized): Total time: 508246
Polynomial Selection (root optimized): Total time: 25518.1
Lattice Sieving: Total time: 3.77171e+06s (all clients used 4 threads)
Lattice Sieving: Total number of relations: 218448391
Found 149733097 unique, 40170110 duplicate, and 0 bad relations.
cownoise Best MurphyE for polynomial is 1.16869325e-12
2022-04-09, 00:13  #15 
"Ed Hall"
Dec 2009
Adirondack Mtns
12225_{8} Posts 
Here's a c168:
Code:
N = 385... <168 digits>
tasks.I = 14
tasks.lim0 = 65000000
tasks.lim1 = 100000000
tasks.lpb0 = 31
tasks.lpb1 = 31
tasks.qmin = 10000000
tasks.filter.target_density = 170.0
tasks.filter.purge.keep = 160
tasks.sieve.mfb0 = 58
tasks.sieve.mfb1 = 60
tasks.sieve.ncurves0 = 19
tasks.sieve.ncurves1 = 25
tasks.sieve.qrange = 5000

Polynomial Selection (size optimized): Total time: 999726
Polynomial Selection (root optimized): Total time: 6873.68
Lattice Sieving: Total time: 6.3694e+06s (all clients used 4 threads)
Lattice Sieving: Total number of relations: 179907757
Found 149733097 unique, 40170110 duplicate, and 0 bad relations.
cownoise Best MurphyE for polynomial is 5.83275752e-13
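For a rough comparison across the jobs logged so far, sieve throughput (total relations divided by reported sieve CPU-seconds) drops sharply at c168. A small tally, using only figures copied from the posts above:

```python
# Relations per reported sieve CPU-second; numbers copied from the posts.
jobs = {
    "c160": (212_441_669, 3.16477e6),
    "c162": (218_448_391, 3.77171e6),
    "c168": (179_907_757, 6.3694e6),
}
for name, (rels, secs) in jobs.items():
    print(f"{name}: {rels / secs:5.1f} rels/s")
```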
2022-04-18, 17:50  #16 
"Ed Hall"
Dec 2009
Adirondack Mtns
11×479 Posts 
Here's a c161:
Code:
N = 235... <161 digits>
tasks.I = 14
tasks.lim0 = 45000000
tasks.lim1 = 70000000
tasks.lpb0 = 31
tasks.lpb1 = 32
tasks.qmin = 17000000
tasks.filter.target_density = 150.0
tasks.filter.purge.keep = 190
tasks.sieve.lambda0 = 1.84
tasks.sieve.mfb0 = 59
tasks.sieve.mfb1 = 62
tasks.sieve.ncurves0 = 20
tasks.sieve.ncurves1 = 25
tasks.sieve.qrange = 10000

Polynomial Selection (size optimized): Total time: 493855
Polynomial Selection (root optimized): Total time: 27925.7
Lattice Sieving: Total time: 2.84944e+06s (all clients used 4 threads)
Lattice Sieving: Total number of relations: 202173233
Found 149733097 unique, 40170110 duplicate, and 0 bad relations.
cownoise Best MurphyE for polynomial is 1.47600121e-12
2022-04-20, 21:51  #17 
"Ed Hall"
Dec 2009
Adirondack Mtns
11×479 Posts 
I'm currently running a c164 with A = 28 and adjust_strategy = 2. Will the data from this one compare to the data from the others? What additional things might I need to mention, if any?

2022-04-20, 23:14  #18 
"Curtis"
Feb 2005
Riverside, CA
5643_{10} Posts 
I don't think any. We're looking for deviations from the trendline of "twice as hard every 5.5 digits". When a job is above that trend, it's a sign the params for that job size might benefit from some more attention.
At least, that's what I've found working my way up from 95 to 150 digits. Charybdis occasionally runs two jobs very similar in length with one setting changed between them, as an A/B comparison to determine which setting is better. This is time-consuming at 160+, but it's the sort of work that lets us refine the params set. For instance, we *still* don't have a clear idea of when 3LP pulls ahead of 2LP for CADO. In principle, there should be a single cutoff above which we always use 3LP. If you find yourself running a second job in the 160s the same size as one you've already documented, give 3LP a shot (I can be more specific on settings if you like).

Last fiddled with by VBCurtis on 2022-04-20 at 23:16 
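The "twice as hard every 5.5 digits" yardstick is easy to apply to the jobs already logged in this thread; a minimal sketch, using the c160 and c162 sieve times posted above:

```python
# "Twice as hard every 5.5 digits":
#   predicted_time(d) = t_ref * 2 ** ((d - d_ref) / 5.5)
def predicted_sieve_time(t_ref: float, d_ref: int, d: int) -> float:
    return t_ref * 2 ** ((d - d_ref) / 5.5)

# Reference point: the c160 job earlier in the thread sieved in 3.16477e6 s.
pred = predicted_sieve_time(3.16477e6, 160, 162)
print(f"trend predicts about {pred:.3g} s for a c162; "
      f"the actual c162 job took 3.77171e6 s")
```

The actual c162 came in under the trend, so by this yardstick those params don't look like they need urgent attention.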
2022-04-21, 00:12  #19 
"Ed Hall"
Dec 2009
Adirondack Mtns
1010010010101_{2} Posts 
I've got a pretty large pool of numbers I'm playing with. If you can get me the specific params you'd like me to use, I'll try them on another 164-digit candidate, or close to it. The current one is 345. . . I've got about a dozen to check for something close, which I hope (ironically) doesn't fall to ECM. I have no idea how to use 3LP, so please be quite specific in what I should do.

2022-04-21, 14:24  #20 
"Ed Hall"
Dec 2009
Adirondack Mtns
1495_{16} Posts 
Of course, that meant that all the c164s are falling to ECM now.* If I don't find a suitable c164, would you prefer I move up or down a digit? The leading digits are 345... on the current one, which should be finished tomorrow.
* The best way to get them to succeed at ECM is to look for GNFS candidates, unless you actually try that. . .

Edit: By posting the above I hope that the final c164 will fail ECM. But then because I posted such, it will succeed. But. . .

Last fiddled with by EdH on 2022-04-21 at 14:31 
2022-04-22, 01:57  #21 
"Curtis"
Feb 2005
Riverside, CA
3^{3}×11×19 Posts 
Here are my 3LP settings for params.c165, tested exactly once:
Code:
tasks.I = 14
tasks.qmin = 10000000
tasks.lim0 = 40000000
tasks.lim1 = 60000000
tasks.sieve.lambda0 = 1.83
tasks.lpb0 = 31
tasks.lpb1 = 31
tasks.sieve.mfb0 = 58
tasks.sieve.mfb1 = 88
tasks.sieve.ncurves0 = 18
tasks.sieve.ncurves1 = 10
tasks.sieve.qrange = 5000
tasks.sieve.rels_wanted = 175000000

3LP makes the sieve faster, at the expense of a jump in matrix size. It's not to our benefit to log a 10% improvement in sieve time if we lose 50% to matrix time! Hopefully that's an exaggeration, but that's why we take data. 
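For readers new to 3LP: in CADO-NFS the number of large primes a side may keep is governed by mfb (the bit-size bound on the cofactor left after sieving) relative to lpb (the large-prime bound in bits); as a rough rule, up to ceil(mfb / lpb) large primes fit. A quick check on the params above:

```python
import math

# Rough rule: a side keeps up to ceil(mfb / lpb) large primes, since mfb
# bounds the cofactor's bit-size and each large prime has at most lpb bits.
lpb0, mfb0 = 31, 58   # from the params above
lpb1, mfb1 = 31, 88

print(math.ceil(mfb0 / lpb0))  # 2 -> 2LP on side 0
print(math.ceil(mfb1 / lpb1))  # 3 -> 3LP on side 1
```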
2022-04-22, 02:05  #22 
Apr 2020
929 Posts 
Are you sure that swapping the lims won't improve yield? I thought larger lim on the 2LP side was pretty well established by now. Too lazy to dig up an old polynomial and test-sieve it myself.
