#34
"Ed Hall"
Dec 2009
Adirondack Mtns
2×2,609 Posts

Quote:
(For some reason, I had it in the back of my head that strategy=2 wouldn't work with I=14.)
#35
Apr 2020
13·71 Posts

Quote:
@EdH: I think you might need to take a look at your script, as all your summaries seem to include
Code:
Found 149733097 unique, 40170110 duplicate, and 0 bad relations.
#36
"Ed Hall"
Dec 2009
Adirondack Mtns
2×2,609 Posts
Indeed! I can't find where the report gets written in any of my scripts, but I do have a file with those values from some point, which I harvest for each run. Thanks for catching that. I will definitely have to work on it. I might have to skip remdups4, let Msieve report the duplication, and harvest the values from there.
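A minimal sketch of that harvesting step, in Python (illustrative only, not the actual script; the log filename is a placeholder and the report wording is copied from the summary lines quoted above):
Code:
import re
import sys

# Report line format as it appears in the summaries quoted in this thread, e.g.
#   Found 149733097 unique, 40170110 duplicate, and 0 bad relations.
PATTERN = re.compile(r"Found (\d+) unique, (\d+) duplicate, and (\d+) bad relations")

def harvest(log_path):
    """Return (unique, duplicate, bad) from the last matching line in log_path, or None."""
    counts = None
    with open(log_path) as log:
        for line in log:
            m = PATTERN.search(line)
            if m:
                counts = tuple(int(g) for g in m.groups())
    return counts

if __name__ == "__main__":
    # The default filename is only a placeholder; point it at whatever log holds the report.
    print(harvest(sys.argv[1] if len(sys.argv) > 1 else "filtering.log"))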
#37
"Ed Hall"
Dec 2009
Adirondack Mtns
12142₈ Posts
OK, I'm losing it! The new candidate is the one I just factored. It got mixed into the list because it wasn't finished yet. I need to do some more work before I get to the next candidate.
#38
"Ed Hall"
Dec 2009
Adirondack Mtns
2×2,609 Posts
I have a c164 underway:
Code:
N = 712...<164 digits>
tasks.I = 14
tasks.lim0 = 60000000
tasks.lim1 = 40000000
tasks.lpb0 = 31
tasks.lpb1 = 31
tasks.qmin = 10000000
tasks.sieve.adjust_strategy = 2
tasks.sieve.lambda0 = 1.83
tasks.sieve.mfb0 = 58
tasks.sieve.mfb1 = 88
tasks.sieve.ncurves0 = 18
tasks.sieve.ncurves1 = 10
tasks.sieve.qrange = 5000
tasks.sieve.rels_wanted = 175000000
#39
"Ed Hall"
Dec 2009
Adirondack Mtns
2·2,609 Posts
Here's the next c164 (I=14 and adjust_strategy=2):
Code:
N = 712... <164 digits>
tasks.I = 14
tasks.lim0 = 60000000
tasks.lim1 = 40000000
tasks.lpb0 = 31
tasks.lpb1 = 31
tasks.qmin = 10000000
tasks.filter.target_density = 170.0
tasks.filter.purge.keep = 160
tasks.sieve.adjust_strategy = 2
tasks.sieve.lambda0 = 1.83
tasks.sieve.mfb0 = 58
tasks.sieve.mfb1 = 88
tasks.sieve.ncurves0 = 18
tasks.sieve.ncurves1 = 10
tasks.sieve.qrange = 5000

Polynomial Selection (size optimized): Total time: 524425
Polynomial Selection (root optimized): Total time: 30333.8
Lattice Sieving: Total time: 4.46548e+06s (all clients used 4 threads)
Lattice Sieving: Total number of relations: 175001545
Found 122488916 unique, 45564734 duplicate, and 0 bad relations.
cownoise Best MurphyE for polynomial is 8.11818879e-13
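A quick back-of-the-envelope on those totals (plain Python, using only the figures reported above; how the total sieving time is accounted across clients and threads is not broken out here, so the per-relation figures are rough):
Code:
# Figures copied from the run summary above.
unique     = 122_488_916
duplicate  = 45_564_734
total_rels = 175_001_545
sieve_secs = 4.46548e6    # "Lattice Sieving: Total time"

print(f"duplicate rate: {duplicate / (unique + duplicate):.1%}")        # ~27.1%
print(f"sieve seconds per raw relation:    {sieve_secs / total_rels:.4f}")
print(f"sieve seconds per unique relation: {sieve_secs / unique:.4f}")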
#40
"Curtis"
Feb 2005
Riverside, CA
3²×5⁴ Posts
Poly score 2.5% worse, but sieve time roughly 5% better. Nice!
The next settings to test are A=28 and mfb1 = 89. A=28 is the more important test (mfb should not change sieve time very much).
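To spell out why that trade is a win, a rough sanity check in Python (the assumption that sieve time scales roughly as 1/MurphyE is just a rule of thumb here, not something measured in this thread):
Code:
# Using only the percentages quoted above.
poly_score_ratio = 0.975   # new MurphyE / old MurphyE  (~2.5% worse)
sieve_time_ratio = 0.95    # new sieve time / old       (~5% faster)

# Assuming sieve time ~ 1/MurphyE, the weaker poly alone "should" have cost
# about 1/0.975 - 1 = ~2.6% more time; crediting that back isolates the effect
# of whatever parameter change distinguishes the two runs:
net_ratio = sieve_time_ratio * poly_score_ratio
print(f"estimated speedup from the parameter change alone: {1 - net_ratio:.1%}")  # ~7.4%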
#41
"Ed Hall"
Dec 2009
Adirondack Mtns
2·2,609 Posts
#42
"Curtis"
Feb 2005
Riverside, CA
3²×5⁴ Posts
I think / hope each change should be independent; that is, you have determined strat 2 is faster (really, Charybdis determined this over a year ago), so now it's the default. Next, try A = 28; once we know the best setting there, try mfb's.
One change at a time with A/B comparisons gives us "clear" evidence for what to use; once the big settings like A and lp are set, the little settings (mfb, starting Q, lambda, target rels) can be dialed in, in the hope of finding a few more % of speed.
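Purely as an illustration of that bookkeeping (the trial timings below are invented placeholders, not results from this thread; the baseline time is rounded from the c164 summary earlier):
Code:
# One-change-at-a-time A/B bookkeeping, illustrative only.
baseline = {"desc": "I=14, adjust_strategy=2, mfb1=88", "sieve_secs": 4.47e6}  # ~ the c164 run above

trials = [
    {"change": "A=28",    "sieve_secs": 4.40e6},   # hypothetical timing
    {"change": "mfb1=89", "sieve_secs": 4.45e6},   # hypothetical timing
]

for t in trials:
    delta = t["sieve_secs"] / baseline["sieve_secs"] - 1
    print(f'{t["change"]:>8}: {delta:+.1%} sieve time vs baseline ({baseline["desc"]})')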
#43
"Ed Hall"
Dec 2009
Adirondack Mtns
2×2,609 Posts

Quote:
Again, I'm out of c164 candidates, but I may have some lower c165s.
#44
Apr 2020
13·71 Posts