mersenneforum.org > Factoring Projects > CADO-NFS

Old 2019-01-04, 12:12   #320
Gimarel
 
Apr 2010

2^2·37 Posts
Default C105 Parameter

Better sieving parameters for C105. I didn't change the poly selection part.

In my experience tasks.lim1 should be equal to the top end of the (expected) sieving range.
If you want to oversieve a bit to reduce the matrix size, it's better to use the parameter tasks.filter.required_excess than to specify tasks.sieve.rels_wanted.

In my example the lambda0 and lambda1 parameters are essential, because otherwise the siever produces too many useless relations. These are not optimized but should be about right. The parameter tasks.sieve.rels_wanted is needed in my example, because cado overestimates the needed relations by a factor of 2.

The parameters tasks.lim0 and tasks.qmin are not optimized.

With these parameters I get a sieving speedup of about 10-15% and a smaller matrix.
Attached Files
File Type: txt params.c105.txt (2.1 KB, 47 views)
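For concreteness, a params-file fragment of the shape being discussed might look like this. The values below are illustrative placeholders only (Gimarel's tuned values are in the attachment), and the exact option spellings should be checked against your CADO-NFS version:

```
# illustrative values only -- see the attached params.c105.txt for real ones
tasks.lim0 = 1800000
tasks.lim1 = 2400000                 # ~ top end of the expected special-q range
tasks.qmin = 50000
tasks.sieve.lambda0 = 1.775          # keeps the siever from reporting useless relations
tasks.sieve.lambda1 = 1.775
tasks.sieve.rels_wanted = 6000000    # cado's own estimate is roughly 2x too high
# alternative to rels_wanted, if you want to oversieve for a smaller matrix:
# tasks.filter.required_excess = 0.05
```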
Old 2019-01-04, 19:07   #321
VictordeHolland
 
 
"Victor de Hollander"
Aug 2011
the Netherlands

2^3·3·7^2 Posts
Thumbs up

Quote:
Originally Posted by Gimarel View Post
Better sieving parameters for C105. I didn't change the poly selection part. [...]
We want more parameter files from you!

c105
Code:
107713203868901378890486921109668147250599518916591688453404410233186403423078799985908643904700547429021
Code:
Info:Square Root: Factors: 464884127923667954781021992089902060842382186081760172952730363 231699035090712841274341777335868170736967
Info:Polynomial Selection (size optimized): Aggregate statistics:
Info:Polynomial Selection (size optimized): potential collisions: 4473.5
Info:Polynomial Selection (size optimized): raw lognorm (nr/min/av/max/std): 4342/28.940/37.049/44.340/1.438
Info:Polynomial Selection (size optimized): optimized lognorm (nr/min/av/max/std): 4342/28.940/32.749/37.280/1.096
Info:Polynomial Selection (size optimized): Total time: 366.14
Info:Polynomial Selection (root optimized): Aggregate statistics:
Info:Polynomial Selection (root optimized): Total time: 195.9
Info:Polynomial Selection (root optimized): Rootsieve time: 195.31
Info:Generate Factor Base: Total cpu/real time for makefb: 2.06/0.151476
Info:Generate Free Relations: Total cpu/real time for freerel: 215.04/6.83992
Info:Lattice Sieving: Aggregate statistics:
Info:Lattice Sieving: Total number of relations: 6319648
Info:Lattice Sieving: Average J: 1918.22 for 63809 special-q, max bucket fill -bkmult 1.0,1s:1.301890
Info:Lattice Sieving: Total time: 8788.08s
Info:Filtering - Duplicate Removal, splitting pass: Total cpu/real time for dup1: 21.97/25.6841
Info:Filtering - Duplicate Removal, splitting pass: Aggregate statistics:
Info:Filtering - Duplicate Removal, splitting pass: CPU time for dup1: 25.5s
Info:Filtering - Duplicate Removal, removal pass: Total cpu/real time for dup2: 111.77/40.0544
Info:Filtering - Duplicate Removal, removal pass: Aggregate statistics:
Info:Filtering - Duplicate Removal, removal pass: CPU time for dup2: 30.6s
Info:Filtering - Singleton removal: Total cpu/real time for purge: 57.94/24.8971
Info:Filtering - Merging: Total cpu/real time for merge: 45.12/29.8022
Info:Filtering - Merging: Total cpu/real time for replay: 7.98/6.32849
Info:Linear Algebra: Total cpu/real time for bwc: 534.81/42.85
Info:Linear Algebra: Aggregate statistics:
Info:Linear Algebra: Krylov: WCT time 18.05, iteration CPU time 0, COMM 0.0, cpu-wait 0.0, comm-wait 0.0 (5000 iterations)
Info:Linear Algebra: Lingen CPU time 57.86, WCT time 5.44
Info:Linear Algebra: Mksol: WCT time 12.33, iteration CPU time 0, COMM 0.0, cpu-wait 0.0, comm-wait 0.0 (3000 iterations)
Info:Quadratic Characters: Total cpu/real time for characters: 13.61/2.23025
Info:Square Root: Total cpu/real time for sqrt: 150.89/22.1238
Info:HTTP server: Shutting down HTTP server
Info:Complete Factorization: Total cpu/elapsed time for entire factorization: 25453.4/787.263
25453 CPUsec / 787 WCT
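As a quick sanity check on that summary line, dividing total CPU time by wall-clock time gives the effective parallelism of the run (a rough figure only, since different phases use different thread counts):

```python
# Effective parallelism of the c105 run, from the "Complete Factorization"
# line in the log above: total CPU seconds over wall-clock seconds.
cpu_seconds = 25453.4
wct_seconds = 787.263

effective_cores = cpu_seconds / wct_seconds
print(f"{effective_cores:.1f} effective cores")  # -> 32.3 effective cores
```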
Old 2019-01-05, 21:43   #322
VBCurtis
 
 
"Curtis"
Feb 2005
Riverside, CA

10623_8 Posts
Default

Quote:
Originally Posted by Gimarel View Post
Better sieving parameters for C105. I didn't change the poly selection part. [...]
Nice! I tested on a C103, and got a faster time than my previous best at C102.
I used my own poly-select parameters (posted on my C105 file).
If I understand lambda correctly, 1.775 * 27 ≈ 48, so you're effectively using mfb0 and mfb1 of 48 for a 27LP job. Interesting! I confirmed this by setting those to 48 rather than 54, with almost no change in sieve time.
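The arithmetic behind that reading, spelled out (this is one interpretation of lambda as a multiplier on the large-prime bit bound, not official CADO documentation):

```python
# lambda scales the large-prime bound (in bits), so lambda * lpb acts as
# an effective mfb. With Gimarel's lambda = 1.775 and 27-bit large primes:
lpb = 27
lam = 1.775
effective_mfb = lam * lpb
print(effective_mfb)  # -> 47.925, i.e. an effective mfb of ~48
```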

I then changed qmin to 60k and rels_wanted to 6.5M (to correct for the massive number of duplicate relations produced at small Q). This job filtered twice, as did Gimarel's parameters; sieve time, CPU time, and WCT were all 8+% better than with Gimarel's settings. I run 30-threaded on a Xeon, so quite a lot of relations are found during the first filtering pass; other testers may find different timings using fewer threads.
My results:
Gimarel's params: sieve time 4890, CPU time 17700, WCT 541.
Setting Q=60k: sieve time 4220, CPU time 16460, WCT 502.

I did many other tests, such as lambda = 1.75 or 1.8, Q-min 30k, 40k, 80k; none faster than 502 WCT but lots around 520.

It is clear that the CPU-time figure has some calculation flaw, as CPU time > WCT * threads. The machine is a dual 10-core, running 10-threaded msieve LA; I use 30 threads for tasks, 20 threads for server tasks.

I'll next try different lim's and LA settings.
Old 2019-01-16, 23:59   #323
Nooks
 
Jul 2018

19 Posts
Default

Curtis:

Regarding the c172 from last week: if I run more GNFS jobs (though I don't plan to right this moment), I will commit to not fiddling with parameters during the run as I did here. In particular, I changed the target matrix density from 170 to 110 (I was getting a lot of duplicates and not many relations per workunit), and I tried adjusting qmin down (from 19600000 to 9800000) towards the end, though I don't think that had any effect at all; it certainly did not start sieving below what it had already done.

Code:
Lattice Sieving: Total time: 1.49137e+07s
Linear Algebra: Total cpu/real time for bwc: 4.80653e+06/653502
Complete Factorization: Total cpu/elapsed time for entire factorization: 2.65919e+07/1.17675e+06
Filtering - Merging: Merged matrix has 15059957 rows and total weight 1656595352 (110.0 entries per row on average)

Like before, the timing information is a bit of a mess.

To extract timing information from the log:

$ egrep -i 'total.+time' 37771_279.log | grep -viw debug | sed -e's/^.\+Info://' | uniq -c | less

A bit of a hack, but it lets me see that the reported sieving time only ever increases as I stop and start the process. Apologies if this is less than useful; I need to be a little more methodical about how I approach this.
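A Python version of that pipeline, for anyone who prefers it (a sketch: it drops uniq's repeat counts, and the line format it assumes matches the log excerpts quoted in this thread, which may differ in other CADO versions):

```python
import re

def timing_lines(log_text):
    """Extract 'total ... time' lines, skip Debug lines, strip the
    '...Info:' prefix, and collapse consecutive duplicates."""
    pat = re.compile(r'total.+time', re.IGNORECASE)
    out = []
    for line in log_text.splitlines():
        if 'debug' in line.lower():           # grep -viw debug
            continue
        if not pat.search(line):              # egrep -i 'total.+time'
            continue
        line = re.sub(r'^.*Info:', '', line)  # sed -e's/^.\+Info://'
        if not out or out[-1] != line:        # uniq (repeat counts omitted)
            out.append(line)
    return out

sample = """\
12345 Info:Lattice Sieving: Total time: 1.49137e+07s
12345 Info:Lattice Sieving: Total time: 1.49137e+07s
12345 Info:Linear Algebra: Total cpu/real time for bwc: 4.80653e+06/653502
"""
for line in timing_lines(sample):
    print(line)
```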
Old 2019-01-17, 01:02   #324
VBCurtis
 
 
"Curtis"
Feb 2005
Riverside, CA

11·409 Posts
Default

Thanks for the data! Also thanks for the unix protip to extract the info.
Your changes don't distort the data much, if at all; changing q-min after the run begins won't alter sieve behavior, as you discovered. Changing matrix density alters post-processing, but does nothing to the sieve process.
Old 2019-03-27, 19:20   #325
EdH
 
 
"Ed Hall"
Dec 2009
Adirondack Mtns

2^2·5·173 Posts
Default Ubuntu 18.04 and CADO-NFS

I've upgraded some of my machines that were running CADO-NFS from Ubuntu 16.04 to Ubuntu 18.04. Now they won't run the previously built CADO-NFS, and trying to build from scratch also fails. Any thoughts? Is there something simple I'm missing?

Code:
[ 44%] Building C object sieve/strategies/CMakeFiles/benchfm.dir/utils_st/tab_strategy.c.o
Linking CXX executable benchfm
/usr/bin/ld: CMakeFiles/benchfm.dir/utils_st/tab_point.c.o: relocation R_X86_64_32 against `.rodata' can not be used when making a PIE object; recompile with -fPIC
/usr/bin/ld: final link failed: Nonrepresentable section on output
collect2: error: ld returned 1 exit status
sieve/strategies/CMakeFiles/benchfm.dir/build.make:287: recipe for target 'sieve/strategies/benchfm' failed
make[2]: *** [sieve/strategies/benchfm] Error 1
CMakeFiles/Makefile2:1454: recipe for target 'sieve/strategies/CMakeFiles/benchfm.dir/all' failed
make[1]: *** [sieve/strategies/CMakeFiles/benchfm.dir/all] Error 2
Makefile:123: recipe for target 'all' failed
make: *** [all] Error 2
Makefile:7: recipe for target 'all' failed
make: *** [all] Error 2
I have recompiled GMP and GMP-ECM with no issues and YAFU still runs OK, but I haven't tried recompiling YAFU or any of my other packages, yet.
Old 2019-03-27, 20:44   #326
EdH
 
 
"Ed Hall"
Dec 2009
Adirondack Mtns

D84_16 Posts
Default Previous post update

Apparently, the upgrade to 18.04 on my machines resulted in Python being removed. Installing both Python and Python3 seems to have fixed the issue.
Old 2019-03-28, 01:52   #327
swellman
 
 
Jun 2012

2^4·181 Posts
Default

Quote:
Originally Posted by EdH View Post
Apparently, the upgrade to 18.04 on my machines resulted in Python being removed. Installing both Python and Python3 has seemed to fix the issue.
Had the same experience a few weeks ago with 18.04. CADO would just crash despite the fact that I had Python3 installed. It all worked after I installed the missing Python package.

EdH - can you add this tip to your excellent install guide for CADO?
Old 2019-03-28, 02:34   #328
EdH
 
 
"Ed Hall"
Dec 2009
Adirondack Mtns

2^2·5·173 Posts
Default

Quote:
Originally Posted by swellman View Post
Had the same experience a few weeks ago with 18.04. CADO would just crash despite the fact I had installed Python3. It all worked after I installed the missing Python package.

EdH - can you add this tip to your excellent install guide for CADO?
Thanks for the confirmation. I do see that I only need to reinstall Python, rather than Python3.

I will add a note in a day or so. I didn't want to edit anything while the board was acting up.
Old 2019-03-31, 08:36   #329
VBCurtis
 
 
"Curtis"
Feb 2005
Riverside, CA

11×409 Posts
Default

I have one machine running 18.04 and a few running 16.04. The 18.04 machine has a CADO install that won't play well with the installs on the other machines: when I try a distributed GNFS job, the clients on 16.04 (even with the newest CADO git) notice that /download/las differs from the one on the server, download las from the server, and then crash upon invocation, because 18.04 has a newer GCC so the libs don't match.
So, until/unless I upgrade my 16.04 machines and rebuild CADO, I can't run a server/client setup for a big job. Annoying, and I wish CADO wouldn't check the client las against the server las.
Unfortunately, the 18.04 machine is the only one that all my others can connect to, sigh.
Old 2019-03-31, 13:53   #330
EdH
 
 
"Ed Hall"
Dec 2009
Adirondack Mtns

110110000100_2 Posts
Default

I'm not sure if this helps, but I had a similar issue due to mixed hardware, and I have to use --binddir=build/<username>/ on my clients. My current server is on 16.04 and some clients are on 18.04.