I could use some help working with replay.
From README.msieve:
[code]
- use CADO-NFS for the filtering. In what follows, let 'prefix' be the prefix used for all the CADO filenames
- use the CADO 'replay' binary with --for_msieve to produce a file <prefix>.cyc
[/code]
replay usage:
[code]
$ ../build/math79/filter/replay
Usage: ../build/math79/filter/replay <parameters>
The available parameters are the following:
    -purged                 input purged file
    -his                    input history file
    -out                    basename for output matrices
    -skip                   number of heaviest columns that go to the dense matrix (default 32)
    -index                  file containing description of rows (relations-sets) of the matrix
    -ideals                 file containing correspondence between ideals and matrix columns
    -force-posix-threads    (switch)
    -path_antebuffer        path to antebuffer program
    -for_msieve             output matrix in msieve format
    -Nmax                   stop at Nmax number of rows (default 0)
    -verbose_flags          fine grained control on which messages get printed
[/code]
replay attempt:
[code]
$ ../build/math79/filter/replay -for_msieve -purged c100.purged.gz -out c100.cyc -his c100.history.gz
# (2acb184f4) ../build/math79/filter/replay -for_msieve -purged c100.purged.gz -out c100.cyc -his c100.history.gz
# List of modified files in working directory and their SHA1 sum:
# (tarball extracted)
# Compiled with gcc 5.4.0 20160609
# Compilation flags -std=c99 -g -W -Wall -O2 -msse3 -mssse3 -msse4.1 -mavx -mpclmul
# Output matrices will be written in text format
antebuffer set to /home/math79/Math/cado-nfs/build/math79/utils/antebuffer
Sparse matrix has 676479 rows and 3141033 cols
The biggest index appearing in a relation is 3141033
Reading row additions
# Read 1024 row additions in 0.0s -- 96668.2 line/s
# Read 2048 row additions in 0.0s -- 163645.9 line/s
# Read 4096 row additions in 0.0s -- 275344.9 line/s
# Read 8192 row additions in 0.0s -- 453749.6 line/s
# Read 16384 row additions in 0.0s -- 650469.3 line/s
# Read 32768 row additions in 0.0s -- 869410.1 line/s
# Read 65536 row additions in 0.1s -- 1031672.9 line/s
# Read 131072 row additions in 0.1s -- 1149342.5 line/s
# Read 262144 row additions in 0.2s -- 1236750.1 line/s
# Read 524288 row additions in 0.4s -- 1166020.3 line/s
# Read 1048576 row additions in 0.8s -- 1361737.7 line/s
# Done: Read 1804303 row additions in 1.4s -- 1262598.0 line/s
Error, skip should be 0 with --for_msieve
[/code]
Do I need to provide some other parameters, or something else?
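My guess, going only by that last error line, is that replay wants the dense part disabled explicitly, i.e. that I should be passing -skip 0 myself (untested; the file names are just the ones from my c100 run):

```shell
# Guess based on "Error, skip should be 0 with --for_msieve":
# pass -skip 0 explicitly so no columns go to the dense matrix.
../build/math79/filter/replay -for_msieve -skip 0 \
    -purged c100.purged.gz -out c100.cyc -his c100.history.gz
```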
Darn the "edit window"...:sad:
[strike]Never-mind. I think I have it working, at least this far...[/strike] I couldn't find a purge.log file, but I tried to continue without it.
[code]
$ ../../msieve/msieve -v -nf c100.fb -s c100 -nc2 "cado_filter=1"

Msieve v. 1.54 (SVN 1015)
Fri May 11 00:03:19 2018
random seeds: 5d49b7ee 1fc2f017
factoring 12957015248209241740975112870606500218152612197581145459935694931757631591512221494932593267684526589 (101 digits)
searching for 15-digit factors
commencing number field sieve (101-digit input)
R0: -2155705088808880551219399
R1: 32505869823781
A0: 16513438386348717729880931554
A1: -12209262827745380195609
A2: -10321887470935094
A3: -335931020
A4: 600
skew 1.00, size 5.291e-14, alpha -3.954, combined = 4.281e-11 rroots = 2
commencing linear algebra
assuming CADO-NFS filtering
read 171923 cycles
cycles contain 0 unique purge entries
error: cannot locate relation 16
[/code]
[QUOTE=RichD;487378]I assume that was a joke. :smile:[/QUOTE]
Yessir. :) I restarted the job about 36 hr after it began, and the ETA is over 24 hr sooner. A slight parameter oversight, that! ~60 hr savings for a now 4 day job.
@EdH I think that the CADO format changed and msieve hasn't been updated. You should be able to pass the raw relations to msieve filtering though.
[QUOTE=henryzz;487406]@EdH I think that the CADO format changed and msieve hasn't been updated. You should be able to pass the raw relations to msieve filtering though.[/QUOTE]
Is there an easier way to retrieve the relations than to manually extract them from 147 archived files?
[QUOTE=EdH;487422]Is there an easier way to retrieve the relations than to manually extract them from 147 archived files?[/QUOTE]
Due to how gzip works you should be able to concatenate all the files and then extract. You then might want to remove all the lines beginning with # although I think msieve would probably cope. |
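Untested, but something along these lines should do it (adjust the glob to wherever your 147 files actually live):

```shell
# Concatenated gzip streams are themselves a valid gzip stream,
# so the per-workunit files can simply be cat'ed together:
cat *.gz > rels.dat.gz

# Then strip the CADO comment lines (the ones starting with '#'):
zcat rels.dat.gz | grep -v '^#' > rels.dat
```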
[QUOTE=henryzz;487437]Due to how gzip works you should be able to concatenate all the files and then extract. You then might want to remove all the lines beginning with # although I think msieve would probably cope.[/QUOTE]
Worked great - thanks! All I needed to do was:
[code]
cat *.gz > rels.dat.gz
[/code]
convert c100.poly to c100.fb and run msieve:
[code]
$ ../msieve -i c100.n -s rels.dat.gz -l c100msieve.log -nf c100.fb -t 8 -nc1
$ ../msieve -i c100.n -s rels.dat.gz -l c100msieve.log -nf c100.fb -t 8 -nc2
linear algebra completed 212499 of 213442 dimensions (99.6%, ETA 0h 0m)
$ ../msieve -i c100.n -s rels.dat.gz -l c100msieve.log -nf c100.fb -t 8 -nc3
[/code]
And I only separated the three steps to see how they performed. I'm sure:
[code]
$ ../msieve -i c100.n -s rels.dat.gz -l c100msieve.log -nf c100.fb -t 8 -nc
[/code]
would have run as well. Now to work on scripting it all...
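A first rough sketch of the wrapper I have in mind (it assumes c100.n and c100.fb already exist, i.e. the poly-to-fb conversion was done beforehand, and that the *.gz relation files are in the current directory):

```shell
#!/bin/bash
# Sketch only: gather the CADO relation files and hand them to msieve
# for filtering, linear algebra, and square root in one go.
set -e
cat *.gz > rels.dat.gz
../msieve -i c100.n -s rels.dat.gz -l c100msieve.log -nf c100.fb -t 8 -nc
```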
Well, I'm still not enlightened enough yet! :sad:
How should I proceed if msieve responds with:
[code]
filtering wants 1000000 more relations
[/code]
Is there a cado-nfs command to sieve more, or should I hit the ggnfs sievers a bit? Is there a setting in params.cxxx that I should change to give cado-nfs a better chance of providing enough relations on its first run?
Yes!
I don't recall the command offhand, but it's listed in one of the readme files included in the main CADO folder. There's a param setting to set minimum relations, and a different setting to tell CADO what % more relations to produce if filtering fails (default is 1%).
Took me a while to find, as it's not the readme in the main CADO folder. In /scripts/cadofactor, there's a readme, with this info:
"Note: if your factorization already started the linear algebra step, and you want to do more sieving, you can restart it with a larger "rels_wanted" than the current number of relations. For example if you have say 1000000 relations (grep "is now" in the log file), just add in the cado-nfs.py line:

tasks.sieve.rels_wanted=1000001

If not enough relations are collected after the filtering step, the sieve is executed again with an additional number of wanted relations. The parameter tasks.filter.add_ratio controls the number of additional required relations as a ratio of the number of unique relations already collected:

tasks.filter.add_ratio=0.05

specifies that you want 5% more relations before next filtering step. The default value is 0.01 (i.e. 1% more relations)."
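So, if I read that right, resuming my c100 with extra sieving would look something like this (untested; the snapshot filename is whatever my run actually left behind, and the relation count is the readme's example value):

```shell
# Restart the run asking for more relations than already collected,
# which forces more sieving before filtering is retried:
./cado-nfs.py /tmp/c100_workdir/c100.parameters_snapshot.0 tasks.sieve.rels_wanted=1000001

# Or, when launching a fresh job, ask for 5% extra relations
# each time filtering comes up short:
./cado-nfs.py params.c100 tasks.filter.add_ratio=0.05
```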
Ran a c140 on CADO, just to get familiar. The number was an SNFS with a natural octic. CADO rejects octics, so I downconverted to a septic which looked decent enough, but has a yield way below the one from polyselect. Reading the papers from Murphy et al. did not produce any actionable ideas. So I ran it with GNFS, and it finished successfully. I note:
1) The software overall is very high quality.
2) It's not all that hard to split the range and run two or more different servers on different sites.
3) The crypto certificate verification fails under various scenarios: A) tunneling through a port redirect behind a firewall, B) the client has its clock set to a wrong date.

In this light, if someone sets up a public CADO server (which would be nice), it is maybe OK to turn off SSL, but then the downloading of executables should be turned off as well; I think there is such a mode.