mersenneforum.org (https://www.mersenneforum.org/index.php)
-   CADO-NFS (https://www.mersenneforum.org/forumdisplay.php?f=170)
-   -   CADO NFS (https://www.mersenneforum.org/showthread.php?t=11948)

Shaopu Lin 2009-05-26 01:09

CADO NFS
 
The cado nfs suite is now available from [url]http://cado.gforge.inria.fr[/url].

Jeff Gilchrist 2009-05-26 17:56

[QUOTE=Shaopu Lin;174811]The cado nfs suite is now available from [url]http://cado.gforge.inria.fr[/url].[/QUOTE]

Anyone have success building this? I keep getting errors during the build process. Seems quite complex with pthreads and MPI versions available.

CRGreathouse 2009-05-26 18:52

Heh, I can't even set up my networking for it properly, let alone build it.

10metreh 2009-05-26 19:13

How fast is it actually meant to be?

KriZp 2009-05-26 20:24

It built fine for me by just typing "make" (it downloaded CMAKE and built it first), and after I got the ssh-agent working it ran fine on localhost, factoring the c59 example provided. I have been unable to figure out how to make use of remote hosts, the syntax of the mach_desc file is not explained anywhere.
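The sequence KriZp describes amounts to the following sketch (the tarball name is illustrative; the script name run_example.sh is the one mentioned later in this thread):

```shell
# Unpack and build; the top-level Makefile downloads and builds CMake
# first if it is not already installed.
tar xzf cado-nfs.tar.gz
cd cado-nfs
make

# The driver script dispatches jobs over ssh, even to localhost, so an
# ssh-agent holding a key authorized for localhost must be running.
eval "$(ssh-agent)"
ssh-add
ssh localhost true   # should succeed without a password prompt

# Factor the bundled c59 example on the local machine.
./run_example.sh
```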

KriZp 2009-05-26 23:39

It was simply a matter of putting the executables on the remote host and editing the machine description part of the run_example.sh script to include the lines
[code][remote]
tmpdir=$t/tmp
cadodir=/path/to/build/directory/
remote_host_name cores=1[/code]

It then used 1 remote and 1 local core for polyselect, 2 local and 1 remote core for the sieving, and 1 local for the rest.
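Before the [remote] section above can work, the remote machine needs a copy of the executables at the path named in cadodir, plus passwordless ssh access. A rough sketch (the host name node2 is illustrative, standing in for remote_host_name above):

```shell
# Copy the build tree to the remote host, at the same path given
# as cadodir in the machine description.
rsync -a /path/to/build/directory/ node2:/path/to/build/directory/

# Verify that ssh works without a password prompt (via ssh-agent),
# since the driver script will dispatch sieving jobs this way.
ssh node2 true
```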

frmky 2009-05-27 00:04

CADO NFS
 
Moving the discussion of CADO NFS out of the Links thread...

I've downloaded the source, compiled it using pthreads, and have successfully run a GNFS factorization using the included perl script. I have also noticed that the poly file format and relation format match those of GGNFS. (Thanks for that!) I have not yet figured out how (1) given a polynomial file, do a complete SNFS run, and (2) given a polynomial file and set of relations that possibly includes duplicates and bad relations, do all post-processing steps. Any guidance?

Once I know how to do (2), I will determine how well bwc runs on our workstation with up to 32 threads, and on our beowulf cluster of 10x4 cores.

akruppa 2009-05-27 00:26

[QUOTE=10metreh;174902]How fast is it actually meant to be?[/QUOTE]

The siever can't compete with Franke/Kleinjung's siever yet. It's slower and uses much more memory. The core sieving routines need a complete overhaul. Embarrassingly, it sieves special-q only on the algebraic side so far.

[QUOTE=frmky;174928]I have not yet figured out how (1) given a polynomial file, do a complete SNFS run, and (2) given a polynomial file and set of relations that possibly includes duplicates and bad relations, do all post-processing steps. Any guidance?
[/QUOTE]

The perl script keeps track of which tasks are already done by <prefix>.<task>_done files, so you can write your own poly file and "touch <prefix>.polysel_done" (e.g., "touch 797161_29.polysel_done"). The perl script should generate the factor base and start sieving. If you already have relations, you should be able to copy your own files (matching the naming scheme of the perl script, e.g., "797161_29.rels.9000000-9100000") and simply run the perl script again. It should check the relation files, count how many relations there are, start sievers, and if there are enough relations, try a filtering run. Warning: a file that contains bad relations is deleted. In fact, the script is a bit over-eager "cleaning up" sometimes, [B]keep backups[/B]!
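Put together, the recipe above looks roughly like this (the prefix 797161_29 and the relation-file range are the examples from the post; the driver-script name cadofactor.pl and the parameter-file name are assumptions and may differ in your version):

```shell
# (1) SNFS: supply your own polynomial file, then mark polynomial
#     selection as already done so the script skips that stage.
cp mypoly 797161_29.poly
touch 797161_29.polysel_done

# (2) Resuming: copy existing relation files in, named by range to
#     match the script's naming scheme.
cp relations.9000000-9100000 797161_29.rels.9000000-9100000

# Keep backups -- the script deletes any file containing bad relations.
mkdir -p backup && cp 797161_29.rels.* backup/

# Re-run the driver; it checks and counts relations, starts more
# sievers if needed, and tries a filtering run once there are enough.
./cadofactor.pl 797161_29.param
```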

More tomorrow,

Alex

R.D. Silverman 2009-05-27 11:04

[QUOTE=akruppa;174930]The siever can't compete with Franke/Kleinjung's siever yet. [...][/QUOTE]

Is this a POSIX archive? The version of tar that I have will not read --posix archives.

Indeed, after I gunzipped the file, neither tar -x nor tar -t works on the file; tar just sits there and does nothing.

akruppa 2009-05-27 11:29

I think it's a GNU tar archive... what version of tar are you using? Is a GNU version of tar installed somewhere, maybe named "gtar" ?
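If GNU tar is installed under another name, extraction might be sketched as follows (the archive file name is illustrative):

```shell
# Decompress separately, then let GNU tar (often installed as "gtar"
# on non-GNU systems) unpack the stream.
gzip -dc cado-nfs.tar.gz | gtar xf -

# Failing that, check what the decompressed file actually is:
file cado-nfs.tar
```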

Alex

R.D. Silverman 2009-05-27 11:42

[QUOTE=akruppa;174970]I think it's a GNU tar archive... what version of tar are you using? Is a GNU version of tar installed somewhere, maybe named "gtar" ?

Alex[/QUOTE]


It is GNU tar 1.12

