mersenneforum.org ggnfs, msieve, and factmsieve.py

 2018-11-23, 18:04 #1 md12345   Nov 2018 11 Posts ggnfs, msieve, and factmsieve.py I am trying to factor some RSA-512 numbers and have seen that the preferred approach is to use ggnfs and msieve. I found that factmsieve.py is a driver for both of these, but I have been having issues getting ggnfs to build. msieve built properly, but I have tried building ggnfs from both the github and the sourceforge versions and both of them have issues. The github version had build problems, which are discussed in this forum post: https://www.mersenneforum.org/showthread.php?t=20468 but I couldn't really find a solution there. One user referred the OP of that post to some precompiled binaries, but the link is dead. The sourceforge version gives me the error "/usr/bin/ld: cannot find -ltpie" when I run "make x86_64". My CPU is an i5-8250U, so this seemed like the most reasonable target to choose. I guess I am just wondering if there are more recent instructions for setting up everything needed for factmsieve.py?
 2018-11-23, 22:42 #2 VBCurtis     "Curtis" Feb 2005 Riverside, CA 3·52·73 Posts If you're willing to do a little bit of testing for me, I believe the CADO package is either faster now or can be made faster after a few factorizations for GNFS at 155-digit size. It only compiles on linux, and can be found at http://cado-nfs.gforge.inria.fr/ If you run Windows, CADO is not worth pursuing; the first place I checked for Windows binaries timed out (jeff gilchrist's pages), but someone surely will supply you a link soon. If you do run linux, and are willing to give CADO a try, I have a much-improved parameters file for 512-bit size numbers that I can post; if you have a few to try I'd like to supply you a series of files so we can try to refine parameters and squeeze out another 5-10% of speed. CADO runs a server-client model, and it's pretty easy to connect clients from other machines for the sieve step (which is 80% or so of the total job length). You're looking at something near 140 ghz-days (in my case, 7 days on 6 cores of i7-3.3ghz) per factorization on factmsieve.py, with CADO somewhere like 0-10% faster at present.
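The 140 GHz-day estimate above is consistent with the hardware example given: GHz-days are just cores × clock × wall-clock days, which a one-liner can check (figures taken from the post above):

```shell
# GHz-days = cores x clock (GHz) x wall-clock days
# 6 cores of an i7 at 3.3 GHz sieving for 7 days:
awk 'BEGIN { printf "%.1f GHz-days\n", 6 * 3.3 * 7 }'   # -> 138.6 GHz-days, i.e. ~140
```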
 2018-11-23, 23:34 #3 md12345   Nov 2018 11 Posts Sure, I will check it out. I am doing this for a class project, so I currently plan on running it on a 6-core 3.6 GHz processor as well as a supercomputer cluster. I don't know the clock speed on the cluster, but it has about 64 cores, so my main goal is just to get some testing done on that in the next couple of days. I will look at the link you sent me and try to get it set up. I'll let you know when I have everything running.
 2018-11-24, 09:12 #4 VictordeHolland     "Victor de Hollander" Aug 2011 the Netherlands 49B16 Posts CADO-NFS is the way to go on Linux. It should compile with just: Code: make The install script should ask to install the package CMAKE, if you don't have that package installed already. If you're running Debian/Ubuntu, the user "EdH" has instructions on how to install various factoring programs (including YAFU, msieve, GMP-ECM, GGNFS sievers): https://mersenneforum.org/forumdisplay.php?f=152
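For anyone following along, the from-source steps on a Debian/Ubuntu box look roughly like the following. The package names and the small demo number are illustrative; check the CADO-NFS README for the exact prerequisites of your version:

```shell
# Prerequisites (typical Debian/Ubuntu package names; adjust for your distro)
sudo apt-get install build-essential cmake libgmp-dev python3
# Build in the CADO-NFS source directory
make
# Smoke test: factor a small demo number with the all-in-one driver
./cado-nfs.py 90377629292003121684002147101760858109247336549001090677693
```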
 2018-11-24, 22:27 #5 jasonp Tribal Bullet     Oct 2004 67338 Posts In GGNFS the main makefile attempts to build all the binaries in the package, of which only the sieve binary is not totally obsolete. You can try bypassing all of them by building src/lasieve4/Makefile directly, with Code:
cd src/lasieve4
ln -s piii asm
make
Last fiddled with by jasonp on 2018-11-24 at 22:36
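One caveat worth adding to the recipe above: the piii symlink selects 32-bit assembler code. The lasieve4 tree ships other target directories as well (athlon64 among them in typical GGNFS checkouts), and on a 64-bit CPU such as the OP's i5-8250U that is presumably the one to link; verify the directory names against your copy:

```shell
cd src/lasieve4
ls -d */             # list the available asm target directories
ln -s athlon64 asm   # 64-bit assembler code, instead of the 32-bit piii
make
```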
 2018-11-26, 03:16 #6 md12345   Nov 2018 11 Posts Quote: Originally Posted by VBCurtis If you do run linux, and are willing to give CADO a try, I have a much-improved parameters file for 512-bit size numbers that I can post... I have it up and running if you have the parameter files that you were referring to.

 2018-11-26, 04:02 #7 VBCurtis     "Curtis" Feb 2005 Riverside, CA 125438 Posts Here's the param file I used on a c155 last month: Code:
###########################################################################
# Parameter file for Cado-NFS
###########################################################################
# See params/params.c90 for an example which contains some documentation.
###########################################################################
# General parameters
###########################################################################
name = numbername.c155
N = {paste your number here}
slaves.hostnames = localhost
slaves.nrclients = {half the number of threads on your machine; use 8 for a hyperthreaded quad-core}
tasks.threads = {twice the number of nrclients above}
###########################################################################
# Polynomial selection
###########################################################################
tasks.polyselect.degree = 5
tasks.polyselect.P = 500000
tasks.polyselect.admin = 6300
tasks.polyselect.admax = 40e4
tasks.polyselect.adrange = 840
tasks.polyselect.incr = 210
tasks.polyselect.nq = 15625
tasks.polyselect.nrkeep = 100
tasks.polyselect.ropteffort = 16
###########################################################################
# Sieve
###########################################################################
lim0 = 18000000
lim1 = 30000000
lpb0 = 30
lpb1 = 31
tasks.sieve.mfb0 = 60
tasks.sieve.mfb1 = 61
tasks.sieve.ncurves0 = 19
tasks.sieve.ncurves1 = 22
tasks.I = 14
tasks.sieve.qrange = 5000
tasks.sieve.qmin = 5000000
tasks.sieve.rels_wanted = 130000000
###########################################################################
# Filtering
###########################################################################
tasks.filter.purge.keep = 175
tasks.filter.maxlevel = 28
tasks.filter.target_density = 165.0
###########################################################################
# Linear algebra
###########################################################################
tasks.linalg.bwc.interval = 1000
tasks.linalg.bwc.interleaving = 0
tasks.linalg.m = 64
tasks.linalg.n = 64
###########################################################################
# Characters
###########################################################################
tasks.linalg.characters.nchar = 50
A few numbers need to be filled in for your particular hardware and input number. If you want to allow other machines to connect, add these in the "general parameters" list near the top: Code:
server.whitelist = 169.254.0.0/16
server.ssl = no
server.port = 44433
Server port can be any 5-digit number, as far as I know. I chose one easy to type on the command line. My home LAN is 169.254.0.1; the /16 means "accept any last 16 bits of IP address", which in my case allows any machine on 169.254.x.y to connect. Client invocation to connect to server: Code:
./cado-nfs-client.py --server=http://cadomachinename.local:44433
Good luck! If it works, and you're interested in helping me refine parameters, I'll ask you to report some stats from the report that is printed to screen at the end of the factorization. If you pause and need to restart, I can provide the command to do so.
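To round out the invocations above: the server side is started by pointing the main driver at the filled-in parameter file. The file name here is a placeholder, and passing the server.* settings on the command line instead of in the file should also work, per the CADO-NFS documentation:

```shell
# Start the server/driver; "mynumber.c155.params" is a placeholder name
./cado-nfs.py mynumber.c155.params server.whitelist=169.254.0.0/16 server.ssl=no server.port=44433
```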
 2018-11-26, 04:09 #8 VBCurtis     "Curtis" Feb 2005 Riverside, CA 3×52×73 Posts Stats I gathered from my c155 run: Hardware was a dual-Xeon HP Z620 with 2x10-core CPUs running 30 threads with no other tasks running. I should have run 40, but I thought I'd be running another unrelated process. Q sieved: 5M to 29.57M (this is the only one not reported at the end; I happened to note the last Q sieved while it was running. Not an important stat to take, but gives you an idea of how long it will run) Polynomial E-value from poly select: 9.15e-11 Sieve time: 3.09M thread-seconds Matrix-solve time: 1.16M thread-seconds (runs only on host CADO machine, remote connections not usable) Poly select time: 106K thread-seconds So, when poly select completes, you'll be roughly 1/40th done with the job; when filtering completes successfully you'll be about 75% done. A little more poly select time might result in a faster overall job; you can change tasks.polyselect.P to 600000 to spend about 15% more time in poly select (the hope is that you'd save 1-2% of sieve time, as 1% of 3M is more savings than 15% of 100k costs). Last fiddled with by VBCurtis on 2018-11-26 at 04:10
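Thread-seconds translate to wall-clock time by dividing by the thread count; for instance, the 3.09M thread-second sieve figure above on the same 30 threads:

```shell
# wall-clock days = thread-seconds / threads / 86400 seconds per day
awk 'BEGIN { printf "%.2f days\n", 3090000 / 30 / 86400 }'   # -> 1.19 days of sieving
```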
 2018-11-26, 11:45 #9 VictordeHolland     "Victor de Hollander" Aug 2011 the Netherlands 32×131 Posts VBCurtis, Any idea how long a 145-150 digit GNFS would take on a 16C/32T Xeon (dual E5-2650)? Because the machine is almost done with its current task (a few DC LL tests), I could run a few tasks after that :).
 2018-11-26, 17:14 #10 VBCurtis     "Curtis" Feb 2005 Riverside, CA 125438 Posts I've run two C145s; one took 1.4M thread-seconds, the other 1.05M. The main difference is that I used I=13 on the former, I=14 on the latter (equivalent of 13e vs 14e siever). A C147 took 1.68M thread-seconds. I can post suggested params files for c145 and c150 this evening.
 2019-03-14, 12:57 #11 nkyaadog   Mar 2019 18 Posts VBCurtis, What do you think of the "Factoring As A Service" project in general, and their optimized set of parameters in particular (taken from: github.com/eniac/faas/blob/master/ec2/vars/factor.yml)? Code:
cado:
  name: faas155
  N: '{{ custom.N }}'
  alim: 15246811
  rlim: 31940624
  lpbr: 28
  lpba: 28
  tasks:
    mpi: "8x8"
    polyselect:
      threads: 2
      degree: 5
      P: 500000
      admax: 2e7
      incr: 60
      nq: 1000
    sieve:
      threads: 2
      mfbr: 62
      mfba: 61
      rlambda: 2.24
      alambda: 2.20
      ncurves0: 23
      ncurves1: 15
      qrange: 2500
      I: 14
    msieve:
      target_density: 70
    filter:
      purge:
        keep: 160
      maxlevel: 25
      ratio: 1.1
      merge:
        forbw: 3

