tpie was part of a very old attempt to implement out-of-core NFS filtering, using an external parallel data structures library.
More to the point, the latest version of the GGNFS source has not needed this dependency for years. Do *not* use the snapshots on the sourceforge page, they are going on 4 years old. |
[quote=jasonp;225112]tpie was part of a very old attempt to implement out-of-core NFS filtering, using an external parallel data structures library.
More to the point, the latest version of the GGNFS source has not needed this dependency for years. Do *not* use the snapshots on the sourceforge page, they are going on 4 years old.[/quote] Thanks! That explains a few things. The current version compiled with no errors. Is there a way to get sourceforge to remove/update the outdated file(s)? |
a tiny bug in SVN-400 (windows binary):
when I type "gnfs-lasieve4 -v -M1 -a <inputfile> -o <outputfile> -f <start of range> -c <length of range>", the first line says: [COLOR="Blue"]gnfs-lasieve4I16e: L1_BITS=15, SVN $Revision$ [/COLOR] It seems that the routine which prints this info to the screen somehow doesn't pick up the version number. |
We can call it a [I]feature[/I] of sourceforge.
By using an svn call, one gets this variable filled in by the server; that is expected and documented in the SVN docs. However, by downloading a tarball (which is a convenience feature of sourceforge), one gets the plain guts of the repository. That's how they implemented it. [SIZE=1]Corollary: it appears to indicate how Jeff retrieved the source.[/SIZE] |
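For reference, svn only expands `$Revision$` in a working copy when the `svn:keywords` property is set on the file; a raw export such as the sourceforge tarball keeps the literal placeholder, which is exactly what the siever banner then prints. A small Python illustration of the substitution svn performs (the expansion logic here is a simplified sketch, not svn's actual code):

```python
import re

def expand_keywords(text, revision=None):
    """Roughly mimic svn's $Revision$ keyword substitution.

    With no revision available (e.g. a tarball export), the literal
    placeholder is left untouched -- which is what gnfs-lasieve4
    ends up printing in its banner."""
    if revision is None:
        return text
    return re.sub(r"\$Revision\$", f"$Revision: {revision} $", text)

banner = "gnfs-lasieve4I16e: L1_BITS=15, SVN $Revision$"
print(expand_keywords(banner))       # tarball: placeholder survives
print(expand_keywords(banner, 400))  # svn checkout: expanded to $Revision: 400 $
```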
Oops. My SVN client was broken at the time so I grabbed it by tarball. I guess when SVN401 is created I can fix it. :smile:
|
The Win-64 svn400 zip is missing factmsieve.py. I assume the one in the 32 bit zip file will work, but you may want to update the zip to make it consistent with the directions.
William |
[QUOTE=wblipp;254423]The Win-64 svn400 zip is missing factmsieve.py. I assume the one in the 32 bit zip file will work, but you may want to update the zip to make it consistent with the directions.
[/QUOTE] Thanks for pointing that out, I didn't mean to include Brian's script in the archive at all (unless it is now bundled in the GGNFS SVN tree). It might work but I would suggest using the latest version directly from him: [url]http://gladman.plushost.co.uk/oldsite/computing/factoring.php[/url] Jeff. |
So I just checked, and factmsieve.py is not bundled with the archive; people should be getting the latest version from Brian Gladman's release site.
I just rebuilt GGNFS with the latest SVN, which has some fixes that Brian submitted. It also uses MPIR 2.3.0 and the latest SP1 of VS2010. You can get it in the usual spot: [url]http://gilchrist.ca/jeff/factoring/[/url] |
Now that factmsieve.py is more mature, would it make sense to add it to GGNFS?
|
After successfully factoring more than 180 numbers with GGNFS (for aliquot sequences), I just had a C110 where the poly selection got trapped in an infinite loop.
I don’t know if that is a bug or if the candidate is just too special. I did not read through the whole version history, so I don’t know whether this problem is already fixed in the most recent version.

[code]n: 47616645712407218217737127420038610090866473268816600003542966360937022661574893441593293819912946473666400641[/code]

[code]$ perl factMsieve.pl test181.n
->  ___________________________________________________________
-> | This is the factMsieve.pl script for GGNFS.              |
-> | This program is copyright 2004, Chris Monico, and subject|
-> | to the terms of the GNU General Public License version 2.|
-> |__________________________________________________________|
-> This is client 1 of 1
-> Using 1 threads
-> Working with NAME=test181...
-> Error: Polynomial file test181.poly does not exist!
-> Found n=47616645712407218217737127420038610090866473268816600003542966360937022661574893441593293819912946473666400641.
-> Attempting to run polyselect...
-> Selected default polsel parameters for 110 digit level.
-> Selected default factorization parameters for 110 digit level.
-> Selected lattice siever: ./gnfs-lasieve4I13e
-> Searching leading coefficients from 1 to 1000.
=> "./pol51m0b" -b test181.polsel.intel34.5504 -v -v -p 4 -n 2.52E+016 -a 0 -A 1 > test181.polsel.intel34.5504.log
=> "./pol51opt" -b test181.polsel.intel34.5504 -v -v -n 9.98E+014 -N 9.16E+012 -e 7.83E-010 > test181.polsel.intel34.5504.log
-> Best score so far: 0.000000e+00 (goodScore=7.830000e-10)
-> =====================================================
-> Searching leading coefficients from 1001 to 2000.
=> "./pol51m0b" -b test181.polsel.intel34.5504 -v -v -p 4 -n 2.52E+016 -a 1 -A 2 > test181.polsel.intel34.5504.log
=> "./pol51opt" -b test181.polsel.intel34.5504 -v -v -n 9.98E+014 -N 9.16E+012 -e 7.83E-010 > test181.polsel.intel34.5504.log[/code]

The output stops here (due to the loop):

[code]----------------------------------------------------
| pol51opt GNFS polynomial selection program       |
| This program is copyright (c) 2005, by Thorsten  |
| Kleinjung and Jens Franke, and is subject to the |
| terms of the GNU GPL version 2.                  |
| This program is part of gnfs4linux.              |
----------------------------------------------------
1020 0 -214252706096084 10839305219668178720 1190443816100978084342 1609549358144032352811
10245606547 2156908517366755849504
area: [-1003520,1003519]x[-100,100]
norm: -1.#IOe+000 alpha_proj: -1.239 skewness: 1.#IO
limit: -1.#IND00, lim0: -1.#IND00
cut: 1
PPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPP...[/code]

It is just adding “P” forever.

[code]1020 10245606547 2156908517366755849504
1020 38038100243 2156908517410100137421
1020 19923081457 2156908517378324257712
1620 76761364933 1966295371258698882437
1620 117504136673 1966295371182252559987
1620 160993094317 1966295370955641336536
1620 141627323783 1966295371220072605225
1680 275264256839 1952045351965325263265[/code]

These lines keep repeating forever in the .cand file:

[code]BEGIN POLY #skewness 1.#J norm -1.#Je+000 alpha 0.46 Murphy_E 1.#Re+000
X5 1020
X4 0
X3 -214252706096084
X2 10839304195107524020
X1 216881285271105480989302
X0 2164502444897244974126606891
Y1 10245606547
Y0 -2156908517366755849504
M 46293315052448610556527534032778512148895895636667467393481923248499478991328042289734371365773746202046264094
END POLY
BEGIN POLY #skewness 1.#J norm -1.#Je+000 alpha -0.08 Murphy_E 1.#Re+000
X5 1020
X4 0
X3 -214252706096084
X2 10839304195107524020
X1 216881285271115726595849
X0 2164500287988727607370757387
Y1 10245606547
Y0 -2156908517366755849504
M 46293315052448610556527534032778512148895895636667467393481923248499478991328042289734371365773746202046264094
END POLY[/code]

After some time I aborted the calculation. No, you don’t need to factor that one anymore: I started aliqueit again to factor it with Msieve, but then ECM found the overdue C36 (the first time around I had tried about 500 curves at B1=3M).

[code]898920: 600
[Apr 11 2011, 22:45:41] c110: running 904 ecm curves at B1=1e6...
Using B1=1000000, B2=1045563762, polynomial Dickson(6), sigma=3870819699
Step 1 took 7953ms
********** Factor found in step 1: 522912492390210383355069473722603693
[Apr 11 2011, 23:35:35] *** prp36 = 522912492390210383355069473722603693
[Apr 11 2011, 23:35:35] Cofactor 91060447790706988841339564306090074789844674475089593547815888383509648037 (74 digits)
[Apr 11 2011, 23:35:35] *** prp74 = 91060447790706988841339564306090074789844674475089593547815888383509648037[/code]

Something else I am wondering about (I must confess that I don’t know many details of the algorithm): GGNFS does some lattice sieving to find some millions of relations. After that it tries to find out whether there are enough relations, doing relation filtering, duplicate removal and singleton removal. If there are not enough relations, it continues lattice sieving to get more, and so on. Now my question: does it need to repeat the relation filtering, duplicate removal and singleton removal again and again (between the lattice-sieving runs)? If GGNFS sieved enough relations right at the beginning, would it then be faster, or does it have to do the same amount of filtering and singleton removal at the end either way? (I guess the latter, but I am not sure.)
When changing the configuration of the perl file to run GGNFS on more than one core, only the lattice-sieving part uses more threads. During poly search, relation filtering, singleton removal and post-processing, only one thread is running. Would it be possible to automatically split the poly search into more threads? And could the remaining threads continue lattice sieving while one thread does the relation filtering, duplicate removal and singleton removal to find out whether there are enough relations? Maybe that part is unwieldy to implement. |
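The poly search above is in principle embarrassingly parallel: the log shows pol51m0b being called with `-a`/`-A` leading-coefficient range arguments, so a driver could hand disjoint ranges to several worker processes. A minimal sketch of that idea, where `run_pol51` is a hypothetical stand-in for the real subprocess call (this is not how factmsieve actually does it):

```python
# Sketch: splitting the pol51 leading-coefficient search across cores.
from concurrent.futures import ProcessPoolExecutor

def run_pol51(block):
    """Hypothetical worker. In a real driver this would invoke, e.g.,
    subprocess.run(["./pol51m0b", "-b", name, "-a", str(lo), "-A", str(hi)]).
    Here it just echoes its range so the splitting can be demonstrated."""
    lo, hi = block
    return (lo, hi)

def split_ranges(start, end, n_workers):
    """Divide [start, end) into n_workers contiguous, non-overlapping blocks."""
    step = (end - start + n_workers - 1) // n_workers
    return [(lo, min(lo + step, end)) for lo in range(start, end, step)]

if __name__ == "__main__":
    blocks = split_ranges(0, 2000, 4)
    with ProcessPoolExecutor(max_workers=4) as pool:
        done = list(pool.map(run_pol51, blocks))
    print(done)  # each worker searched one quarter of the coefficient range
```

Since every coefficient range is searched independently and only the best-scoring polynomial is kept at the end, no coordination between workers is needed beyond collecting their results.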
Yes, the filtering must start over at the beginning each time it is run. If the filtering is always running before enough relations are found, you may want to edit the factmsieve script so that it requires more relations before it attempts to filter. Though this may also mean that you'll end up spending more time sieving when you don't need to, so do some tests.
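To illustrate why raising the threshold helps: each filtering attempt reprocesses the entire relation set from scratch, so filtering too early just burns full passes that find nothing. A toy model of such a sieve/filter loop (all names here are illustrative, not the actual factmsieve variables):

```python
# Toy model of a factmsieve-style sieve-then-filter loop, showing that
# a low filtering threshold causes many wasted full-reprocessing passes.

def sieve_batch(batch_size=100):
    """Stand-in for one lattice-sieving run; returns new relations."""
    return ["rel"] * batch_size

def filter_relations(all_rels, needed):
    """Stand-in for filtering: it must reprocess *all* relations each
    time it runs, so its cost grows with the total found so far."""
    work = len(all_rels)  # one full pass over everything
    return len(all_rels) >= needed, work

def factor_driver(needed=1000, threshold_factor=1.0):
    rels, total_filter_work, passes = [], 0, 0
    while True:
        rels += sieve_batch()
        # Only attempt the expensive restart-from-scratch filtering
        # once the relation count passes the threshold.
        if len(rels) >= threshold_factor * needed:
            passes += 1
            done, work = filter_relations(rels, needed)
            total_filter_work += work
            if done:
                return passes, total_filter_work

print(factor_driver(threshold_factor=0.5))  # -> (6, 4500): five wasted passes
print(factor_driver(threshold_factor=1.0))  # -> (1, 1000): one pass suffices
```

The trade-off jasonp mentions is visible here: a higher threshold cuts filtering work, but in the real script it can also mean sieving past the point where enough relations already existed.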
Not sure why you didn't get an msieve polynomial for your number. I just ran a quick polynomial selection for it (both with and without CUDA) and got lots of good polynomials after a few minutes, while the time limit is an hour, so you should have gotten some. The pol51opt poly you posted is really strange, and is probably a bug. |