#584
Sep 2004
2×5×283 Posts
Attached is an Excel table (updated today) which gives difficulty estimates for SNFS and GNFS from 86 up to 295, based on factorizations I have done and on factorizations done by others (individuals, and projects such as NFS@Home, Aliquot, RSALS, and Cunningham).
#585
Nov 2003
2²·5·373 Posts
Quote:
Instead of "raw relations", an appropriate measure would be sieve_area * #special_q needed * loglog pmax, where pmax is the largest prime in the factor base. This expression represents the amount of sieving needed.
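A minimal sketch of this proposed work measure in Python; the specific numbers plugged in below are placeholders for illustration, not values from any real job:

```python
import math

# Sketch of the sieving-work measure proposed above:
#     work ~ sieve_area * (#special_q needed) * log(log(pmax))
# where pmax is the largest prime in the factor base.
def sieving_work(sieve_area, num_special_q, pmax):
    return sieve_area * num_special_q * math.log(math.log(pmax))

# Hypothetical numbers, for illustration only:
work = sieving_work(sieve_area=2**28, num_special_q=400_000, pmax=7_200_000)
print(f"{work:.3e}")
```

The loglog factor grows extremely slowly, so in practice the estimate is dominated by the sieve area and the number of special-q.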
#586
Sep 2004
2·5·283 Posts
Quote:
Last fiddled with by em99010pepe on 2011-05-16 at 21:54
#587
Bamboozled!
May 2003
Down not across
2×5,393 Posts
Quote:
The sieve area is a simple function of x in the name gnfs-lasieve4Ixe --- look in the source code for the function. I could tell you what it is, but it's educational for you to find out for yourself.

The special-q is recorded in msieve's log files to a granularity of the sieving job sizes. That's easily precise enough for your purposes. The maximum prime in the factor base should appear in the final output file, such as this one from "g130-comp.txt" --- a c130 GNFS factorization:

Factor base limits: 7200000/7200000

That said, for relatively crude estimates suitable for plugging into msieve, where most of the other parameters are chosen for you, your approach is very likely good enough and, as you imply, rather simpler.

Paul |
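If you want to script the crude estimate, the factor-base limit can be scraped from the output line quoted above; a small sketch (the line format is as shown in this post, but treat the parsing as an assumption to check against your own output files):

```python
import re

# Pull the factor base limits out of an msieve output line such as
#     "Factor base limits: 7200000/7200000"
# Returns the two limits as ints, or None if the line is absent.
def factor_base_limits(text):
    m = re.search(r"Factor base limits:\s*(\d+)/(\d+)", text)
    return (int(m.group(1)), int(m.group(2))) if m else None

print(factor_base_limits("Factor base limits: 7200000/7200000"))
# → (7200000, 7200000)
```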
#588
"Ed Hall"
Dec 2009
Adirondack Mtns
3840₁₀ Posts
My memory is giving me fits and the search routine is not friendly this time!
It seems (memory-wise) that I had a discussion in one of the threads about adding spairs.out.T1, ...T2, ...T3, etc. to the spairs.out file by simply placing the file(s) in the directory with spairs.out.T0. This did not work for me in my present setup.

I have a WinXP machine running AliWin/Aliqueit/factmsieve.py, and it is proceeding along rather slowly with gnfs-lasieve4I13e, but it is progressing. I copied test.job.T0 to another WinXP machine and modified the q0 and qintsize values to sieve a range distant from the original machine's. I sent the relations to spairs.out.T1 and, when it finished, I copied the file into the original location on the first machine. This did not trigger automatic inclusion when spairs.out.T0 was added to spairs.out.

I have since grabbed a copy of spairs.out.gz, moved it to a third machine (Linux, this time), uncompressed it, manually appended spairs.out.T1 to the end, recompressed it, and placed it back in the original directory, hoping that I didn't corrupt the file.

How bad is my recollection, and what have I possibly missed in my procedure that would make it easier? Thanks for any/all comments.
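The manual merge described here amounts to the following shell steps; the file names match the post, but the contents below are stand-ins so the sketch is self-contained (run it in a scratch directory):

```shell
cd "$(mktemp -d)"
printf 'relA\n' > spairs.out && gzip spairs.out   # stand-in for the original spairs.out.gz
printf 'relB\n' > spairs.out.T1                   # stand-in for the machine-2 relations
gunzip spairs.out.gz                              # uncompress
cat spairs.out.T1 >> spairs.out                   # append the extra relations
gzip spairs.out                                   # recompress, to be placed back in the job directory
```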
#589
"Ed Hall"
Dec 2009
Adirondack Mtns
2⁸·3·5 Posts
OK, that didn't work. I'm now trying to append spairs.out.T1 to test.dat, which makes more sense to me this morning...
#590
"Frank <^>"
Dec 2004
CDP Janesville
2·1,061 Posts
Quote:
That will work, since test.dat is where the relations need to end up. Next time, rename the relations file from the "other" machine to "spairs.add". If you run a job on only a single machine, the script only looks for relations coming from that one machine, but there is always a check for a file called spairs.add on the #1 machine, no matter how many machines are specified...
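So the hand-off from a helper machine reduces to a single copy; a sketch with made-up paths (spairs.add is the file name the script polls for, per this post):

```shell
# Simulate machine #1's job directory (paths are placeholders):
mkdir -p /tmp/jobdir
# Relations sieved on the helper machine:
printf 'rel1\nrel2\n' > /tmp/spairs.out.T1
# Drop them into machine #1's directory under the name factmsieve.py checks for:
cp /tmp/spairs.out.T1 /tmp/jobdir/spairs.add
```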
#591
"Ed Hall"
Dec 2009
Adirondack Mtns
F00₁₆ Posts
Quote:
I have reverted to the original test.dat and changed spairs.out.T1 to spairs.add. It should cycle soon, and I can see how well it works. I also have another batch finishing up soon on the second machine. Hopefully these will all add in nicely... Thanks!
#592
Nov 2007
Halifax, Nova Scotia
2³·7 Posts
I have a few questions about polynomial selection for GNFS using GGNFS+msieve+factmsieve.py; I'm not sure if this is the right place to post such questions.

Is it possible to have factmsieve.py use multiple CPU cores for GNFS polynomial selection? I assume msieve supports it. Perhaps everyone is using CUDA for polynomial selection, so it's a moot point? However, I don't have CUDA cards on my systems, and I think there's something big to be said for a fire-and-forget style tool.

The problem is that I currently end up having cores sit idle while polynomial selection is going on. If you use a lot of threads during sieving, this becomes very wasteful.

A simple solution would be to have factmsieve.py quit after performing polynomial selection. That way it would be possible to manage resources properly, i.e. one could call factmsieve.py once, expecting one CPU to be used, and when the program quits, call it again, expecting multiple threads. Is there a command-line option for factmsieve.py to do this?
#593
May 2008
447₁₆ Posts
Quote:
msieve currently does not have multi-threading capability for polynomial selection, but you can invoke multiple msieve instances with different ranges for the leading algebraic coefficient and take the best polynomial found across all instances. If I've understood right, Yafu already does this, so you may want to try that first.
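Splitting the leading-coefficient range into disjoint pieces is easy to script; a sketch is below. The printed command lines are illustrative only: the exact `-np` range syntax is an assumption to verify against your msieve build's documentation.

```python
# Sketch: divide a leading-coefficient search range into disjoint pieces,
# one per msieve instance, then print one command line per piece.
# The "-np" range syntax shown is an assumption, not verified here.
def split_range(lo, hi, n):
    """Return n disjoint (start, end) pairs covering [lo, hi]."""
    step = (hi - lo + 1) // n
    starts = [lo + i * step for i in range(n)]
    ends = [s + step - 1 for s in starts[:-1]] + [hi]
    return list(zip(starts, ends))

for start, end in split_range(1, 1_000_000, 4):
    print(f'msieve -np "{start},{end}" ... &')
```

Each instance writes its own best polynomial; you would then compare the candidates and keep the highest-scoring one, as described above.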
#594
"Ben"
Feb 2007
7×503 Posts
Yep... yafu will do multi-threaded poly selection automatically. Behind the scenes it does just what you suggest - runs multiple instances of msieve with different ranges of leading coefficient.
Last fiddled with by bsquared on 2011-06-17 at 00:29 |
Similar Threads

| Thread | Thread Starter | Forum | Replies | Last Post |
| Installation of GGNFS | LegionMammal978 | Msieve | 17 | 2017-01-20 19:49 |
| Running other programs while running Prime95. | Neimanator | PrimeNet | 14 | 2013-08-10 20:15 |
| Error running GGNFS+msieve+factmsieve.py | D. B. Staple | Factoring | 6 | 2011-06-12 22:23 |
| GGNFS or something better? | Zeta-Flux | Factoring | 1 | 2007-08-07 22:40 |
| ggnfs | ATH | Factoring | 3 | 2006-08-12 22:50 |