Attached an excel-table (updated today) which gives estimations for SNFS and GNFS difficulty from 86 up to 295, based on factorizations I have done and factorizations that have been done by others (individuals, projects like NFS@Home, Aliquot, RSALS, Cunningham).
[QUOTE=em99010pepe;261566]Attached an excel-table (updated today) which gives estimations for SNFS and GNFS difficulty from 86 up to 295, based on factorizations I have done and factorizations that have been done by others (individuals, projects like NFS@Home, Aliquot, RSALS, Cunningham).[/QUOTE]
No disrespect intended, but your spreadsheet does not convey the difficulty. Instead of "raw relations", an appropriate measure would be sieve_area * (#special_q needed) * loglog(pmax), where pmax is the largest prime in the factor base. This expression represents the amount of sieving needed.
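The measure suggested above is easy to compute once the three quantities are known. A minimal sketch (the function name and the example numbers are illustrative, not taken from any actual job):

```python
import math

def sieving_difficulty(sieve_area, num_special_q, pmax):
    """Rough sieving-effort measure suggested in the post above:
    sieve_area * (#special_q needed) * loglog(pmax), where pmax is
    the largest prime in the factor base."""
    return sieve_area * num_special_q * math.log(math.log(pmax))

# Purely illustrative numbers for a mid-sized GNFS job:
effort = sieving_difficulty(sieve_area=2**27, num_special_q=4_000_000, pmax=2**28)
```

The absolute value is meaningless on its own; the point is that the ratio between two such values compares the sieving work of two factorizations.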
[QUOTE=R.D. Silverman;261572]No disrespect intended, but your spreadsheet does not convey the difficulty.
Instead of "raw relations", an appropriate measure would be sieve_area * #special_q needed * loglog pmax where pmax is the largest prime in the factor base. This expression represents the amount of sieving needed.[/QUOTE] But that expression isn't given in the msieve.log output file, which is what I am using to feed the data into the spreadsheet. I don't know where to find those variables on mersenneforum for each factorization done.
[QUOTE=em99010pepe;261574]But that expression isn't given by msieve.log output file and I am using it to feed the data into the spreadsheet. I don't know where to find those variables in the mersenneforum for each factorization done.[/QUOTE]I think you'll find all the needed information in the files left behind after a msieve factorization.
The sieve area is a simple function of x in the name gnfs-lasieve4Ixe --- look in the source code for the function. I could tell you what it is, but it's educational for you to find it out for yourself.

The special-q is recorded in msieve's log files to the granularity of the sieving job sizes. That's easily precise enough for your purposes.

The maximum prime in the factor base should appear in the final output file, such as this one from "g130-comp.txt" --- a c130 GNFS factorization:
[CODE]Factor base limits: 7200000/7200000[/CODE]
That said, for relatively crude estimates suitable for plugging into msieve, where most of the other parameters are chosen for you, your approach is very likely good enough and, as you imply, rather simpler.

Paul
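Extracting the factor base limit from the final output file and taking loglog of it can be sketched as below. The line format is copied from the "g130-comp.txt" excerpt in the post above; treat this as a sketch for that format, not a parser guaranteed to match every msieve version:

```python
import math
import re

def loglog_pmax(log_text):
    """Pull the factor base limits from an msieve-style output line such as
    'Factor base limits: 7200000/7200000' and return loglog of the larger
    limit, for use in the difficulty measure discussed earlier."""
    m = re.search(r"Factor base limits:\s*(\d+)/(\d+)", log_text)
    if m is None:
        raise ValueError("no factor base limits line found")
    pmax = max(int(m.group(1)), int(m.group(2)))
    return math.log(math.log(pmax))
```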
Adding spairs.out.T# files to spairs.out
My memory is giving me fits and the search routine is not friendly this time!
It seems (memory-wise) that I had a discussion in one of the threads about adding spairs.out.T1, ...T2, ...T3, etc. to the spairs.out file by simply placing the file(s) in the directory with spairs.out.T0. This did not work for me in my present setup.

I have a WinXP machine running AliWin/Aliqueit/factmsieve.py and it is proceeding along rather slowly with gnfs-lasieve4I13e, but it is progressing. I copied test.job.T0 to another WinXP machine and modified the q0 and qintsize values to sieve a distant range from the original machine. I sent the relations to spairs.out.T1 and when it finished, I copied the file into the original location on the first machine. This did not invoke an automatic inclusion when spairs.out.T0 was added to spairs.out. :sad:

I have since grabbed a copy of spairs.out.gz, moved it to a third (linux, this time) machine, uncompressed it, manually added spairs.out.T1 to the end, recompressed it and placed it back in the original directory with hopes that I didn't corrupt the file.

How bad is my recollection and what have I possibly missed in my procedure that would make it easier? Thanks for any/all comments.
OK, that didn't work. I'm giving a try to appending spairs.out.T1 to test.dat, which makes more sense to me this morning...
[QUOTE=EdH;262448]OK, that didn't work. I'm giving a try to appending spairs.out.T1 to test.dat, which makes more sense to me this morning...[/QUOTE](Sorry, I didn't get a chance to answer this yesterday...)
That will work, since test.dat is where the relations need to end up. Next time, rename the relations from the "other" machine "spairs.add". If you run a job on only a single machine, the script only looks for relations coming from one machine, but there is always a check for a file called spairs.add on the #1 machine no matter how many machines are specified.....
[QUOTE=schickel;262453](Sorry, I didn't get a chance to answer this yesterday...)
That will work, since test.dat is where the relations need to end up. Next time, rename the relations from the "other" machine "spairs.add". If you run a job on only a single machine, the script only looks for relations coming from one machine, but there is always a check for a file called spairs.add on the #1 machine no matter how many machines are specified.....[/QUOTE] Ahh, that's the file! Too simple for me to remember - [B].add[/B]. :sad: I have reverted back to the original test.dat and changed spairs.out.T1 to spairs.add. It should cycle soon and I can see how well it works. I also have another batch finishing up soon on the second machine. Hopefully these will all add in nicely... Thanks!
Polynomial selection with factmsieve.py
I have a few questions about polynomial selection for GNFS using GGNFS+msieve+factmsieve.py; I'm not sure if this is the right place to post such questions.
Is it possible to have factmsieve.py use multiple CPU cores for GNFS polynomial selection? I assume that msieve supports it. Perhaps everyone is using CUDA to do polynomial selection so it's a moot point? However, I don't have CUDA cards on my systems, and I think that there's something big to be said for a fire-and-forget style tool.

The problem is that I currently end up having cores sit idle while polynomial selection is going on. If you use a lot of threads during sieving then this becomes very wasteful.

A simple solution would be to have factmsieve.py quit after performing polynomial selection. That way it would be possible to manage resources properly, i.e. one could call factmsieve.py once, expecting one CPU to be used, and when the program quits call it again, expecting multiple threads. Is there a command-line option for factmsieve.py to do this?
[QUOTE=D. B. Staple;263955]Is it possible to have factmsieve.py use multiple CPU cores for GNFS polynomial selection? I assume that msieve supports it. Perhaps everyone is using CUDA to do polynomial selection so it's a moot point? However, I don't have CUDA cards on my systems, and I think that there's something big to be said for a fire-and-forget style tool.[/QUOTE]
What size composites are you planning on factoring? If they are smaller than 155 digits then the CUDA version of msieve polynomial selection would probably be faster for you, otherwise use the CPU version. msieve currently does not have multi-threading capability for the polynomial selection, but you can invoke multiple msieve instances with different ranges for the leading algebraic coefficient and take the best poly found from all instances. If I've understood right, Yafu already does this, so you may want to try that first.
[QUOTE=jrk;263956]... you can invoke multiple msieve instances with different ranges for the leading algebraic coefficient and take the best poly found from all instances.
If I've understood right, Yafu already does this, so you may want to try that first.[/QUOTE] Yep... yafu will do multi-threaded poly selection automatically. Behind the scenes it does just what you suggest - runs multiple instances of msieve with different ranges of leading coefficient.