#441
Oct 2006
How do you input a polynomial from an earlier factorization effort? (I'm using the two-files idea and will combine them when optimal; however, polynomial selection has already been completed on my first file, and I'd like to reuse that polynomial to save time. If this process only applies to QS, please tell me so I don't do something stupid.) If it makes any difference, I'm using an earlier version of Msieve, probably in the 1.35-1.38 range; I will update once this run is over. Thanks!
#442
Tribal Bullet
Oct 2004
Quote:
Note that unlike the QS code, the NFS code needs to be told which part of the search space to explore, using arguments to -ns.

Last fiddled with by jasonp on 2009-04-25 at 20:11. Reason: -np -> -ns
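Splitting the sieving work between independent instances can be sketched with a small helper. This is a hypothetical script, not part of msieve; it assumes the `-ns first,last` range syntax for the NFS siever and made-up file names, and simply carves one big range into disjoint pieces so no region is sieved twice:

```python
def split_sieve_ranges(start, end, n_instances):
    """Partition [start, end) into contiguous, non-overlapping
    sub-ranges, one per msieve instance, so that no part of the
    search space is sieved twice."""
    step = (end - start + n_instances - 1) // n_instances  # ceiling division
    ranges = []
    for i in range(n_instances):
        lo = start + i * step
        hi = min(lo + step, end)
        if lo < hi:
            ranges.append((lo, hi))
    return ranges

# Two cores, so two instances sieving disjoint halves
# (the file names and range values here are illustrative):
for lo, hi in split_sieve_ranges(20_000_000, 28_000_000, 2):
    print(f"msieve -s worker_{lo}.dat -nf msieve.fb -ns {lo},{hi}")
```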
#443
Oct 2006
Would I still be able to combine the two .dat files afterward and use the combined relations to complete the factorization?
Thanks!
#444
Mar 2008
If you use the same polynomial file (or factor base file, as Jason put it), then you can combine the .dat files for the postprocessing stages. The main problem you might run into is that you have to tell each instance to sieve a different area; otherwise you produce duplicate relations, which will happily get filtered out (i.e., wasted effort).
#445
Oct 2006
Okay, thanks!
The 'wasted effort' is actually acceptable for me: when I run msieve, only half of the computer's resources (one core) are used, so running two instances would double production. I would combine the two files at the very end, once I know I have enough relations (including the duplicated ones), having taken much less total time.
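The combine-and-deduplicate step described above can be illustrated with a toy merge. This is a simplified sketch of the idea behind msieve's duplicate-removal pass, not its actual code: it assumes each NFS relation line begins with its "a,b" coordinate pair before the first colon, and keeps only the first occurrence of each pair when concatenating two files' worth of lines:

```python
def merge_dat_files(lines_a, lines_b):
    """Concatenate two streams of relation lines, keeping only the
    first occurrence of each (a,b) pair.  A relation line is assumed
    to look like 'a,b:...'; msieve's real duplicate removal is
    disk-based and hashed, this only shows the principle."""
    seen = set()
    merged = []
    for line in list(lines_a) + list(lines_b):
        key = line.split(':', 1)[0]   # the leading "a,b" coordinates
        if key not in seen:
            seen.add(key)
            merged.append(line)
    return merged
```

Relations sieved by the two instances in overlapping regions simply collapse to one copy, which is exactly the "wasted effort" being discussed: the duplicate was sieved twice but counts once.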
#446
Just call me Henry
"David"
Sep 2007
Cambridge (GMT/BST)
What is the reason that msieve does several disk-based singleton-removal passes before doing an in-memory pass?
Is this to keep compatibility with low-memory computers for small numbers, or is it faster to do it that way? I am pretty certain that a 2 GB machine could do in-memory singleton removal on more than 5M relations.
#447
Tribal Bullet
Oct 2004
It's not faster to start off with disk-based passes; in fact, for large runs over 75% of the filtering time goes into the duplicate and singleton removal precisely because they are disk-based.
I don't have a good feeling for what to do here. Windows will tell you how much memory the machine has, but there's no portable way to do so on Unix systems. Even on Windows, should msieve assume it can use all of your memory? If yes, it actually is not hard to estimate the memory use of the singleton removal after one in-memory pass, because the in-memory pass uses the disk once to format the data properly :) Just reusing the information from there can save lots of time if the code determines some passes have to be repeated.
#448
"Serge"
Mar 2008
Phi(4,2^7658614+1)/2
You could leave the memory determination as a burden for the user (yet another command-line parameter, with the old behaviour if it is not supplied)...
On Linux you could also spawn "top -n 1" and parse its output. Dirty, yes, I know, but it should work.
#449
Just call me Henry
"David"
Sep 2007
Cambridge (GMT/BST)
#450
Just call me Henry
"David"
Sep 2007
Cambridge (GMT/BST)
Based on the conversation above, I have been looking at the filtering code of msieve.
Would commenting out the loop in void nfs_purge_singletons_initial(msieve_obj *obj, factor_base_t *fb, filter_t *filter) remove the disk-based singleton removal without any side effects (except higher memory usage, obviously)?
#451
Tribal Bullet
Oct 2004
In order to arrange the singleton removal so that no disk-based passes happen, you should comment out the loop and also the call to purge_singletons_pass1, then add code to rename the duplicate (".d") file into a singleton (".s") file. That file just contains the line numbers of relations that got pruned. This will make the code go straight to the in-memory pass; if you have a lot of excess relations, you may need an extra in-memory pass or two, because the heuristics in the filtering assume that most of the singletons have been removed by the time the in-memory pass happens.
Maybe I should make that automatic for small problems.
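The in-memory pass being discussed amounts to repeatedly deleting relations that contain an ideal appearing in only one surviving relation; each deletion can create new singletons, which is why multiple passes may be needed. A toy version of that fixed-point loop, with relations represented as sets of ideal identifiers (the real code works with packed hashtables on millions of relations, not Python sets):

```python
from collections import Counter

def purge_singletons(relations):
    """Repeatedly remove relations containing an ideal that occurs
    in only one surviving relation, until no singletons remain.
    'relations' is a list of sets of ideal identifiers."""
    relations = list(relations)
    while True:
        counts = Counter(i for rel in relations for i in rel)
        kept = [rel for rel in relations
                if all(counts[i] > 1 for i in rel)]
        if len(kept) == len(relations):
            return kept          # fixed point: no singletons left
        relations = kept         # removals may create new singletons
```

The cascading behaviour is visible on a chain like {1,2}, {2,3}, {3,4}: removing the relation holding the singleton ideal 1 makes ideal 2 a singleton, and so on until nothing survives.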