#56

May 2008
3·5·73 Posts
Quote:
I'd suggest breaking off ranges of 1000 for the a_d (e.g. 1–1000, 1001–2000, etc.) for each instance you run, and keeping track of the last range issued so you know which range to issue next each time an instance finishes.
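The bookkeeping suggested above can be sketched as a tiny range issuer. This is only an illustration of the idea; `RANGE_SIZE` and `next_range` are made-up names, not part of msieve or any of the scripts in this thread:

```python
# Minimal bookkeeper for handing out non-overlapping blocks of leading
# coefficients, 1000 at a time (1-1000, 1001-2000, ...).
# Illustrative only: RANGE_SIZE and next_range are hypothetical names.

RANGE_SIZE = 1000
_last_issued = 0  # highest coefficient handed out so far

def next_range():
    """Return the next (lo, hi) block to give to a finishing instance."""
    global _last_issued
    lo = _last_issued + 1
    hi = _last_issued + RANGE_SIZE
    _last_issued = hi
    return lo, hi

first = next_range()
second = next_range()
print(first, second)  # (1, 1000) (1001, 2000)
```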
#57

Sep 2009
2078₁₀ Posts
Thanks, that gives me something to work with. It seems the time to search a given range increases with target size roughly in proportion to the time needed for sieving, but that's based on only three numbers: the RSA100 and RSA110 tests shipped with GGNFS, and the c145 from 101!+2. Those are the only numbers for which I've used msieve 1.48 for polynomial selection.

I can't be very clever: I just set $NUM_CPUs tasks running and wait until they have all finished. But that will certainly be better than running one task and leaving the other CPUs idle, or manually combining results at 2 am. I may just ship a script that searches a constant range, e.g. 1-4000; that should work well enough for fire-and-forget work. I'm also looking at thresholds for the sievers: RSA110 is faster with gnfs-lasieve4I12e than with 13e, and I think RSA100 will be faster with 11e than with 12e (testing in progress).

Chris K
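The "start $NUM_CPUs tasks and wait until they have all finished" pattern described above can be sketched as follows. This is a shape-only illustration, not the poster's actual script: `run_range` is a hypothetical stand-in for launching one msieve polynomial-selection instance over a coefficient range.

```python
# Fan out one worker per CPU over fixed coefficient ranges, then join all.
# run_range() is a hypothetical placeholder for invoking msieve.
import os
import threading

results = []
lock = threading.Lock()

def run_range(lo, hi):
    # A real worker would run msieve over coefficients lo..hi here.
    with lock:
        results.append((lo, hi))

num_cpus = os.cpu_count() or 1
threads = [threading.Thread(target=run_range, args=(i * 1000 + 1, (i + 1) * 1000))
           for i in range(num_cpus)]
for t in threads:
    t.start()
for t in threads:
    t.join()  # wait until they have all finished
assert len(results) == num_cpus
```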
#58

(loop (#_fork))
Feb 2006
Cambridge, England
7²×131 Posts
This is the script that I use for working on aliquot sequences (I don't bother with parallelism for GNFS of less than 125 digits, just run one sequence per CPU); it includes a reasonably carefully-chosen set of thresholds for which siever to use when.
#59

Sep 2009
2·1,039 Posts
Hello,

Here's my script to do msieve polynomial selection in parallel. I've chopped out all the code for pol51opt etc., which makes it a lot shorter, and added several other features. Bless "man perlthrtut" for helpful advice. It still needs tuning for how many leading coefficients to check as a function of input size, and there are a few other places where I've put FIXMEs. msieve generates much better polynomials for degree 4 than the old way, and I'm using fivemack's thresholds for the 11e, 12e and 13e sievers, which should help as well. Other additions include copying and counting new relations in one pass (a small speedup) and putting more detail into the log (via a routine that writes messages to both the screen and the log). Note that it probably needs a later version of Perl than factMsieve.pl does, because I'm using shared variables to keep track of how much work each thread has done. Could someone please test it under Windows? I don't have any Windows systems so I can't guarantee it will work.

Chris K
#60

Sep 2009
2·1,039 Posts
After further testing it now works with "use warnings" without complaint; any Perl program more than a page long should have that. I fixed several minor bugs in the process.

I ought to make it work with "use strict" (which forces all variables to be declared explicitly), but that's a lot more work. Would it be worthwhile to split the square root step over several threads? I can see how to do it if the target has only two factors, but it's more "interesting" if it has more than two. What would msieve put in its log if told to run only one square root and it found a factor, but the factor or its cofactor is composite?

Chris K
#61

Bamboozled!
May 2003
Down not across
10,753 Posts
#62

Tribal Bullet
Oct 2004
3,541 Posts
Quote:
The bigger concern with the square root is that every dependency reads in all of the relations it needs, under the assumption that you're only running one dependency and so can save reading half of the relations into memory. Hence the memory use will increase linearly as you run more dependencies in parallel. A better option would be to multithread a single dependency, but that would require fairly significant code changes. Even better would be to do the square root modulo several small primes and combine the results via the Chinese Remainder Theorem. This would easily run in parallel without increased memory use for a single dependency, but the changes would be even more significant (the CADO toolkit does do this).

Last fiddled with by jasonp on 2011-04-04 at 02:25
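The CRT combination idea can be illustrated with a toy integer version. To be clear, this is not msieve's or CADO's algorithm: the real NFS square root works in a number field, and the primes and target value below are arbitrary. The sketch only shows how per-prime square roots recombine into one root modulo the product:

```python
# Toy illustration: take square roots modulo several small primes
# independently (this is the parallelizable part), then combine them
# via the Chinese Remainder Theorem. Not the actual NFS algorithm.

def sqrt_mod_p(a, p):
    """Brute-force square root of a mod a small prime p (None if no root)."""
    for x in range(p):
        if x * x % p == a % p:
            return x
    return None

def crt(residues, moduli):
    """Combine x = r_i (mod m_i) into one x mod prod(m_i) (moduli coprime)."""
    x, m = 0, 1
    for r, mi in zip(residues, moduli):
        t = ((r - x) * pow(m, -1, mi)) % mi  # solve x + m*t = r (mod mi)
        x += m * t
        m *= mi
    return x % m

primes = [5, 11, 13]   # arbitrary small primes for the illustration
a = 4                  # a square, so a root exists modulo each prime
roots = [sqrt_mod_p(a, p) for p in primes]   # each root computed independently
x = crt(roots, primes)
N = 5 * 11 * 13
assert x * x % N == a % N   # x is a square root of a modulo N
```

(`pow(m, -1, mi)` for the modular inverse requires Python 3.8 or later.)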
#63

Sep 2009
2·1,039 Posts
OK, it looks as if running several square roots at once isn't worthwhile. The square root phase is only a small fraction of the total run time, and it would be a lot of work from which only users with lots of memory could benefit.

I'll look at "use strict", but don't hold your breath waiting for it.

Chris K
#64

Sep 2009
2×1,039 Posts
Here is a version with "use strict". It should be functionally equivalent to the previous version. It would benefit from some more comments, e.g. explaining what all the global variables do, but that's a lot of work.

Chris K
#65

"Serge"
Mar 2008
Phi(4,2^7658614+1)/2
3⁶·13 Posts
An unusually good polynomial (judging from a spline of a couple dozen GNFS jobs, ranging from c149 to c187) came out overnight for the 10,293- c158:

Code:
22468299182926691249750949367562044927205538070590508331604856472240083043044809556455384578844088412992861912459289768255929608442462011150780165716818926119

Code:
# norm 3.146336e-15 alpha -8.270854 e 2.184e-12 rroots 5
c5: 17220
...
#66

Jun 2005
lehigh.edu
400₁₆ Posts
Quote:
factor (with B1=900M) on our new cluster of 8-core AMD chips. So GNFS gives us another p61; c158 = p61*p98. You didn't want me to find this second p61 by ECM, presumably? That was 15 hours for the matrix in GNFS (with the boring poly, not the new super one that I missed). Anyone care to estimate how long it would take to remove (to 80%) this second p61 by ECM? Still, this one is c218 = p61*p61'*p98, sort of unusual.

-Bruce
Similar Threads

| Thread | Thread Starter | Forum | Replies | Last Post |
|---|---|---|---|---|
| Polynomial selection | Max0526 | NFS@Home | 9 | 2017-05-20 08:57 |
| SNFS Polynomial selection help? | mhill12 | Factoring | 59 | 2013-09-09 22:40 |
| Best way to scale polynomial selection | pastcow | Msieve | 6 | 2013-05-08 09:01 |
| 2^877-1 polynomial selection | fivemack | Factoring | 47 | 2009-06-16 00:24 |
| Polynomial selection | CRGreathouse | Factoring | 2 | 2009-05-25 07:55 |