Taking L1878 (eta Monday night)
[QUOTE=Mini-Geek;374457]Taking GW_5_328[/QUOTE]
This LA is taking longer than I expected. I think it's a bit undersieved, and wanted to see if y'all agree... It has 120921902 relations and 95077647 unique relations. On other recent numbers, I had used a target density of 120 successfully. I tried doing this one at 128, and that failed...
[CODE]Wed May 28 10:57:24 2014  relations with 0 large ideals: 11551
Wed May 28 10:57:24 2014  relations with 1 large ideals: 2927
Wed May 28 10:57:24 2014  relations with 2 large ideals: 27120
Wed May 28 10:57:24 2014  relations with 3 large ideals: 235004
Wed May 28 10:57:24 2014  relations with 4 large ideals: 1132056
Wed May 28 10:57:24 2014  relations with 5 large ideals: 3301933
Wed May 28 10:57:24 2014  relations with 6 large ideals: 6103141
Wed May 28 10:57:24 2014  relations with 7+ large ideals: 16262407
Wed May 28 10:57:24 2014  commencing 2-way merge
Wed May 28 10:57:44 2014  reduce to 18567418 relation sets and 18203821 unique ideals
Wed May 28 10:57:44 2014  commencing full merge
Wed May 28 11:16:30 2014  memory use: 839.2 MB
Wed May 28 11:16:35 2014  found 35519 cycles, need 3815662
Wed May 28 11:16:35 2014  too few cycles, matrix probably cannot build[/CODE]
So I went down to target 110, and it's working, but slowly, with an 8.5M matrix.
[CODE]Wed May 28 14:05:09 2014  commencing linear algebra
Wed May 28 14:05:10 2014  read 8504021 cycles
Wed May 28 14:05:26 2014  cycles contain 26873451 unique relations
Wed May 28 14:08:59 2014  read 26873451 relations
Wed May 28 14:09:48 2014  using 20 quadratic characters above 1073741372
Wed May 28 14:11:29 2014  building initial matrix
Wed May 28 14:16:34 2014  memory use: 3485.8 MB
Wed May 28 14:16:40 2014  read 8504021 cycles
Wed May 28 14:16:42 2014  matrix is 8503844 x 8504021 (3659.9 MB) with weight 1054526252 (124.00/col)
Wed May 28 14:16:42 2014  sparse part has weight 874382210 (102.82/col)
Wed May 28 14:18:20 2014  filtering completed in 2 passes
Wed May 28 14:18:22 2014  matrix is 8503745 x 8503922 (3659.9 MB) with weight 1054523238 (124.00/col)
Wed May 28 14:18:22 2014  sparse part has weight 874381322 (102.82/col)
Wed May 28 14:18:46 2014  matrix starts at (0, 0)
Wed May 28 14:18:48 2014  matrix is 8503745 x 8503922 (3659.9 MB) with weight 1054523238 (124.00/col)
Wed May 28 14:18:48 2014  sparse part has weight 874381322 (102.82/col)
Wed May 28 14:18:48 2014  saving the first 48 matrix rows for later
Wed May 28 14:18:50 2014  matrix includes 64 packed rows
Wed May 28 14:18:51 2014  matrix is 8503697 x 8503922 (3508.3 MB) with weight 891751007 (104.86/col)
Wed May 28 14:18:51 2014  sparse part has weight 834635197 (98.15/col)
Wed May 28 14:18:51 2014  using block size 8192 and superblock size 589824 for processor cache size 6144 kB
Wed May 28 14:19:26 2014  commencing Lanczos iteration (4 threads)
Wed May 28 14:19:26 2014  memory use: 2979.0 MB
Wed May 28 14:19:56 2014  linear algebra at 0.0%, ETA 45h17m
Wed May 28 14:20:05 2014  checkpointing every 200000 dimensions[/CODE]
(full log at [url]http://pastebin.com/Xc8xy28B[/url]) (current status 16.7%, ETA 37h22m)
So the whole LA is going to take around 44.5h instead of ~18h (GW_4_381's time). I doubt it's really worth the trouble to sieve more and restart the postprocessing. I'm mainly curious as to the reason.
For reference, GW_4_383 had a target of 110, 82M unique relations, and the LA took 29 hours (7.1M matrix). These numbers had the same SNFS difficulty: 235. I'm not sure why that matrix ended up so much smaller; my undersieving hypothesis doesn't seem to match that...
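For what it's worth, the duplicate rate implied by those relation counts is easy to check (a quick sketch; this is just the raw/unique ratio from the filtering run above, not a full undersieving diagnosis):

```python
# Relation counts reported by msieve filtering, as quoted above.
raw_relations = 120_921_902
unique_relations = 95_077_647

# Fraction of raw relations that were duplicates.
duplicate_rate = 1 - unique_relations / raw_relations
print(f"duplicate rate: {duplicate_rate:.1%}")  # ~21.4%
```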
Eight CPU-days of post-processing for a job that took a CPU-year or so to sieve doesn't seem particularly unreasonable.
Some SNFS jobs are significantly easier than others for integers of the same size - the E-value (4.512e-13 for this number, 1.022e-12 for GW_4_381) reported by msieve isn't a bad metric. 328*5^328-1 = 205000*(5^54)^6-1 has quite a large leading coefficient, and the per-Q yields are rather low: [code]total yield: 952, q=90001003 (0.21419 sec/rel)[/code] I might well have run it with 31-bit large primes or 15e if I were doing it at home.
[QUOTE=fivemack;374539]Eight CPU-days of post-processing for a job that took a CPU-year or so to sieve doesn't seem particularly unreasonable.
Some SNFS jobs are significantly easier than others for integers of the same size - the E-value (4.512e-13 for this number, 1.022e-12 for GW_4_381) reported by msieve isn't a bad metric. 328*5^328-1 = 205000*(5^54)^6-1 has quite a large leading coefficient, the per-Q yields are rather low [code]total yield: 952, q=90001003 (0.21419 sec/rel)[/code] I might well have run it with 31-bit large primes or 15e if I were doing it at home.[/QUOTE] What about 328*(5^55)^6 -25? I don't have a grasp yet about when it's profitable to do a higher-difficulty number to get the coefficients closer to the same size. In base 2, I'd pick the higher-difficulty every time, but it's often just one digit worse; is base 5 just too big an increase in difficulty when multiplying by 5^2, or would test-sieving be in order to do this number at home? Edit: I'll test-sieve when I get home and answer my own question. |
Maybe also try 41*(2*5^55)^6 - 200 ?
Tests for GW_5_328:
Poly: 328*(5^55)^6-25, difficulty 234
Factor base 45M, test sieve at q=15M, 22M, 29M, q-range of 500 tested.
[CODE]30bit, 15e: 0.44/0.49/0.54 sec/rel, yield 2.3/2.9/1.9
30bit, 14e: 0.30/0.34/0.37 sec/rel, yield 1.4/1.7/0.9
31bit, 14e: 0.17/0.18/0.21 sec/rel, yield 2.5/2.9/1.6
31bit, 15e: 0.23/0.26/0.30 sec/rel, yield 4.4/5.6/3.5
32bit, 14e: 0.099/0.110/0.122 sec/rel, yield 3.6/4.8/2.9[/CODE]
Poly: 205000*(5^54)^6-1, difficulty 232
Factor base 42M, test sieve at q=14M, 21M, 28M, q-range 500 again.
[CODE]30bit, 15e: 0.52/0.64/0.71 sec/rel, yield 2.0/1.9/1.8
30bit, 14e: 0.37/0.44/0.51 sec/rel, yield 1.1/1.0/1.0
31bit, 14e: 0.21/0.24/0.29 sec/rel, yield 2.0/1.7/1.8
31bit, 15e: 0.29/0.34/0.38 sec/rel, yield 3.8/3.4/3.4
32bit, 14e: 0.12/0.14/0.16 sec/rel, yield 3.0/3.0/2.7[/CODE]
The alternate poly is about 20% faster, and has better yield. 31 bits and the 14e siever look like the fastest choice for either poly, though I should have chosen a wider spread of test q's to see how performance falls off; 31 bits/15e might be faster if yield is bad above 60M.
Third poly: 41*(2*5^55)^6-200, difficulty 235
Factor base 46.8M, test sieve at q=15.6M, 22.6M, 29.6M.
[CODE]31bit, 14e: 0.18/0.21/0.22 sec/rel, yield 2.3/2.7/1.9
31bit, 15e: 0.24/0.28/0.30 sec/rel, yield 4.0/5.0/3.7[/CODE]
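The relative speed of the two polys falls straight out of the 31bit/14e rows above; a quick sketch averaging the three sample q's (illustrative only, since real project cost also depends on how yield holds up over the whole q-range):

```python
# 31bit/14e sec/rel at the three test q's, from the tables above.
alt_poly = [0.17, 0.18, 0.21]    # 328*(5^55)^6 - 25
orig_poly = [0.21, 0.24, 0.29]   # 205000*(5^54)^6 - 1

def avg(xs):
    return sum(xs) / len(xs)

# Relative reduction in time per relation for the alternate poly.
speedup = 1 - avg(alt_poly) / avg(orig_poly)
print(f"alternate poly is {speedup:.0%} faster per relation on these samples")
```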
I didn't realize that the SNFS difficulty (as shown on NFS@Home's site) didn't (fully? at all?) account for the quality of the polynomial. Now things make more sense, thank you all. :smile: (LA will finish in under two hours, factors soon...)
GW_5_328:
[CODE]Fri May 30 09:53:14 2014  prp68 factor: 36070729183349044578826139337631001163466621497518182060460207882497
Fri May 30 09:53:14 2014  prp153 factor: 177652182127554317633013588855667184330024019547711538211195667616300243299857200431675710179263586004375057370491608406446473853790901087362035194139033[/CODE]
Log: [url]http://pastebin.com/zrhwAemB[/url]
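The reported factors can be sanity-checked with a Fermat test to base 2 (a sketch; msieve has already done stronger probable-prime testing, so this mainly guards against transcription errors):

```python
# Factors of GW_5_328 as reported in the msieve log above.
p68 = 36070729183349044578826139337631001163466621497518182060460207882497
p153 = 177652182127554317633013588855667184330024019547711538211195667616300243299857200431675710179263586004375057370491608406446473853790901087362035194139033

# Digit counts should match the prp68/prp153 labels.
assert len(str(p68)) == 68 and len(str(p153)) == 153

for p in (p68, p153):
    # Fermat test: 2^(p-1) = 1 (mod p) is necessary (not sufficient)
    # for p to be prime; a composite would almost certainly fail it.
    assert pow(2, p - 1, p) == 1

print("both factors pass a base-2 Fermat test")
```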
[QUOTE=Mini-Geek;374579]I didn't realize that the SNFS difficulty (as shown on NFS@Home's site) didn't (fully? at all?) account for the quality of the polynomial. Now things make more sense, thank you all. :smile: (LA will finish in under two hours, factors soon...)[/QUOTE]
Difficulty is simply a report of the size of the input number to the SNFS polynomial; it ignores factors already found, because they don't help the SNFS process. If you multiply the input number by a constant to generate a lower-coefficient poly, the difficulty goes *up*, even though the time to sieve goes down if a better poly is produced (as shown in my test-sieves). Just like with GNFS, the Murphy-E score is the most useful single measure of a project's length.
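Concretely, the difficulty bump from those constant multipliers is just the growth in digit count of the polynomial's input value; a sketch (this reproduces the difficulties quoted in the test-sieve post):

```python
# GW_5_328; the three SNFS input values from this thread are N, 25*N, 200*N.
N = 328 * 5**328 - 1

def digits(n):
    return len(str(n))

print(digits(N))        # 232: difficulty of 205000*(5^54)^6 - 1
print(digits(25 * N))   # 234: difficulty of 328*(5^55)^6 - 25
print(digits(200 * N))  # 235: difficulty of 41*(2*5^55)^6 - 200
```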
I'll take GW_3_480 next.
|
GW_3_480 splits as:
[CODE]prp107 factor: 75459052010792357332187545900034145096359342679522332939386097305826953891124342765279028026368835757430469
prp120 factor: 615434574018697065474164844871817072246192777361841174938759752035748764103418335564021148275354263935723639974282151403[/CODE]
13.5 hrs to solve a 4.9M matrix using -t 4 target_density=118 on a Core i5.