[QUOTE=fivemack;376528][code]
Mon Jun 23 09:24:43 2014  prp62 factor: 61485052172622249945361917879233929833963320045488274066924321
Mon Jun 23 09:24:43 2014  prp63 factor: 379822688669274657821765647599492668951035371288584739667986329
Mon Jun 23 09:24:43 2014  prp71 factor: 24645936917778708462745824067630552339820614889500664865964520963713273
[/code]34.5 hours for a 6.2M matrix on an i7/2600 -t3. Log at [url]http://pastebin.com/BV6TACgs[/url]

[B]But the factors have already been in factordb for two months[/B]; they were added 2 April 2014 at 1830 and 14 April 2014 at 1126. The P62 is factor 16282 in Paul's file [url]http://www.leyland.vispa.com/numth/factorization/cullen_woodall/2014q1.txt[/url] and is quite an impressive ECM hit - the ECM data is in factordb. I have checked that none of the currently-queued 14e numbers are factored in factordb.[/QUOTE]Oh dear :sad: I'm sorry everyone wasted so much work. This one obviously fell through the gaps. Rob Hooft has been pre-testing the NFS@Home candidates, and I remember his p62 hit because it was the largest ECM factor yet found on my tables. Either Greg and/or Lionel weren't informed of the result, or they didn't check the factorization tables; either way, that candidate should have been removed from the NFS@Home queue.
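[Editor's note: since the lesson of this post is "check factordb before queueing", here is a minimal pre-check sketch in Python. It assumes factordb's public JSON endpoint at http://factordb.com/api with a `query` parameter returning a `status` field; treat the endpoint details and status strings as assumptions about that interface, not a documented guarantee.]

```python
# Hedged sketch: pre-check a candidate against factordb before queueing it
# for sieving. The endpoint URL and JSON shape are assumptions about
# factordb's public API, not taken from this thread.
import json
from urllib.parse import urlencode
from urllib.request import urlopen

def factordb_status(n):
    """Ask factordb for its status string for n, e.g. 'C' (composite,
    no known factors), 'CF' (partially factored), 'FF' (fully factored)."""
    url = "http://factordb.com/api?" + urlencode({"query": str(n)})
    with urlopen(url, timeout=30) as resp:
        return json.load(resp)["status"]

def worth_queueing(status):
    # Skip anything factordb already knows is fully factored or (probably) prime.
    return status not in ("FF", "P", "Prp", "PRP")
```

A periodic run of this over the queued candidates would have caught the already-factored number before the 34.5-hour LA job started.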
[QUOTE=Mini-Geek;376417]Taking GW_2_757[/QUOTE]
Factors pretty evenly: p86*p89
[CODE]prp86 factor: 41204974793675077659428219868058342930164756826112053219387865477999140398399782584709
prp89 factor: 16900356158229361845274688068630988327742249835914398415296509818742049932857119804465479[/CODE]
L1902 is underway. Approx. 1 week for a 15.25M matrix, memory use 5900 MB. I set target density at 120 and ended up with 115.37; for future reference, is that probably a good density for a job like this, or too low/high?

This is my 4000th post! :blahblah:
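[Editor's note: for anyone wanting to double-check reported factors like these locally, a standard Miller-Rabin probable-prime test is all that "prp" implies. The sketch below is an illustration, not part of msieve's output.]

```python
# Miller-Rabin probable-prime test, used here to sanity-check the two
# reported cofactors from the post above.
import random

def is_probable_prime(n, rounds=20):
    """Return True if n passes trial division and `rounds` Miller-Rabin rounds."""
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37):
        if n % p == 0:
            return n == p
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False
    return True

p86 = 41204974793675077659428219868058342930164756826112053219387865477999140398399782584709
p89 = 16900356158229361845274688068630988327742249835914398415296509818742049932857119804465479
```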
That's a pretty good density: I usually use 112 just because it seemed a natural step down from 128, which hadn't worked for me, but 120 is also good if it works ...
Taking GW_2_756, which ought to be reasonably quick (ETA Wednesday afternoon)
After that, I will do GC_3_478 (ETA Monday morning).

I'm assuming that Lionel's slightly vague "I'll post-process two 14e numbers" of yesterday has turned into GW_7_270 and GC_2_757, as it says on the status page [url]http://escatter11.fullerton.edu/nfs/crunching.php[/url]
[B]But the factors have already been in factordb for two months[/B]; they were added 2 April 2014 at 1830 and 14 April 2014 at 1126.
Sorry about that! End of semester transition was obviously rougher than I thought. :no:
[QUOTE=frmky;376559][B]But the factors have already been in factordb for two months[/B]; they were added 2 April 2014 at 1830 and 14 April 2014 at 1126.
Sorry about that! End of semester transition was obviously rougher than I thought. :no:[/QUOTE]
Don't worry about it. Would you mind re-queueing 5W327 with the right polynomial?
Post-processing GC_2_757 at target density 87 aborts with
[code]matrix is 7366788 x 7338027 (2613.3 MB) with weight 770623888 (105.02/col)
sparse part has weight 604332152 (82.36/col)
filtering completed in 2 passes
matrix is 7366355 x 7337593 (2613.2 MB) with weight 770610007 (105.02/col)
sparse part has weight 604327951 (82.36/col)
matrix starts at (0, 0)
matrix is 7366355 x 7337593 (2613.2 MB) with weight 770610007 (105.02/col)
sparse part has weight 604327951 (82.36/col)
matrix needs more columns than rows; try adding 2-3% more relations[/code]
And I'm the one who put it in the "queued for post-processing" state, because I thought that ~133M raw relations would be more than enough... There are 107281034 unique relations, so the duplicate ratio is in the usual ballpark.

Should I try to do more sieving on my own, or rather try to get rid of several million unique relations?
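[Editor's note: the "usual ballpark" claim checks out. With the figures in the post (the 133M raw count is approximate, per "~133M"), the arithmetic is:]

```python
# Duplicate-ratio arithmetic from the figures in the post above.
raw = 133_000_000      # approximate raw relation count ("~133M")
unique = 107_281_034   # unique relations reported after dedup
dup_ratio = 1 - unique / raw
print(f"duplicate ratio ≈ {dup_ratio:.1%}")  # about 19.3%
```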
[QUOTE=debrouxl;376563]
Should I try to do more sieving on my own, or rather try to get rid of several million unique relations?[/QUOTE]
I'd just run with target density 70; it should work and it shouldn't much increase the runtime.
[QUOTE=Mini-Geek;376552]L1902 is underway. Approx. 1 week for a 15.25M matrix, memory use 5900 MB.[/QUOTE]
[CODE]commencing Lanczos iteration (4 threads)
memory use: 5901.7 MB[/CODE]
Actual memory usage: 10933.2 MB

Looks like msieve can't count. :ermm: To my recollection, that "memory use" figure is usually pretty close to the actual memory used during the LA. Does anyone know why it isn't in this particular post-processing? Log (so far) is attached.
Even at density 70, GC_2_757 still aborts with:
[code]building initial matrix
memory use: 2850.9 MB
read 7912027 cycles
matrix is 7940788 x 7912027 (2361.5 MB) with weight 702327386 (88.77/col)
sparse part has weight 532021020 (67.24/col)
filtering completed in 2 passes
matrix is 7940026 x 7911264 (2361.4 MB) with weight 702306193 (88.77/col)
sparse part has weight 532015715 (67.25/col)
matrix starts at (0, 0)
matrix is 7940026 x 7911264 (2361.4 MB) with weight 702306193 (88.77/col)
sparse part has weight 532015715 (67.25/col)
matrix needs more columns than rows; try adding 2-3% more relations[/code]
This is mostly out of curiosity, and I don't really expect it to work, but could you try decompressing the input file (if it's compressed), and filtering out bad lines by
[code]mv msieve.dat unfiltered
grep -P '^-?[0-9]+,[0-9]+:[0-9a-fA-F,]*:[0-9a-fA-F,]*$' unfiltered > msieve.dat[/code]
and trying again?
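[Editor's note: for reference, a Python equivalent of that grep filter (same pattern: an optional-sign `a,b` pair followed by two comma-separated hex factor lists), handy where `grep -P` is unavailable.]

```python
# Python equivalent of the grep -P filter above: keep only relation lines
# of the form "a,b:hex,hex,...:hex,hex,..." and drop anything malformed.
import re

RELATION = re.compile(r'^-?[0-9]+,[0-9]+:[0-9a-fA-F,]*:[0-9a-fA-F,]*$')

def filter_relations(lines):
    """Yield only syntactically well-formed relation lines."""
    for line in lines:
        if RELATION.match(line.rstrip("\n")):
            yield line
```

Streaming `open("unfiltered")` through this into a fresh msieve.dat reproduces the shell pipeline.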