[QUOTE=fivemack;421370]Nine hours is pretty short (though remember that post-processing saves checkpoints, and you can stop with ^C and restart with -ncr); the newly-queued SNFS(22x) jobs from XYYXF may be small enough; C220_120_79 is the one I'd go for.[/QUOTE]
I am aware of the checkpoint side of things. The reason I like short numbers is that, as a rule, they are smaller to download and the post-processing turnaround is faster. Unfortunately I missed the number you suggested. I am looking at taking C170_122_63, as its SNFS difficulty is about 220. If somebody wants it before I reserve it, please go ahead.
C170_122_63 reservation query
[QUOTE=Speedy51;422610]I am aware of the checkpoint side of things. The reason I like short numbers is that, as a rule, they are smaller to download and the post-processing turnaround is faster. Unfortunately I missed the number you suggested. I am looking at taking C170_122_63, as its SNFS difficulty is about 220. If somebody wants it before I reserve it, please go ahead.[/QUOTE]
Thank you to whoever put my name beside this number. However, I notice it has been reserved under my forum name and not my real name, Jarod McClintock. Does anybody have an idea of how big the DAT file will be and how long it will take to run on an i7-5960X at stock speed (3 GHz) with 16 GB of RAM? Thanks for the information.
It looks like many other jobs you've done. It's around 59 bytes per relation (compressed), and the page suggests ~110M relations for 30-bit large primes, so around 6 GiB. The time should certainly be less than a day, though I don't know which side of 12 hours it would be.
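As a quick sanity check on that size estimate (plain shell arithmetic, using the ~59 bytes/relation and ~110M relation figures above):
[code]
$ echo $((59 * 110000000))   # ~59 bytes/relation x ~110M relations
6490000000                   # about 6.0 GiB (6490000000 / 2^30 ~ 6.04)
[/code]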
[QUOTE=Dubslow;422647]It looks like many other jobs you've done. It's around 59 bytes per relation (compressed), and the page suggests ~110M relations for 30-bit large primes, so around 6 GiB. The time should certainly be less than a day, though I don't know which side of 12 hours it would be.[/QUOTE]
Thanks. Can I use the following command to start downloading the file? I am aware there are certain parts missing: [c]wget --continue --limit-rate=64k --user= --password= http://escatter11.fullerton.edu/nfs_data/C170_122_63/C170_122_63.dat.gz; sleep 3600; done[/c] If I download the file this way, do I have to do anything special at the end, or will it all be joined as one file? And I gather we can set --limit-rate= to whatever we want? This is the first time I have used such a command.
[QUOTE=Speedy51;422659]Thanks. Can I use the following command to start downloading the file? I am aware there are certain parts missing: [c]wget --continue --limit-rate=64k --user= --password= http://escatter11.fullerton.edu/nfs_data/C170_122_63/C170_122_63.dat.gz; sleep 3600; done[/c] If I download the file this way, do I have to do anything special at the end, or will it all be joined as one file? And I gather we can set --limit-rate= to whatever we want? This is the first time I have used such a command.[/QUOTE]
You are downloading one file, and one file you will get, just like all the other times you've post-processed. --limit-rate tells wget the maximum download rate (bandwidth usage) to allow.
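For what it's worth, your fragment looks like it was meant to be wrapped in a retry loop. A minimal sketch (fill in --user=/--password= yourself; the hourly retry interval is taken from the sleep 3600 in your snippet):
[code]
# retry hourly until wget exits successfully;
# --continue resumes the partial file on each attempt
while ! wget --continue --limit-rate=64k --user= --password= \
    http://escatter11.fullerton.edu/nfs_data/C170_122_63/C170_122_63.dat.gz
do
    sleep 3600
done
[/code]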
Reserving 8821_61_minus1.
C175_4788_5241
The composite from aliquot sequence 4788 has been factored with a very nice split, p2/p1 < 10:
[code]
commencing square root phase
reading relations for dependency 1
read 3876794 cycles
cycles contain 13857722 unique relations
read 13857722 relations
multiplying 13857722 relations
multiply complete, coefficients have about 766.73 million bits
initial square root is modulo 7563427
sqrtTime: 2414
p87 factor: 430802617242534521281556392152002775332022453248704238219208064962458560609604821206017
p88 factor: 3624044865067865720135307373451057373557807330604385686289032335250978044812423406553167
elapsed time 00:40:15
[/code]
It took a bit shy of 70 hours on all 8 threads of an i7-2600K, though it's my personal computer and was not idle all the time, including at least two hours where the LA was totally paused. The log will show my various attempts at getting msieve file names and switches correct -- among other things, apparently -ncr doesn't imply -nc3.

Edit: The matrix was around 7.75M, built with target_density=130 (which, as I understand, is quite high, but it also seemed quite over-sieved to me).
[code]
Fri Jan 15 04:42:30 2016  commencing linear algebra
Fri Jan 15 04:42:31 2016  read 7752972 cycles
Fri Jan 15 04:42:48 2016  cycles contain 27716916 unique relations
Fri Jan 15 04:52:25 2016  read 27716916 relations
Fri Jan 15 04:53:12 2016  using 20 quadratic characters above 4294917295
Fri Jan 15 04:54:53 2016  building initial matrix
Fri Jan 15 05:00:15 2016  memory use: 3857.5 MB
Fri Jan 15 05:00:19 2016  read 7752972 cycles
Fri Jan 15 05:00:21 2016  matrix is 7752793 x 7752972 (3916.4 MB) with weight 1189883819 (153.47/col)
Fri Jan 15 05:00:21 2016  sparse part has weight 933627118 (120.42/col)
Fri Jan 15 05:02:02 2016  filtering completed in 2 passes
Fri Jan 15 05:02:04 2016  matrix is 7752457 x 7752636 (3916.4 MB) with weight 1189865271 (153.48/col)
Fri Jan 15 05:02:04 2016  sparse part has weight 933620115 (120.43/col)
Fri Jan 15 05:03:42 2016  matrix starts at (0, 0)
Fri Jan 15 05:03:43 2016  matrix is 7752457 x 7752636 (3916.4 MB) with weight 1189865271 (153.48/col)
Fri Jan 15 05:03:43 2016  sparse part has weight 933620115 (120.43/col)
Fri Jan 15 05:03:43 2016  saving the first 48 matrix rows for later
Fri Jan 15 05:03:45 2016  matrix includes 64 packed rows
Fri Jan 15 05:03:46 2016  matrix is 7752409 x 7752636 (3815.1 MB) with weight 1026143909 (132.36/col)
Fri Jan 15 05:03:46 2016  sparse part has weight 922568351 (119.00/col)
Fri Jan 15 05:03:46 2016  using block size 8192 and superblock size 786432 for processor cache size 8192 kB
Fri Jan 15 05:04:21 2016  commencing Lanczos iteration (8 threads)
Fri Jan 15 05:04:21 2016  memory use: 3194.3 MB
[/code]
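For anyone hitting the same switch confusion, here is a rough sketch of the separate msieve stages (flag meanings as documented in msieve's readme.nfs; the file names here are placeholders, not the ones I actually used):
[code]
# filtering, with an explicit target density
msieve -s job.dat -l job.log -nf job.fb -i job.ini -t 8 -nc1 "target_density=130"
# linear algebra; writes periodic checkpoints, so it can be interrupted
msieve -s job.dat -l job.log -nf job.fb -i job.ini -t 8 -nc2
# restart the linear algebra from the last checkpoint after an interruption
msieve -s job.dat -l job.log -nf job.fb -i job.ini -t 8 -ncr
# square root -- has to be run explicitly, since -ncr does not imply it
msieve -s job.dat -l job.log -nf job.fb -i job.ini -nc3
[/code]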
C219_127_57 done
[code]
Mon Jan 18 08:21:15 2016  p53 factor: 30616003696099954381642063749711730195963203825482041
Mon Jan 18 08:21:15 2016  p166 factor: 7714764781728414548741417186472049141202290325908411867168772990125828157277628427664514249440253270539277416395045457380765814639295127918491951474043017703261065229
[/code]
20.9 hours for a 6.58M density-120 matrix on seven cores of an E5-2650v2 (probably not entirely idle). Log attached.
C220_120_79: 11 hours for pretty small 5.3M matrix (TD=120)
[CODE]
prp53 factor: 11128045331010217413015279396287804977183688795427299
prp168 factor: 174886193797798712318441610702475795938459285367516715405060911091657516643307944940231653782840826902173758801359189202098933930086984072963064666660932198044232626163
[/CODE]
C170_119_79: 18.7 hours for a 6.7M matrix (TD=120):
[CODE]
prp59 factor: 62816776104045212956746785494921086815091413384959342959841
prp111 factor: 199015708111943239095750091572500193580824535235215583939669395129315035963269076644650780893898808146459131781
[/CODE]
Logs attached.
128^95+95^128 under-sieved?
Does 128^95+95^128 have enough relations to ultimately build a matrix? I can attempt to post-process it, but the TD will probably be very low. While I am aware that 1 day of BOINC >> 1 day on my machine, wouldn't a bit more sieving be prudent? Either way, it will be a few days before I can start downloading.
ETA: 142^141+141^142 should be factored tomorrow.
I've put 128^95+95^128 up for another 20MQ of sieving - if you won't be downloading it for a few days, it might as well be being sieved for those few days. It should get to nearly 400M relations, which ought to run happily at TD=120.
131^97+97^131 is going to be another ten days or so.