F1361 now started; 39.15M matrix on 16 cores E5-2650v1, ETA March 27th.
C164_140xx289_7 phi_7(largest factor of phi_7(phi_13(7))) factored
[QUOTE=richs;479950]Reserving C164_140xx289_7 phi_7(largest factor of phi_7(phi_13(7))) in 14e[/QUOTE]
[CODE]
p64 factor: 6066887398194432577444110333739654184687284554543212625495468119
p100 factor: 3934210891069458617891085562188165606006344549911421032509843087017919157780051398497000480133621451
[/CODE]
46.6 hours on 2 threads of a Core i3-2310M with 4 GB memory for a 5.29M matrix at TD = 70 (didn't bother to try 130). Redacted log attached and at [url]https://pastebin.com/wRGG7cqU[/url] (about 30,000 consecutive relation error messages were removed to fit forum and Pastebin size limits). Factors reported to the factor database.
Interested in C190_659xx917_5
I would be interested in taking it, assuming it will fit into 8 or 9 GB of RAM. Would somebody be able to give me an estimate of how many hours it would take? If this number will not work within 8 to 9 GB of RAM, please feel free to recommend one that will. It may take me some time to process, as my computer only runs roughly 10 to 12 hours a day. I have a 5960X running stock at 3 GHz, and I want to be able to use the machine for everyday tasks too. Thanks in advance for any assistance.
[QUOTE=Speedy51;481137]I would be interested in taking it assuming it will fit into 8 or 9 gig of RAM. Would somebody be able to give me an estimate on how many hours it would take. If this number will not work with in 8 to 9 gig of RAM please feel free to recommend one . It may take me some time to process as my computer is running roughly 10 to 12 hours a day. I have a 5960 X running stock at 3 GHz. I want to be able to use machine to be able to do everyday tasks too. In advance for assistance[/QUOTE]
As an example of memory use, here is 8_271_minus_5_271, SNFS(245), 31-bit, 242 digits:
[code]
Mon Oct 05 22:09:49 2015  commencing relation filtering
Mon Oct 05 23:44:35 2015  memory use: 4378.6 MB
...
Tue Oct 06 00:32:51 2015  commencing linear algebra
...
Tue Oct 06 00:51:53 2015  matrix is 9890546 x 9890724 (4634.8 MB) with weight 1341255398 (135.61/col)
[/code]
That task is a lot smaller (SNFS 220 with 30-bit large primes), so it should easily fit in 4 GB of free memory.

By the way: OddPerfect C185_226741_43b has 400M+ raw relations for a 31-bit job, and OddPerfect C190_659xx917_5 has 241M raw relations for a 30-bit job? That doesn't seem right - oversieved by almost a factor of 2?
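As a quick sanity check on a log line like the one above: the "135.61/col" figure is simply the total matrix weight divided by the number of columns. A throwaway one-liner (using the numbers from that log) reproduces it:

```shell
# Average nonzeros per matrix column = total weight / number of columns.
# Numbers taken from the 8_271_minus_5_271 log quoted above.
awk 'BEGIN { printf "%.2f/col\n", 1341255398 / 9890724 }'
# prints 135.61/col
```

Denser matrices (more nonzeros per column, e.g. from a higher target_density) are smaller in dimension but cost more per LA iteration.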
5869^71-1 (OPN) is aiming at almost 1B relations over on 15e as well, for what it's worth. (This is a 32-bit job.)
If it will help things, I can grab 5869^71-1 once the number of relations is north of 460M and build a matrix. Once LA is underway and an ETA is established, the sieving job can then be killed.
[QUOTE=VictordeHolland;481157]As an example for memory use:
8_271_minus_5_271, SNFS(245), 31-bit, 242 digits:
[code]
Mon Oct 05 22:09:49 2015  commencing relation filtering
Mon Oct 05 23:44:35 2015  memory use: 4378.6 MB
...
Tue Oct 06 00:32:51 2015  commencing linear algebra
...
Tue Oct 06 00:51:53 2015  matrix is 9890546 x 9890724 (4634.8 MB) with weight 1341255398 (135.61/col)
[/code]
That task is a lot smaller (SNFS 220 with 30-bit large primes), so it should easily fit in 4 GB of free memory. By the way: OddPerfect C185_226741_43b has 400M+ raw relations for a 31-bit job, and OddPerfect C190_659xx917_5 has 241M raw relations for a 30-bit job? Doesn't seem right - oversieved by almost a factor of 2?[/QUOTE]

Thank you for the information. I will reserve C200_213xx011_5, assuming it will fit in 9 GB of RAM or less. Depending on how long postprocessing takes, it could be a couple of weeks before I get the results in; if the result is needed in a hurry, someone else should please feel free to take it. I am unsure where I got the numbers starting with C190 from. It currently has just over 38,000 relations remaining. If somebody is keen to PM me the details on how I can download it in increments/parts, please feel free to do so. Thanks.
In order to perform postprocessing, you must get a login ID/password from Greg Childers (user “frmky”). He maintains the servers hosting NFS@Home. Until you get this you cannot proceed. See [url=http://www.mersenneforum.org/showpost.php?p=188690&postcount=1]this post by him[/url] and send him a PM.
Once you get in, find the directory named for the composite you plan to postprocess. Right-click/'save as' the .ini file, the .fb file, and the large .dat file (this file will be compressed). Save all three in the directory containing msieve.

Once downloaded, rename the .ini file to worktodo.ini and the .fb file to msieve.fb. Decompress the .dat file with 7zip and rename it to msieve.dat.

On Windows, open a cmd window and use the following command line in the directory containing msieve:
[code]
msieve -v -nc target_density=110 -t 4
[/code]
The target_density value can be changed as desired; I've used 110 as a starting value. You may need to lower it if msieve refuses to build a matrix and exits, or raise it to attempt to build a denser matrix. If the target_density parameter is not used, the default value is 70. The -t 4 signifies 4 threads; modify as necessary.
[code]
msieve -h
[/code]
will show all possible parameters and answer questions.

I've kept the above simple - there are alternative ways to do some steps, but this should get you started. Linux is similar in use, though the syntax is a bit different of course. Post any questions you may have, or if you have problems reaching Greg.
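On the Linux side, the rename/decompress steps above boil down to something like the following sketch. It runs against dummy stand-in files so it can execute on its own; the job.* names are placeholders for whatever .ini/.fb/.dat.gz files you actually downloaded from the server directory for your composite, and the msieve invocation itself is left commented out since it only makes sense with real data:

```shell
# Sketch of the msieve file-preparation steps, using placeholder files.
set -e
work=$(mktemp -d)
cd "$work"

# Stand-ins for the three files downloaded from the NFS@Home directory.
echo 'n: 123456789'   > job.ini
echo '# factor base'  > job.fb
echo 'dummy relation' | gzip > job.dat.gz

mv job.ini worktodo.ini            # msieve expects these exact names
mv job.fb  msieve.fb
gunzip -c job.dat.gz > msieve.dat  # decompress the relations file

ls worktodo.ini msieve.fb msieve.dat
# Then run postprocessing itself (same flags as the Windows example):
#   msieve -v -nc target_density=110 -t 4
```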
Thank you, yes, I have the information to get into the server. Is it better to build a bigger matrix or a smaller one? I am aware the bigger one will take longer.
[QUOTE=Speedy51;481216]Thank you yes I have information to get into server. Is it better to build a bigger matrix or smaller? I am aware figure will take longer[/QUOTE]
I suggest you start smaller and go from there. A 30-bit job from the 14e queue can be completed in a day or two depending on hardware. You'll factor the composite and see how it all works. Afterwards, if you're interested, you can rerun the same job and change things like target_density to understand their effect on postprocessing. Just note I've been doing this for a few years and I'm still learning new things.

Suggest you reserve C192_988xx249_11 (Phi_11(Phi_7(70841)/7/29/6301)) and go from there. A couple of notes:

- One can download the data at any time, even when the column 'Est. Pending Relations' is > 0, without issue.
- More relations is not always a good thing - the opposite, in fact. In general, the number of relations required is centered on the large prime bound (lpb) of the job, i.e. 30-bit in this case. 110-125M relations is all you will need for a 30-bit job; the number roughly doubles with each additional bit.

Good luck!
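That rule of thumb can be turned into a ballpark estimate; the ~120M baseline at 30-bit and the doubling per extra bit are my assumptions from the note above, not official figures:

```shell
# Rough estimate of relations needed for a given large prime bound (lpb).
# Assumes ~120M at 30-bit, doubling per additional bit (rule of thumb only).
lpb=31
awk -v lpb="$lpb" 'BEGIN { printf "lpb %d: ~%dM relations\n", lpb, 120 * 2^(lpb - 30) }'
# prints lpb 31: ~240M relations
```

For a 31-bit job this gives roughly 240M, which is in the right neighborhood; actual need varies with the polynomial and sieving range.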
[QUOTE=swellman;481223]I suggest you start smaller and go from there. A 30-bit job from the 14e queue can be completed in a day or two depending on hardware.
Suggest you reserve C192_988xx249_11 (Phi_11(Phi_7(70841)/7/29/6301)) Good luck![/QUOTE]

Thank you for the suggestion. I would like to take the number in the paragraph above. I am aware the number is ready. I will get to it 5 to 6 hours from this post.
Taking C200_213xx011_5.
BTW, C190_659xx917_5 is really a 31-bit job - just a misprint on the summary page. 250M relations is about right. Well, a little over. :smile: