- **Information & Answers**
(*https://www.mersenneforum.org/forumdisplay.php?f=38*)

- - **100MD LL Test - Newb Question**
(*https://www.mersenneforum.org/showthread.php?t=14511*)

I'm running an LL test on a 100MD assignment. I just got done factoring to 76 bits (took 11 days). Now Prime95 is running a P-1 factoring test, and then I suppose it will start the LL. My questions are:

1) Technically speaking, how many bits is M332208529? (i.e., how far could I trial factor this, despite the fact that Prime95 has determined that 76 bits is far enough?)

2) I have [U]multiple systems[/U] available for running Prime95:
a) On a different system, should I trial factor this assignment manually to 77 or 78 bits, in an effort to improve the odds that I am not wasting [B]A LOT[/B] of time on the primary LL assignment?
b) On a different system, after the P-1 test is complete and I get the B1 bound result, should I run some ECM curves at the same time? If so, how many curves?

Thanks in advance for your insights.

Neo |

[QUOTE=Neo;244180]1) Technically speaking, how many bits is M332208529? (i.e., how far can I trial factor this despite the fact that Prime95 has determined that 76 bits is far enough?)[/QUOTE]
M332208529 is 332208529 bits long, since Mersenne numbers are defined as 2^p-1, which in binary is always p bits long (just like 10^n-1 is always n digits long). The maximum you might ever have to trial factor to is the square root of the number, which is 332208529/2 = 166104264.5 bits. Each successive bit takes about twice as long as the one before it, or in other words about as long as all lower bit levels put together. I hope you can see that this is not possible on a universe time-scale, let alone practical. :smile:

TF is not very useful except at the beginning, and only it and P-1 are useful as prefactoring before primality tests. Other methods like P+1, ECM, and NFS come in if you intend to fully factor a number, but it is FAR harder to fully factor than to test for primality (e.g. the smallest not-fully-factored Mersenne number is M929, while the smallest not-primality-tested number is M37591483).

[QUOTE=Neo;244180]2) I have [U]multiple systems[/U] available for running Prime95; a) On a different system, should I trial factor this assignment manually to 77 or 78 bits in an effort to better increase the odds that I am not wasting [B]A LOT[/B] of time on the primary LL assignment? b) On a different system, and after the P-1 test is complete and I get the B1 bound result, should I run some ECM curve tests at the same time? If so, how many curves?[/QUOTE]

If your goal is to finish the number in the shortest amount of CPU time (in other words, most efficiently for the systems you have), you should only run the Prime95 default settings, regardless of how many systems are available. If your goal is to finish the number in the shortest calendar time, you should run as much TF, P-1, and maybe even ECM as you can on as many systems as are available, while doing the LL test with as many cores as possible on the fastest hardware available.

I recommend, and all normal assumptions are based on, the former (most efficient use of CPU time), or at least something much closer to that than the latter. Your other systems' time would be much better spent doing work on numbers besides this one (whether 100M digits or something else, GIMPS or not). But if it's this or nothing, this is better than nothing. This can change in favor of doing a few more bits of TF if you have a fast CUDA-capable (which requires that it be NVidia) GPU available to run TF on, whether on the LLing system or another. See [url]http://www.mersenneforum.org/showthread.php?t=12827[/url] (look near the end for the latest code) for a CUDA-based factoring app for Mersenne numbers.

You should be aware that an LL test on a 100 million digit number will take a very long time (look at Test > Status or Advanced > Time for an estimate on your computer), and that there's a fairly high chance of an error some time during the run, which would invalidate the result. To minimize this risk, you should enable both error-checking options in the Advanced menu, although this will slow the test down by a few percent. If possible, it'd be best to run the test on a system with ECC (error-correcting, usually only found in servers) memory, but I doubt that's an option. If the machine is multi-core, you can run the test on more than one thread to speed it up (see Test > Worker Windows > CPUs to use). It's not perfectly efficient, e.g. it won't be quite 4x as fast on four cores of a quad-core, but it could reduce the test time from well over a year to under a year. |
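Both points can be checked directly: M_p really is exactly p bits long, and TF only ever tries candidates of the special form q = 2kp+1 with q ≡ ±1 (mod 8). Here's a toy Python sketch (not how Prime95 actually implements TF, and `tf_mersenne` is a made-up name):

```python
def tf_mersenne(p, max_bits):
    """Toy trial factoring of M_p = 2^p - 1.

    Every prime factor q of M_p has the form q = 2*k*p + 1 and
    satisfies q % 8 in (1, 7), so all other candidates are skipped.
    q divides M_p exactly when 2^p == 1 (mod q)."""
    k = 1
    while True:
        q = 2 * k * p + 1
        if q.bit_length() > max_bits:
            return None          # no factor up to max_bits bits
        if q % 8 in (1, 7) and pow(2, p, q) == 1:
            return q
        k += 1

# 2^p - 1 is always exactly p bits long:
assert ((1 << 332208529) - 1).bit_length() == 332208529

# Cole's 1903 factorisation of M67 falls out in a second or two:
print(tf_mersenne(67, 32))   # 193707721
```

Raising `max_bits` by one roughly doubles the length of the loop, which is exactly the "each bit level costs as much as all the previous ones put together" growth described above.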

Your machine will TF to 77 bits [B]after[/B] the P-1 is finished, by default. Then the LL will start.
What Mini-Geek said is correct: if you want to speed up the test relative to the wall clock, then a multicore machine is the way to go. If you want to be most efficient, then there are other things to do. Remember, if you do things in parallel (lower TF on one machine, P-1 on another, ECM on a third, high TF on a fourth, and LL on a fifth) and you find a factor below 76 bits (for example), then all the other effort is wasted. |
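The numbers behind that trade-off can be roughed out with the ~1/b-per-bit heuristic commonly quoted in GIMPS discussions (an approximation, only sensible for large b; `chance_of_factor` is a name made up for this sketch):

```python
def chance_of_factor(low_bits, high_bits):
    """Approximate chance that M_p has a prime factor between
    2^low_bits and 2^high_bits: about 1/b per one-bit window,
    summed over the windows (a heuristic, not an exact law)."""
    return sum(1.0 / b for b in range(low_bits + 1, high_bits + 1))

# One extra bit of TF past 76 buys only about a 1.3% chance of a factor:
print(f"{chance_of_factor(76, 77):.4f}")   # -> 0.0130

# Even four extra bits buy only ~5%, yet cost 15x all the TF work done
# so far (each bit level costs as much as every level before it combined):
print(round(chance_of_factor(76, 80), 3))  # -> 0.051
```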

As further incentive for pursuing this bold undertaking:
after TF and P-1, the chance of it being prime is ~1 in 3 million. :smile: David |

[QUOTE=Mini-Geek;244195]If your goal is to finish the number in the shortest amount of CPU time (in other words, most efficiently for the systems you have), you should only run the Prime95 default settings, regardless of how many systems are available.[/QUOTE]
Moreover, you should not parallelise any of the factorisation methods with each other, or with the LL, so basically you are talking about using only one machine at a time. (It may make sense, however, to do P-1 stage 2 on a slower machine with more memory.)

[QUOTE]If your goal is to finish the number in the shortest calendar time, you should run as much TF, P-1, and maybe even ECM as you can on as many systems as are available, while doing the LL test with as many cores as possible on the fastest hardware available.[/QUOTE]

In the extreme, you could continue factorisation efforts throughout the entire duration of the LL, though the returns from doing so would diminish rapidly. Specifically, you would have an ever-decreasing probability of finding a factor, which would allow you to abort only an ever-shrinking tail end of the LL. A good compromise might be:

1. Start the LL immediately, using all cores of your fastest machine.
2. In parallel, use a CUDA-compatible GPU to TF several bits deeper than normal. If you don't have one, there may be people willing to do this for you.
3. Again in parallel, do P-1 to "normal" limits, doing stage 2 on a machine with as much memory as possible. 500MB is recommended as a "good" amount for normal-sized exponents, so probably about 4GB would be good for an exponent this size, but I don't really know. If the LL machine has significantly more memory than any other, and no other has 4GB, then consider temporarily shifting the LL work from that machine to another.

I'm not sure how worthwhile ECM would be. In general, it's more expensive than P-1 and less effective, as it can't take advantage of the 2kp+1 form of Mersenne factors. Like P-1, it wants gobs of memory for stage 2, though I don't know how much compared with P-1. |
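To make the P-1 discussion concrete, here is what stage 1 actually does, shrunk to toy scale (a Python sketch with a deliberately tiny bound; real B1 values for an exponent this size are vastly larger, and `pm1_stage1` is a made-up name):

```python
from math import gcd

def pm1_stage1(p, B1, base=3):
    """P-1 stage 1 for N = 2^p - 1.

    If some factor q has q - 1 composed only of primes <= B1
    (the guaranteed factor 2p comes for free, since q = 2kp + 1),
    then base^E == 1 (mod q) and the gcd exposes q."""
    N = (1 << p) - 1
    E = 2 * p                      # q - 1 is always divisible by 2p
    for q in range(2, B1 + 1):
        if all(q % d for d in range(2, int(q ** 0.5) + 1)):  # q prime
            pe = q
            while pe * q <= B1:    # highest power of q not above B1
                pe *= q
            E *= pe
    x = pow(base, E, N)
    return gcd(x - 1, N)

# M29 = 233 * 1103 * 2089, and 233 - 1 = 2^3 * 29: everything except
# the free factor 29 is 8-smooth, so even B1 = 8 finds it:
print(pm1_stage1(29, 8))   # 233
```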

I really appreciate all of your responses. I almost made a new avatar with my cheeks red with embarrassment over my "how many bits long" question. :redface:

Well, I think I am coming to the conclusion that dedicating an entire system to this 100MD LL assignment for at least a year would not be a good use of CPU cycles. I am going to finish the P-1 factoring (completing 1.5% a day with only one core; going to bump that up to two cores today) and then I will unassign the LL test. Thanks for the insight regarding GPU factorization; I've been following that thread closely on PrimeGrid; I just need to get my hands on an NVIDIA graphics card.

Neo |

[QUOTE=Neo;244340]Well, I think I am coming to the conclusion that dedicating an entire system to this 100MD LL assignment for at least a year would not be a good use of CPU cycles. I am going to finish the P-1 factoring (completing 1.5% a day with only one core; going to bump that up to two cores today) and then I will unassign the LL test.[/QUOTE]
If your goal is to make the best use of CPU cycles, then I suggest you consider leaving the P-1 on just one core. The conventional wisdom, which I've no doubt is correct, is that giving your cores independent assignments is more efficient overall. Sure, you won't be done with this assignment for a longer time, but it's not as if anyone is waiting for it.

Also, if your goal is to make the biggest contribution you can to GIMPS progress with your available hardware, then consider devoting at least one core to P-1 on each machine that you can grant enough memory to do a good stage 2 (about 500MB). With all worktypes, if you don't do it, someone else will. But with P-1, the likelihood is that the someone else won't do it nearly as well as you. Roughly half of all exponents never get a P-1 stage 2, and many that do get one run on machines that can devote relatively little memory to the task.

I have other suggestions, but I won't bombard you with them, especially as I don't know whether this kind of optimisation interests you. |
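For a sense of what stage 2 adds on top of stage 1, here is a deliberately naive sketch (tiny toy bounds, made-up name `pm1_stage2`; real implementations batch the per-prime work through large precomputed tables rather than looping one prime at a time, which is where the stage 2 memory appetite comes from):

```python
from math import gcd

def pm1_stage2(p, x, B1, B2):
    """Naive P-1 stage 2 for N = 2^p - 1.

    Stage 1 left us x = base^E (mod N), with E built from primes <= B1.
    Stage 2 catches factors q where q - 1 is B1-smooth apart from ONE
    leftover prime s with B1 < s <= B2, by testing x^s for every such s
    and folding all the results into a single gcd."""
    N = (1 << p) - 1
    acc = 1
    for s in range(B1 + 1, B2 + 1):
        if all(s % d for d in range(2, int(s ** 0.5) + 1)):  # s prime
            acc = acc * (pow(x, s, N) - 1) % N
    return gcd(acc, N)

# M29 = 233 * 1103 * 2089, and 1103 - 1 = 2 * 29 * 19.  With a
# deliberately tiny B1 = 2, stage 1 (x = 3^(2*29) mod N) finds
# nothing, but stage 2 with B2 = 20 sweeps up the leftover prime 19:
N = (1 << 29) - 1
x = pow(3, 2 * 29, N)
print(pm1_stage2(29, x, 2, 20))   # 1103
```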
