Old 2011-01-01, 03:11   #2
"Tim Sorbera"
Aug 2006
San Antonio, TX USA

1000010101011₂ Posts

Originally Posted by Neo
1) Technically speaking, how many bits is M332208529? (i.e., how far can I trial factor this despite the fact that Prime95 has determined that 76 bits is far enough?);
M332208529 is 332208529 bits long: Mersenne numbers are defined as 2^p-1, which in binary is a string of p ones, so it is always p bits long (just as 10^n-1 is always n digits long). In principle, trial factoring could have to run all the way to the square root of the number, which is about 2^(332208529/2), i.e. 166104264.5 bits. Each successive bit level takes about twice as long as the one before it, or in other words about as long as all lower bit levels put together, so I hope you can see that TF to the square root is not possible on a universe time-scale, let alone practical. TF is only useful near the beginning, and only it and P-1 are useful as prefactoring before primality tests. Other methods like P+1, ECM, and NFS come in if you intend to fully factor a number, but fully factoring is FAR harder than testing for primality (e.g. the smallest not-fully-factored Mersenne number is M929, while the smallest not-primality-tested number is M37591483).
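The arithmetic above can be sketched in a few lines of Python (illustrative values only; this is not Prime95 code, and the cost model is the rough "each bit level doubles" rule of thumb from the answer):

```python
# Rough arithmetic behind the answer above.
p = 332208529  # exponent of M332208529

# 2^p - 1 in binary is p ones, so it is exactly p bits long.
# Sanity check with a small Mersenne number, M13 = 8191:
assert (2**13 - 1).bit_length() == 13

# Trial factoring "to the square root" would mean testing candidate
# factors up to sqrt(2^p - 1) ~ 2^(p/2), i.e. about p/2 bits:
tf_limit_bits = p / 2  # 166104264.5

# Each extra bit level costs roughly as much as all previous levels
# combined, so going from 76 bits to b bits costs about 2^(b - 76)
# times the cost of the 76-bit level alone:
def relative_tf_cost(b, done=76):
    return 2 ** (b - done)

print(tf_limit_bits)          # 166104264.5
print(relative_tf_cost(78))   # 4: two more bit levels ~ 4x one level
```

Even one more bit level doubles the remaining work, which is why TF past the Prime95 default is only worth it on hardware (like a GPU) where TF is disproportionately cheap.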
Originally Posted by Neo
2) I have multiple systems available for running Prime95;

a) On a different system, should I trial factor this assignment manually to 77 or 78 bits in an effort to better increase the odds that I am not wasting A LOT of time on the primary LL assignment?

b) On a different system, and after the P-1 test is complete and I get the B1 bound result, should I run some ECM curve tests at the same time? If so, how many curves?
If your goal is to finish the number in the least total CPU time (in other words, most efficiently for the systems you have), you should run only the Prime95 default settings, regardless of how many systems are available.
If your goal is to finish the number in the shortest calendar time, you should run as much TF, P-1, and maybe even ECM as you can on as many systems as are available, while doing the LL test with as many cores as possible on the fastest hardware available.
I recommend the former (most efficient use of CPU time), and all the usual assumptions are based on it, or at least on something much closer to it than the latter. Your other systems' time would be much better spent working on numbers other than this one (whether 100M digits or something else, GIMPS or not). But if it's this or nothing, this is better than nothing.

This can change in favor of doing a few more bits of TF if you have a fast CUDA-capable GPU (which requires that it be NVidia) available to run TF on, whether that's on the LLing system or another. See (look near the end of the thread for the latest code) for a CUDA-based factoring app for Mersenne numbers.

You should be aware that an LL test on a 100 million digit number will take a very long time (look at Test > Status or Advanced > Time for an estimate on your computer), and that there's a fairly high chance of an error at some point during it, which would invalidate the result. To minimize this risk, enable both error-checking options in the Advanced menu, although this will slow the test down by a few percent. If possible, it'd be best to run the test on a system with ECC (error-correcting, usually only found in servers) memory, but I doubt that's an option. If the machine is multi-core, you can run the test on more than one thread to speed it up (see Test > Worker Windows > CPUs to use). This isn't perfectly efficient, e.g. it won't be quite 4x as fast on all four cores of a quad-core, but it could reduce the test time from well over a year to under a year.
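For reference, the LL test itself is mathematically very simple; the entire difficulty at 100M-digit sizes is that each step squares a p-bit number, which Prime95 does with FFT-based multiplication. A minimal sketch, workable only for tiny exponents:

```python
# Minimal Lucas-Lehmer test, for illustration only. Prime95's real work
# is making the (s*s) mod (2^p - 1) step fast at ~332-million-bit sizes.
def lucas_lehmer(p):
    """Return True if 2^p - 1 is prime, for an odd prime exponent p."""
    m = 2**p - 1
    s = 4
    for _ in range(p - 2):
        s = (s * s - 2) % m
    return s == 0

print(lucas_lehmer(13))  # M13 = 8191 is prime -> True
print(lucas_lehmer(11))  # M11 = 2047 = 23 * 89 -> False
```

The loop runs p-2 times, so a 100M-digit exponent means hundreds of millions of huge squarings, which is why the error-checking options above matter: a single bad squaring anywhere invalidates the final residue.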

Last fiddled with by Mini-Geek on 2011-01-01 at 03:29