Fewer GHz days for larger exponents in TF?
Hi,
can someone please point me to how exactly the GHz days are calculated? I have made a (to me) very strange observation:

Manual testing | 333001871 | NF | 2010-11-11 13:17 | 0.0 | no factor from 2^66 to 2^67 | 0.0449
Manual testing | 87643753 | NF | 2010-10-29 14:15 | 0.1 | no factor from 2^66 to 2^67 | 0.1705

This means that for the same range (2^66 to 2^67), an exponent that is 4 times as large only gets you a quarter of the score? It's the same machine, maybe at a different clock speed ... :confused: Thanks for any clarification ... |
Factors of a Mersenne number 2[sup]M[/sup]-1 are of the form 2*k*M+1. So when M is bigger there are fewer possible factors at a given bit level.
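A small sketch of the point above: counting candidates of the form 2*k*M+1 in one bit level shows the count shrinking in proportion to M. (This is only an illustration; real TF software additionally sieves out candidates that can't be factors, e.g. by residue classes mod 8 and small primes.)

```python
# Sketch (not GIMPS's actual code): enumerate trial-factor candidates
# q = 2*k*M + 1 for the Mersenne number 2^M - 1 within one bit level.
def candidates(M, bit_lo, bit_hi):
    """Yield every q = 2*k*M + 1 with 2^bit_lo < q <= 2^bit_hi."""
    k = (1 << bit_lo) // (2 * M) + 1  # first k with 2*k*M + 1 > 2^bit_lo
    while True:
        q = 2 * k * M + 1
        if q > (1 << bit_hi):
            break
        yield q
        k += 1

# With M about four times larger, the number of candidates in the
# same bit range is about four times smaller (bits 40-41 used here
# instead of 66-67 so the demonstration runs instantly):
n_small = sum(1 for _ in candidates(87643753, 40, 41))
n_large = sum(1 for _ in candidates(333001871, 40, 41))
```

The candidate count is essentially (range length)/(2*M), which is why quadrupling M quarters the work at a fixed bit level.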
The formula for factoring credit is something like:

Constant * 2.4 * 1860 * (2 ^ ("HowFarFactored" - 48) - 2 ^ ("HowFarPreviouslyFactored" - 48)) / M

You can see that with M four times bigger, the credit is four times smaller. See the Maths pages on the GIMPS site and the Mersenne Wiki. Jacob |
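The formula Jacob quotes can be sketched in a few lines. The leading constant below is an assumption: it was chosen so the sketch roughly reproduces the two credits from the first post, and is not an official PrimeNet value.

```python
# Hedged sketch of the TF credit formula quoted above. CONSTANT is a
# placeholder fitted to the thread's two data points, not the official
# server-side value.
CONSTANT = 0.0128  # assumed scaling factor

def tf_ghz_days(M, bits_done, bits_prev, constant=CONSTANT):
    """Approximate GHz-days credit for trial factoring exponent M
    from 2^bits_prev up to 2^bits_done."""
    work = 2 ** (bits_done - 48) - 2 ** (bits_prev - 48)
    return constant * 2.4 * 1860 * work / M

# Same bit range (66-67), exponent ~4x larger => ~1/4 the credit:
large_exp = tf_ghz_days(333001871, 67, 66)  # ~0.045 GHz-days
small_exp = tf_ghz_days(87643753, 67, 66)   # ~0.171 GHz-days
```

Because the bit range is identical, the credit ratio is exactly the inverse ratio of the exponents, which matches the 0.0449 vs. 0.1705 observation that started the thread.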
Adding to Jacob's excellent explanation, also note that the test of the larger exponent took roughly a quarter of the time to complete, due to the fact he pointed out. That means the credit per unit of work actually done is the same in both cases.
|
Those systems that I have set to factoring only are taking assignments in the 70 to 80 million range at depths of 2^66 to 2^70. Have we already reached some kind of frontier there? To factor so far ahead, is it not like prospecting for gold far out in the desert all by oneself?
|
[QUOTE=imwithid;236706]Those systems that I have set to factoring only are taking assignments in the 70 to 80 million range at depths of 2^66 to 2^70. Have we already reached some kind of frontier there? To factor so far ahead, is it not like prospecting for gold far out in the desert all by oneself?[/QUOTE]Some people like to do that sort of thing. :-)
See [URL]http://mersenneforum.org/forumdisplay.php?f=46[/URL] (But don't take "Lone Mersenne Hunters" too literally; they're mostly actually factor-hunting.) |
[QUOTE=S485122;236700]Factors of a Mersenne number 2[sup]M[/sup]-1 are of the form 2*k*M+1. So when M is bigger there are fewer possible factors at a given bit level.
[/QUOTE] And the exponent in binary (~30 bits) is only two bits longer. David |
Thanks a lot for the answers ... I've read that the factors are of the form 2*k*M+1, but I didn't take the next logical step of realizing that this is used in the TF algorithm.
Regarding the 333M exponent: that was just out of curiosity, to see how much more effort it would take to find a 100 million digit prime compared to today's tests. And that range is worked on quite a bit, so I was not the only one ;-) I came to Prime95 because of a performance issue with my laptop (now solved through ThrottleStop). For the tests I randomly entered a few numbers, "87643753" among them, and only later joined PrimeNet and started understanding the math and algorithms part. And in the meantime I found a factor for my random number, so no prime :-/ |
[QUOTE=Bdot;237124]Thanks a lot for the answers ... I've read that the factors are of the form 2*k*M+1, but I didn't take the next logical step of realizing that this is used in the TF algorithm.
Regarding the 333M exponent: That was just out of curiosity, how much more effort it would be to find a 100 Million digit prime, compared to today's tests. [/QUOTE] No one knows until one is found. We can say what happens [i]on average[/i] if certain conjectures (e.g. Wagstaff's) about the distribution of Mersenne primes are correct. When we find a Mersenne prime, M_p, in time t, it will require time 8t (approximately) to find the next larger one. Thus, each successive one is approximately 8 times harder than the previous one. |
[QUOTE=R.D. Silverman;237205]No one knows until one is found. We can say what happens [I]on average[/I]
if certain conjectures (e.g. Wagstaff's) about the distribution of Mersenne primes are correct. When we find a Mersenne prime, M_p, in time t, it will require time 8t (approximately) to find the next larger one. Thus, each successive one is approximately 8 times harder than the previous one.[/QUOTE] I think Bob was thinking of 2[sup]3[/sup] = 8. The effort required goes as ~ exponent[sup]3[/sup], but IIRC 1.47[sup]3[/sup] would be nearer the mark than 2[sup]3[/sup]. Either way, there is a fighting chance we won't find [B]any[/B] new Mersenne primes before 2025, let alone a 100M digit one. David In short, GIMPS has plundered the low-hanging fruit. |
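Putting numbers on the two estimates above: under the Wagstaff heuristic, successive Mersenne-prime exponents grow on average by a factor of 2^(1/e^gamma) ≈ 1.476, and if testing effort scales roughly as the cube of the exponent (as the posts above assume), each new prime costs about 1.476^3 ≈ 3.2 times the previous one rather than 8 times. A quick check of that arithmetic:

```python
import math

# Wagstaff heuristic: expected ratio between successive Mersenne-prime
# exponents is 2^(1/e^gamma), gamma being the Euler-Mascheroni constant.
GAMMA = 0.5772156649015329
ratio = 2 ** (1 / math.exp(GAMMA))  # expected exponent growth, ~1.476

# Assuming effort ~ exponent^3 (per the thread), the cost multiplier
# from one Mersenne prime to the next:
effort_multiplier = ratio ** 3  # ~3.2, versus the 8x quoted earlier
```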
[QUOTE=Bdot;237124]Regarding the 333M exponent: That was just out of curiosity, how much more effort it would be to find a 100 Million digit prime, compared to today's tests.[/QUOTE] My wording was inaccurate, sorry ... I meant "how much more effort it would be to [B]test for[/B] a 100 Million digit prime". Now I know that LL-testing an exponent in that range takes about 10 years on my hardware, or ~3 years when throwing all 4 cores at it. [QUOTE=davieddy;236756]And the exponent in binary (~30 bits) is only two bits longer. David[/QUOTE] Does that mean the effort of a single trial division (or FFT) correlates with the size of the exponent rather than the size of the Mersenne number itself (which would be 4 times the size instead of just 2 bits more)? Otherwise I'm not getting the point of this remark ... |
[QUOTE=Bdot;237441]Does that mean the effort of a single trial division correlates with the size of the exponent rather than the size of the Mersenne number itself ...?[/QUOTE]
Yes, see [url]http://mersenne.org/various/math.php[/url] for the algorithm used to check whether a number divides a Mersenne number. It does one step for each bit of the exponent, so the speed of each trial division is mainly related to the bit length of the exponent. [QUOTE=Bdot;237441](or FFT)[/QUOTE] The time one FFT iteration takes depends mainly on the FFT length used, and larger exponents require larger FFT lengths for their LL tests. I don't know the exact mathematical relationship, but looking at a [URL="http://www.mersenne.org/report_benchmarks/"]benchmark[/URL], it appears that each time the exponent approximately doubles (slightly less than doubles), the FFT length exactly doubles and the time per iteration approximately doubles (slightly more than doubles). It's not quite that simple, though, because power-of-2 FFT lengths (1024K, 2048K, etc.) run faster for their size than other lengths (1792K, 2560K, etc.). Because of these two doublings, doubling the size of the exponent makes the number about 4 times harder to test. |
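The trial-division check described above can be sketched directly: q divides 2^p - 1 exactly when 2^p ≡ 1 (mod q), and modular exponentiation by repeated squaring takes one step per bit of p. (Python's three-argument pow implements exactly that kind of loop, which is why the cost tracks the exponent's ~30 bits rather than the millions of bits of 2^p - 1.)

```python
# Check whether q divides the Mersenne number 2^p - 1 without ever
# forming 2^p - 1: compute 2^p mod q by square-and-multiply, one step
# per bit of p. q divides 2^p - 1 iff the result is 1.
def divides_mersenne(q, p):
    """Return True if q divides 2^p - 1."""
    return pow(2, p, q) == 1

# Known small example: 2^11 - 1 = 2047 = 23 * 89.
found_23 = divides_mersenne(23, 11)   # True
found_89 = divides_mersenne(89, 11)   # True
found_7 = divides_mersenne(7, 11)     # False: 7 is not a factor
```

Note this sketch tests a single candidate; real TF software streams millions of candidates of the form 2*k*p + 1 through an optimized version of this loop.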