Heuristics
[URL]http://primes.utm.edu/mersenne/heuristic.html[/URL]
Can someone explain why Caldwell "accounts for" all forbidden factors below 2p+1 (or 6p+1) and ignores all higher forbidden factors? David |
[QUOTE=davieddy;216955]Can someone explain why Caldwell "accounts for" all forbidden factors below 2p+1 (or 6p+1) and ignores all higher forbidden factors? David[/QUOTE] It is not "below" vs. "higher". Since a factor must have the form 2kp+1, all other forms are sieved out by that argument. No, wait, they are not: the primes that are left in can still produce non-conforming candidates. Strange indeed. |
[QUOTE=davieddy;216955][URL]http://primes.utm.edu/mersenne/heuristic.html[/URL]
Can someone explain why Caldwell "accounts for" all forbidden factors below 2p+1 (or 6p+1) and ignores all higher forbidden factors? David[/QUOTE] That page is the first place I have read this argument. Yes, it is totally broken, and there are further problems with it. For example, for q=2p+1 the heuristic simply asserts that q divides 2^p-1 with probability 1/q, but that is not true: among the primes p<10^6 there are 3912 for which q=2p+1 is a divisor, whereas the expected number of such divisors under the heuristic would be s=0.0;forprime(p=2,10^6,s+=1/(2*p+1)), which gives s=1.35, and this value is very far from 3912. |
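[These numbers can be reproduced; a sketch in Python rather than PARI/GP, with the bound 10^6, the count 3912, and s = 1.35 taken from the post above:]

```python
# Checking R. Gerbicz's computation: how often does q = 2p+1 actually
# divide 2^p - 1, versus the heuristic's expected count sum(1/q)?

def primes_below(n):
    """Sieve of Eratosthenes: all primes < n."""
    sieve = bytearray([1]) * n
    sieve[0:2] = b"\x00\x00"
    for i in range(2, int(n ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i::i] = bytearray(len(range(i * i, n, i)))
    return [i for i in range(n) if sieve[i]]

count = 0   # primes p for which q = 2p+1 divides 2^p - 1
s = 0.0     # the heuristic's expected number of such p: sum of 1/q
for p in primes_below(10**6):
    q = 2 * p + 1
    s += 1.0 / q
    if pow(2, p, q) == 1:  # true exactly when q | 2^p - 1
        count += 1
print(count, s)  # the post reports count = 3912 and s about 1.35
```

(Note that pow(2, p, q) == 1 suffices: any q = 2p+1 dividing 2^p - 1 is automatically prime, since each of its prime factors would have to be congruent to 1 mod 2p and hence at least q itself.)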
[QUOTE=R. Gerbicz;217490]for q=2p+1 the heuristic simply asserts that q divides 2^p-1 with probability 1/q, but that is not true: among the primes p<10^6 there are 3912 for which q=2p+1 is a divisor, whereas the expected number of such divisors under the heuristic would be s=0.0;forprime(p=2,10^6,s+=1/(2*p+1)), which gives s=1.35, and this value is very far from 3912.[/QUOTE]
I do not really understand this.
1. Of course, summing 1/q over some set of q is not the same as counting those q (i.e. summing 1).
2. You cannot vary p for the relevant estimates; p is fixed as long as 2^p-1 is fixed. You would have to vary the multiplier k in q = 2kp+1.
My guess is that the conjecture fits the numerical evidence, so it cannot be obviously wrong. It is quite likely that just the explanation needs to be fixed, and even more likely that we have not yet fully understood what Caldwell actually wants to tell us. Unfortunately, all the cited papers are behind paywalls. Time to get moving: the first one who obtains, and mostly understands, Wagstaff's paper, please report here.
The first page of Wagstaff's paper is offered as a teaser, however, and in it he announces discussion of a topic that is of independent interest and quite instructive, I hope: Look up Mertens' theorem. Then think about doing trial division of a number x and figure out the probability that it survives all divisibility tests uncracked. Compare with the prime number theorem. Explain the 12.3% difference. (I have an idea about the reason why there is a difference, but I have not verified its quantitative aspects yet.) |
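[The teaser's discrepancy is easy to exhibit numerically: by Mertens' theorem, the product of (1-1/q) over primes q up to sqrt(x) is about 2 e^(-gamma)/ln x, which exceeds the prime number theorem's density 1/ln x by the factor 2 e^(-gamma), roughly 12.3%. A sketch; the choice x = 10^10 is arbitrary:]

```python
import math

def primes_below(n):
    """Sieve of Eratosthenes: all primes < n."""
    sieve = bytearray([1]) * n
    sieve[0:2] = b"\x00\x00"
    for i in range(2, int(n ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i::i] = bytearray(len(range(i * i, n, i)))
    return [i for i in range(n) if sieve[i]]

GAMMA = 0.5772156649015329  # Euler-Mascheroni constant

x = 10**10
survive = 1.0  # naive probability of x surviving trial division up to sqrt(x)
for q in primes_below(math.isqrt(x) + 1):
    survive *= 1.0 - 1.0 / q

ratio = survive * math.log(x)  # compare against the PNT density 1/ln(x)
print(ratio, 2 * math.exp(-GAMMA))  # both come out near 1.12
```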
[QUOTE=ccorn;217522]
Look up Mertens' theorem. Then think about doing trial division of a number x and figure out the probability that it survives all divisibility tests uncracked. Compare with the prime number theorem. Explain the 12.3% difference. (I have an idea about the reason why there is a difference, but I have not verified its quantitative aspects yet.)[/QUOTE] I have not looked at the source for this discussion, but the last question is easy. Mertens' theorem gives the correct probability that an integer x survives all primes, but ONLY for primes less than log(x). The reason is that the product of all primes less than k is about e^k. Once that product exceeds x, the primes can no longer ALL divide x. Thus the probability that a given prime divides x is no longer INDEPENDENT of the probability that other primes divide x: they cannot all divide x, and they start to "get in each other's way". |
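[The step "the product of all primes less than k is about e^k" is Chebyshev's theta function, theta(k) = sum of ln p over primes p < k, which is asymptotically k. A quick check; the bounds are chosen arbitrarily:]

```python
import math

def primes_below(n):
    """Sieve of Eratosthenes: all primes < n."""
    sieve = bytearray([1]) * n
    sieve[0:2] = b"\x00\x00"
    for i in range(2, int(n ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i::i] = bytearray(len(range(i * i, n, i)))
    return [i for i in range(n) if sieve[i]]

ratios = []
for k in (10**4, 10**5, 10**6):
    theta = sum(math.log(p) for p in primes_below(k))  # ln of the primorial
    ratios.append(theta / k)
print(ratios)  # the ratios approach 1, so the product of primes < k is about e^k
```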
[QUOTE=R.D. Silverman;217524]I have not looked at the source for this discussion, but the last question is easy. Mertens' theorem gives the correct probability that an integer x survives all primes, but ONLY for primes less than log(x). The reason is that the product of all primes less than k is about e^k. Once that product exceeds x, the primes can no longer ALL divide x. Thus the probability that a given prime divides x is no longer INDEPENDENT of the probability that other primes divide x: they cannot all divide x, and they start to "get in each other's way".[/QUOTE] Of course you would know :smile: (And yes, I thought along those lines. But I am astonished that the relative difference is then so small.)
Caldwell's argument that is discussed here:
1. The Mersenne number M_p (with p prime) is known to be divisible only by numbers of the form 2kp+1, so we cannot assume that the probability of M_p being prime is about 1/log(M_p). Good.
2. Instead we would have to multiply the above "raw" estimate by 1/(1-1/q) for all primes q below 2p+1. Maybe.
3. Why? Caldwell says: because the primes below 2p+1 cannot divide M_p. That is where davieddy has put a question mark: there are many more primes above 2p+1 that do not fulfill the required congruence mod 2p, hence cannot divide M_p either, yet those are left in. So that is not the correct reason.
Caldwell also states that Wagstaff argues differently. Now the interest turns to the Wagstaff paper, of course. |
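[Step 2, read literally, can be put into code. A sketch; the function name and the sample p = 127 are mine, and this is the reading being questioned here, not a validated formula:]

```python
import math

def primes_below(n):
    """Sieve of Eratosthenes: all primes < n."""
    sieve = bytearray([1]) * n
    sieve[0:2] = b"\x00\x00"
    for i in range(2, int(n ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i::i] = bytearray(len(range(i * i, n, i)))
    return [i for i in range(n) if sieve[i]]

def caldwell_estimate(p):
    """Caldwell's step 2 as literally stated: the 'raw' chance 1/ln(M_p)
    of M_p = 2^p - 1 being prime, boosted by 1/(1 - 1/q) for every
    prime q below 2p + 1."""
    raw = 1.0 / (p * math.log(2))   # since ln(2^p - 1) is about p ln 2
    boost = 1.0
    for q in primes_below(2 * p + 1):
        boost /= 1.0 - 1.0 / q      # by Mertens, boost ~ e^gamma * ln(2p+1)
    return raw * boost

print(caldwell_estimate(127))  # roughly e^gamma * ln(2p+1) / (p ln 2)
```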
FYI, the first page of Wagstaff's article (which is the only page offered online) contains the note:
[QUOTE]Lenstra has objected that one is not entitled to take [the upper bound of the range of interest for the prime divisors] as large as sqrt(M_p) because similar reasoning leads to a contradiction with the prime number theorem. We discuss Lenstra's objection.[/QUOTE] From that I have guessed that it is about the above discrepancy between naive application of Mertens' formula and the prime number theorem, but with a congruence requirement on the primes enforced at both sides. I am curious whether I have guessed right. |
Wagstaff's complete paper is available online [url=http://www.ams.org/journals/mcom/1983-40-161/S0025-5718-1983-0679454-X/S0025-5718-1983-0679454-X.pdf]here[/url].
And yes, Lenstra's complaint is as expected; see page 387. The ensuing "discussion" gets irritating: instead of changing the factor in Mertens' formula to account for the dependence, the author changes the exponent used for the upper limit of his mental trial division. That is, he considers trial division up to M_p^(exp(-EulerGamma)) instead of sqrt(M_p). The result fits the evidence, but the reasoning is even more disappointing to me than what we have seen before. At first glance, though, the reasoning about which range to use for the product of the (1-1/q) seems solid. Going to take a closer look.
Edit: In particular, Wagstaff does the trial division by stepping k and using q=2kp+1 as the only candidates. That is, he does not use other primes for q, in contrast to Caldwell's exposition.
Edit: The above is from page 387, where q is no longer explicit in the equations (it has been substituted in favor of k). Prior to that, Wagstaff manipulates some terms containing q, for all q from a set of small odd primes. That may have been the scheme that Caldwell tried to simplify and re-interpret for presentation purposes. However, that part of Wagstaff's paper seems robust. My impression is that Chris Caldwell's explanation started as an attempt to justify the switch to EulerGamma better than Wagstaff did, and that while attempting this, Caldwell broke Wagstaff's correct q-selection scheme. Davieddy, you have uncovered a mess. |
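[Wagstaff's candidate selection, stepping k in q = 2kp+1 rather than running over all primes q, amounts to the familiar Mersenne trial-factoring loop. A sketch; the function name is mine, and the small known cases serve as checks:]

```python
def smallest_mersenne_factor(p, kmax=10**6):
    """Trial-divide M_p = 2^p - 1 using only the candidates q = 2kp + 1,
    k = 1, 2, 3, ... (Wagstaff's scheme: no other q need be tried)."""
    m = 2**p - 1
    for k in range(1, kmax + 1):
        q = 2 * k * p + 1
        if q * q > m:
            return None          # no factor below sqrt(M_p): M_p is prime
        if pow(2, p, q) == 1:    # q divides M_p
            return q
    return None                  # inconclusive within kmax

print(smallest_mersenne_factor(11))  # 23  = 2*1*11 + 1
print(smallest_mersenne_factor(13))  # None (M_13 = 8191 is prime)
print(smallest_mersenne_factor(29))  # 233 = 2*4*29 + 1
```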
[QUOTE=ccorn;217538]Instead of changing the factor in Mertens' formula to account for the dependence, the author changes the exponent used for the upper limit of his mental trial division. That is, he considers trial division up to M_p^(exp(-EulerGamma)) instead of sqrt(M_p). The result fits the evidence, but the reasoning is even more disappointing to me than what we have seen before.[/QUOTE]
Wagstaff cites Pólya and Hardy & Wright for that step. Will have to look into those. Caldwell also cites Pomerance and Ribenboim. Add Lenstra. There should be reasonable papers on the distribution of Mersenne number divisors on this planet; they just don't win the internet's visibility contest. :sad: |
Many, many thanks for recognizing that I had a real problem here! As a number theorist, I make a bloody good physicist. Or should that be the other way round? David |
[QUOTE=ccorn;217538]And yes, Lenstra's complaint is as expected; see page 387. The ensuing "discussion" gets irritating: instead of changing the factor in Mertens' formula to account for the dependence, the author changes the exponent used for the upper limit of his mental trial division. That is, he considers trial division up to M_p^(exp(-EulerGamma)) instead of sqrt(M_p). The result fits the evidence, but the reasoning is even more disappointing to me than what we have seen before.[/QUOTE]
I have calmed down and read Wagstaff's derivation of the asymptotics in full. Actually, he has done it sincerely. In equation (4) of the Wagstaff paper, there is a product of (1-g(k)) over a range of k to estimate the probability that M_p is prime. (I have abbreviated the expression as g(k); do not expect to find the symbol g in Wagstaff's paper.) The range of k is such that q = 2kp+1 (more generally, q = akp+1 with a being 2 or 6 depending on p) covers all possible prime divisors up to sqrt(M_p).
Now Wagstaff brings up Lenstra's objection, and he actually appreciates it. As RDS has pointed out, divisibility by a candidate prime is not completely independent of divisibility by other primes. As a result, the product over the (1-g(k)) needs to be corrected. How can one do that? In the original Mertens-vs-PNT case, where the probability of a large integer x being prime is calculated, one can multiply the product over (1-1/q) by a factor to get the correct right-hand side. Formally, the factor is equivalent to a product of (1-1/q) where q runs over primes from sqrt(x) up to x^(exp(-EulerGamma)). This seems weird, of course (no one would want to do trial division beyond sqrt(x)), but for a range of q that large, you simply cannot apply Mertens' formula without adaptation.
Wagstaff now deals with the more complicated factors 1-g(k) instead of 1-1/q. His set of q values is much less dense than in the Mertens-vs-PNT case, but the problem is the same: divisibility by any q is not completely independent of divisibility by any other candidate q. A correction is needed. Wagstaff cannot simply reuse the numerical value of the correction factor from the simple Mertens-vs-PNT case, because his set of q is different and his factors have a different form. But he can (hopefully) use the same approach: build a correction factor by extending the product up to q values of the order of M_p^(exp(-EulerGamma)).
Yes, that looks weird again, but without such a correction, the divisibility dependencies would not have been taken into account. All in all, I can recommend Wagstaff's paper. |
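[In the simple Mertens-vs-PNT case the claimed correction can be verified numerically: extending the product of (1-1/q) past sqrt(x), all the way to x^(exp(-EulerGamma)), turns Mertens' 2 e^(-gamma)/ln x into the PNT's 1/ln x. A sketch; x = 10^8 is chosen here just for speed:]

```python
import math

def primes_below(n):
    """Sieve of Eratosthenes: all primes < n."""
    sieve = bytearray([1]) * n
    sieve[0:2] = b"\x00\x00"
    for i in range(2, int(n ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i::i] = bytearray(len(range(i * i, n, i)))
    return [i for i in range(n) if sieve[i]]

GAMMA = 0.5772156649015329  # Euler-Mascheroni constant

x = 10**8
bound = int(x ** math.exp(-GAMMA))  # x^0.5615..., a bit beyond sqrt(x)

prod = 1.0
for q in primes_below(bound + 1):
    prod *= 1.0 - 1.0 / q

print(prod, 1.0 / math.log(x))  # nearly equal, as the correction predicts
```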