Lucas-Lehmer double-check and probability
This might be a trivial problem. But questions in probability theory can be notoriously tricky...
Sorry if the question has been discussed before, but I have not been able to find it on this forum. I have just been allocated a double-check by the PrimeNet server. At the point when I saw the Mersenne number M as it was given to me, I knew that it had been tested composite with the Lucas-Lehmer algorithm before. With that information I could have assigned a probability p that M is in fact prime.

I then looked M up in the database of LL results on the PrimeNet server. I saw that M has in fact been tested [B]twice[/B] before, the final residue of the second test disagreeing with that of the first (both tests, obviously, still giving M as composite). Now that I have this extra information, is the probability that M is in fact prime any different?

Yes, I know M is either prime or it isn't, and you shouldn't talk about the probability that a number is prime. But I hope people understand what I am talking about - and feel free to correct my language to make the problem more precise. :smile:
Okay, trying my best to avoid being pedantic about the "probability that Mp is prime" thing:
[spoiler]I would say that the probability is different, and is lower. If you had only one previous LL test then there is some (unknown) chance that the residue is correct, and thus a reciprocally related chance that your Mp is prime. Compare that to having two previous LL tests: there is a greater chance (than with one LL test) that at least one of the LL residues is correct, and thus a correspondingly lower chance that your Mp is prime.[/spoiler]
[quote=retina;153276]...
[spoiler]Compare that to having two previous LL tests. There is a greater chance (than with one LL test) that at least one of the LL residues is correct[/spoiler]...[/quote] You may well be right. But could you perhaps motivate this statement more clearly for numbskulls like me to understand it better considering that [spoiler]we know that at least one of the tests is wrong.[/spoiler]?
But, even if we know that one is bad, the probability that at least one of the two is good is even better.
|
Of course you know more. Bayes' Theorem tells you just how much more, but I find it more intuitive to convert to frequencies and then back to a probability. Let's assume that one in a million candidates is prime and one in a hundred tests is wrong.

When you know there has been one previous test, [spoiler]from 100 million candidates tested once, there would be 99,999,900 composites and 100 primes. The results fall into 4 categories:[/spoiler]
[code][spoiler]
98,999,901 composites with a good test
   999,999 composites with a bad test
        99 primes with a good test
         1 prime with a bad test
[/spoiler][/code]
[spoiler]You know your candidate isn't among the 99, because the residue would have been zero. So the probability this candidate is prime is 1/99,999,901.[/spoiler]

To include the information from the second test, we consider a larger thought-experiment with 100 times as many candidates. I'm going to ignore the probability that the two bad tests could have produced the same residue; it's easy to include, but it would have negligible impact. [spoiler]This is 10,000,000,000 candidates in 8 possible categories:[/spoiler]
[code][spoiler]
9,800,990,199 composites with good first and good second test
   98,999,901 composites with good first and bad second test
   98,999,901 composites with bad first and good second test
      999,999 composites with bad first and bad second test
        9,801 primes with good first and good second test
           99 primes with good first and bad second test
           99 primes with bad first and good second test
            1 prime with bad first and bad second test
[/spoiler][/code]
[spoiler]You know your candidate isn't among the 9,800,990,199, because the residues would have matched. You know it isn't among the 9,801, nor either group of 99, because at least one residue would have been zero. Hence the probability your number is prime is 1/198,999,802 - about half the probability after one test.

How do we explain this? After the first test there is a pool of really-prime candidates and a pool of really-composite candidates. After the second test, 1% of the really-prime pool remains, but a really-composite candidate remains if EITHER of its tests is wrong - about 2%. So the total pool shrank by a factor of about 50 while the prime pool shrank by a factor of 100, resulting in this 2:1 ratio.[/spoiler]
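The frequency argument above is easy to sanity-check in code. This is a minimal sketch (not from the original posts), using the same assumed rates - one in a million candidates prime, one in a hundred tests wrong - and exact rational arithmetic so the counts match the tables:

```python
from fractions import Fraction

# Assumed rates from the post: 1 in a million candidates is prime,
# 1 in a hundred LL tests returns a wrong residue.
P_PRIME = Fraction(1, 10**6)
P_BAD = Fraction(1, 100)

# --- One prior test showing "composite" (non-zero residue) ---
# A candidate is still in play if it is composite (any non-zero residue
# is consistent), or if it is prime but the test was bad.
surviving_one = (1 - P_PRIME) + P_PRIME * P_BAD
prime_one = P_PRIME * P_BAD
p_after_one = prime_one / surviving_one

# --- Two prior tests with non-matching non-zero residues ---
# A composite survives if at least one test was bad (two good tests
# would have produced matching residues); a prime survives only if
# both tests were bad.  Like the post, we ignore the tiny chance that
# two bad tests happen to produce the same residue.
surviving_two = (1 - P_PRIME) * (1 - (1 - P_BAD) ** 2) + P_PRIME * P_BAD ** 2
prime_two = P_PRIME * P_BAD ** 2
p_after_two = prime_two / surviving_two

print(p_after_one)                # 1/99999901, matching the first table
print(p_after_two)                # 1/198999802, matching the second table
print(p_after_one / p_after_two)  # roughly 2: the mismatched second test
                                  # about halves the chance of primality
```

The exact ratio is 198,999,802/99,999,901, slightly under 2, which is the "about half" in the post.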
[QUOTE=Brian-E;153345]You may well be right. But could you perhaps motivate this statement more clearly for numbskulls like me to understand it better considering that [spoiler]we know that at least one of the tests is wrong.[/spoiler]?[/QUOTE]Okay, yes my post was a bit rushed and I assumed a few things that should probably be properly explained.
[spoiler]If we say that the chance of any particular test giving a bad residue is q (0 < q < 1), then:

For a single LL test we have chance of a good residue: 1 - q

For two LL tests with different results we get the following possibilities:
Chance that both residues are bad: q[sup]2[/sup]
Chance that at least one is good: 1 - (both bad) ---> 1 - q[sup]2[/sup]

So we just compare (1 - q) <---> (1 - q[sup]2[/sup]). The latter is larger, hence the chance that at least one residue is correct is higher from the two LL tests.[/spoiler]
[QUOTE=retina;153541][spoiler]If we say that the chance of any particular test giving a bad residue is q (0<q<1), then:

For a single LL test we have chance of a good residue: 1 - q

For two LL tests with different results we get the following possibilities:
Chance that both residues are bad: q[sup]2[/sup]
Chance that at least one is good: 1 - (both bad) ---> 1 - q[sup]2[/sup]

So we just compare (1 - q) <---> (1 - q[sup]2[/sup]). The latter is larger, hence the chance that at least one residue is correct is higher from the two LL tests.[/spoiler][/QUOTE]
This is incorrect. The relevant probability is not [spoiler]"Chance that at least one is good"[/spoiler], but [spoiler]"Chance that exactly one is good". It is not possible for both to be good: the residues differ, and there is only one correct residue.

[i]A priori[/i] chance that both residues are bad: q[sup]2[/sup]
[i]A priori[/i] chance that exactly one is good: 2q(1 - q)

These probabilities don't add to unity, so they need to be normalised by dividing by their sum. In fact, we're only interested in the normalisation of the first:

Normalised chance that both residues are bad: q[sup]2[/sup] / ( q[sup]2[/sup] + 2q(1 - q) ) = q[sup]2[/sup] / ( 2q - q[sup]2[/sup] ) = q / ( 2 - q )

If q is small, the denominator is approximately 2, making this probability approximately q/2.[/spoiler]

In other words, the probability that both of two non-matching residues are incorrect is [spoiler]about half the probability that a single residue is incorrect, assuming that the latter probability is small[/spoiler].
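The normalisation step above can be checked numerically. A small sketch (not from the original posts; the value of q is an arbitrary illustrative error rate, not a measured one):

```python
def p_both_bad(q):
    """Given two non-matching non-zero residues, the normalised chance
    that BOTH are wrong: q^2 out of the only two possible cases,
    both-bad (q^2) and exactly-one-good (2q(1-q))."""
    return q**2 / (q**2 + 2 * q * (1 - q))

q = 0.01  # illustrative per-test error rate, an assumption
print(p_both_bad(q))  # about 0.005025, close to q/2 = 0.005
print(q / (2 - q))    # the simplified form gives the same value
```

For small q the difference between q/(2 - q) and q/2 is of order q², which is why the "about half" approximation is so good.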
I know this is the puzzles forum, but the spoilers are getting fscking tedious. (No doubt some mod may bleep my language)

David

PS Good problem
[QUOTE=Mr. P-1;153610]This is incorrect. The relevant probability is not [spoiler]"Chance that at least one is good"[/spoiler], but [spoiler]"Chance that exactly one is good". It is not possible for both to be good.[/spoiler][/QUOTE]Yes, indeed, thanks for the correction.

[QUOTE=davieddy;153641]I know this is the puzzles forum, but the spoilers are getting fscking tedious.[/QUOTE]CTRL-A don't work for you then?