Bayes' Theorem Question
Let's say I run 1 iteration of a probabilistic primality test called Test1, followed by 1 iteration of another probabilistic primality test called Test2. The probability that a number dubbed as "probable prime" by Test1 is composite will be denoted by Test1(n). The probability that a number dubbed as "probable prime" by Test2 is composite will be denoted by Test2(n). What, then, is the probability that a number dubbed as "probable prime" is composite when running 1 iteration of Test1 followed by 1 iteration of Test2?
I know this has to require Bayes' Law, so I did the following:
P(A) = a-priori 'probability' that n is a composite number that passed 1 iteration of Test1. This is Test1(n).
P(B) = a-priori 'probability' that n is prime. By the PNT, it is [TEX]O\left(\frac{1}{\log n}\right)[/TEX].
P(E|A) = 'probability' that a composite passes 1 iteration of Test2. This is Test2(n).
P(E|B) = 'probability' that a prime passes Test2. This is 1.
Thus, P(A|E), the posterior probability that a number which is dubbed "probable prime" by Test1 followed by Test2 is composite is, [TEX]P(A|E) = \frac{P(E|A)P(A)}{P(E)} = \frac{P(E|A)P(A)}{P(E|A)P(A)+P(E|B)P(B)} = \frac{Test1(n) \cdot Test2(n)}{{Test1(n) \cdot Test2(n)}+{O\left(\frac{1}{\log n}\right)}}[/TEX]. However, I think I may be wrong. Any takers?
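For what it's worth, here is a quick numeric sketch (mine, not from the derivation above) that just evaluates this expression with hypothetical placeholder values for Test1(n) and Test2(n). It only checks the arithmetic of the formula, not the modelling assumptions behind it, which is exactly what I'm asking about.

```python
# Evaluates the Bayes expression above with hypothetical placeholder
# values for Test1(n) and Test2(n).  This checks only the arithmetic
# of the formula, not whether the underlying model is right.
from math import log

def posterior_composite(test1_n, test2_n, prior_prime):
    """P(A|E) = P(E|A)P(A) / (P(E|A)P(A) + P(E|B)P(B))
    with P(A) = Test1(n), P(E|A) = Test2(n), P(E|B) = 1, P(B) = prior."""
    num = test2_n * test1_n
    return num / (num + 1.0 * prior_prime)

n = 2 ** 512                  # hypothetical candidate size
test1_n = 0.25                # hypothetical Test1(n)
test2_n = 0.25                # hypothetical Test2(n)
prior_prime = 1 / log(n)      # the O(1/log n) prior from the PNT

print(posterior_composite(test1_n, test2_n, prior_prime))
```

Note how large the posterior comes out with these placeholder values: the small prior probability of primality dominates the product of the two error bounds.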
Assuming Test1(n) and Test2(n) (the probabilities) are independent, wouldn't the answer just be Test1(n)*Test2(n)? Why would you need to apply Bayes' Theorem?
[QUOTE=axn;174773]Assuming Test1(n) and Test2(n) (the probabilities) are independent, wouldn't the answer just be Test1(n)*Test2(n)? Why would you need to apply Bayes' Theorem?[/QUOTE]
That won't work because (according to Dr. Silverman):
(1) Once you have done one round of Test1, you remove a lot of possible false positives for both Test2 and for Test1. So the probability changes for subsequent rounds (that is, *if* I ever decided to run multiple iterations).
(2) You need to also apply Bayes's Theorem to account for the a-priori probability of your candidate being prime in the first place.
(3) Test1 and Test2 are *not* independent, because passing one test makes it more likely to pass the second test, even if the candidate is composite.
[QUOTE=flouran;174771]
Thus, P(A|E), the posterior probability that a number which is dubbed "probable prime" by Test1 followed by Test2 is composite is, [TEX]P(A|E) = \frac{P(E|A)P(A)}{P(E)} = \frac{P(E|A)P(A)}{P(E|A)P(A)+P(E|B)P(B)} = \frac{Test1(n) \cdot Test2(n)}{{Test1(n) \cdot Test2(n)}+{O\left(\frac{1}{\log n}\right)}}[/TEX]. [/QUOTE]
I know that this is an unreliable source, but I solved for P(A|E) using a method similar to the one employed [URL="http://en.wikipedia.org/wiki/Bayes_Theorem#Example_1:_Drug_testing"]here[/URL]. Perhaps this may help someone see where I went wrong.
[QUOTE=flouran;174783]That won't work because (according to Dr. Silverman):
(1) Once you have done one round of Test1, you remove a lot of possible false positives for both Test2 and for Test1. So the probability changes for subsequent rounds (that is, *if* I ever decided to run multiple iterations).[/quote]
Which is why you're multiplying the two probabilities.

[QUOTE=flouran;174783](2) You need to also apply Bayes's Theorem to account for the a-priori probability of your candidate being prime in the first place.[/quote]
That is already accounted for by your definition of Test1(n) and Test2(n).

[QUOTE=flouran;174783](3) Test1 and Test2 are *not* independent because passing 1 test makes it more likely to pass the second test, even if the candidate is composite.[/QUOTE]
Again... That's why you multiply the two probabilities together. Just to remind you:

[QUOTE=flouran]The probability that a number dubbed as "probable prime" by Test1 is composite will be denoted by Test1(n). The probability that a number dubbed as "probable prime" by Test2 is composite will be denoted by Test2(n).[/quote]
[QUOTE=axn;174823]
Again... That's why you multiply the two probabilities together. [/QUOTE] No. If the probabilities were independent, then I would multiply them together. But they are not; hence it would be naive to simply multiply them.
Flouran... I don't know how to say this. You aren't even close to using Bayes' Law correctly, and that's why you're getting such varied comments.
First, you need to define your events clearly. This will help stop you from using them wrongly. Since you define events only indirectly through conditional probabilities, it's easier to make mistakes. I suggest:
A: n passes Test1
B: n passes Test2
p: n is prime
P(A|p) = 1
P(B|p) = 1
P(A|¬p) = x = x(n)
P(B|¬p) = y = y(n)
It would appear that you're looking for P(¬p|A∧B). But the real tipoff should have been the fact that you never accounted for the interaction between the two tests. You somehow convinced yourself that using the probability that a number is prime could somehow tell you how the tests interact. It won't -- that can be used to tell you something about how the tests fare individually on composites vs. primes, but not how the tests interact.
Here's an explicit counterexample (though not fully worked). Choose Test1 as "perform a pseudoprime test to a random base coprime to n" and Test2 as "perform a pseudoprime test to all bases coprime to n". Then
P(A) = P(B) = 1/log n + O(1/log^2 n)
P(A∧B|¬p) = P(B|¬p) = y
P(A∧B|p) = P(B|p) = 1
P(¬p|A∧B) = P(¬p|B) = P(B|¬p) P(¬p) / P(B) = y(1 - O(1/log n))/P(B) = y log n + O(stuff)
≠ P(A)P(B) / (P(A)P(B) + O(1/log n)).
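This counterexample can be checked numerically. The sketch below (my own; it assumes Test1 and Test2 are Fermat pseudoprime tests exactly as just described) lists the composites below 3000 that pass Test2 -- the Carmichael numbers -- and verifies that every one of them also passes Test1 to any coprime base. So on composites, passing Test1 after Test2 tells you nothing extra: the tests are totally dependent.

```python
# Counterexample check: Test1 = Fermat test to one random coprime base,
# Test2 = Fermat test to every coprime base.  A composite passes Test2
# iff it is a Carmichael number, and such a number passes Test1 to
# *every* coprime base -- so on composites, P(A∧B|¬p) = P(B|¬p).
import random
from math import gcd

def is_prime(n):
    if n < 2:
        return False
    return all(n % d for d in range(2, int(n**0.5) + 1))

def fermat_passes(n, a):
    return pow(a, n - 1, n) == 1

def passes_test2(n):
    return all(fermat_passes(n, a) for a in range(2, n) if gcd(a, n) == 1)

# Composites that pass Test2 are exactly the Carmichael numbers.
carmichaels = [n for n in range(3, 3000, 2)
               if not is_prime(n) and passes_test2(n)]
print(carmichaels)  # [561, 1105, 1729, 2465, 2821]

# Each of them automatically passes a single random-base Test1 as well.
for n in carmichaels:
    a = random.choice([b for b in range(2, n) if gcd(b, n) == 1])
    assert fermat_passes(n, a)
```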
I am sorry, CRGreathouse, but I do not quite understand your notation. Does ¬ mean "not"? What does ∧ mean?
It's just standard notation. ¬ is not, ∧ is and, ∨ is or, ⊤ is truth ("tautology"), ⊥ is false ("contradiction"), ↔ is iff ("biconditional"), → is implication ("conditional").
[QUOTE=CRGreathouse;174836]It's just standard notation. ¬ is not, ∧ is and, ∨ is or, ⊤ is truth ("tautology"), ⊥ is false ("contradiction"), ↔ is iff ("biconditional"), → is implication ("conditional").[/QUOTE]
May I suggest that reading the Pomerance & Kim paper would be a good idea? |
[QUOTE=R.D. Silverman;174849]May I suggest that reading the Pomerance & Kim paper would be a
good idea?[/QUOTE] I've read it (and the one by Damgård, Landrock & Pomerance). I was addressing only the correctness of flouran's result. |
[QUOTE=CRGreathouse;174866]I've read it (and the one by Damgård, Landrock & Pomerance). I was addressing only the correctness of flouran's result.[/QUOTE]
Mea Culpa. I responded to the wrong post. I meant for flouran to read it. |
[QUOTE=R.D. Silverman;174849]May I suggest that reading the Pomerance & Kim paper would be a
good idea?[/QUOTE]
You're referring to [U]The Probability that a Random Probable Prime is Composite[/U] by Kim and Pomerance? I've read part of it. In your paper [U]Fast Generation of Random, Strong RSA Primes[/U], Dr. Silverman, you refer to equation 1.6 of the Kim and Pomerance paper:

"Use of the Miller-Rabin algorithm combined with a Lucas or Frobenius strong probable prime test [5]. According to a very recent paper of Grantham, if one uses a Frobenius probable prime test, the probability that a candidate is composite when the tests say `prime' is less than 1/1770, as opposed to the 1/4 one gets with Miller-Rabin alone. If one performs a single Miller-Rabin test, followed by T Frobenius tests, the probability of error for 512-bit primes is then less than 1.5*10^-17(1/1770)^T [Pomerance-Kim, equation 1.6]. To achieve a probability of 2^-100, T = 4 suffices. One should be able to apply the analytical techniques of [3] to the Grantham algorithm to arrive at even stronger probability estimates. That is to say the number 1.5*10^-17 can probably be reduced, but the method is as yet too new for anyone to have done this analysis. However, the correctness of the 1.5 * 10^-17 bound is not in question."

May I ask your reasons?
[QUOTE=flouran;174896]You're referring to [U]The Probability that a Random Probable Prime is Composite[/U] by Kim and Pomerance? ... May I ask your reasons?[/QUOTE]
Thus, it seems to me that you just multiplied the two probabilities together, rather than using Bayes' Theorem. Am I right or simply deluded?