mersenneforum.org (https://www.mersenneforum.org/index.php)
-   Math (https://www.mersenneforum.org/forumdisplay.php?f=8)
-   -   Bayes' Theorem Question (https://www.mersenneforum.org/showthread.php?t=11942)

flouran 2009-05-25 16:55

Bayes' Theorem Question
 
Let's say I run 1 iteration of a probabilistic primality test called Test1, followed by 1 iteration of another probabilistic primality test called Test2. The probability that a number dubbed a "probable prime" by Test1 is composite will be denoted by Test1(n). The probability that a number dubbed a "probable prime" by Test2 is composite will be denoted by Test2(n). What, then, is the probability that a number dubbed a "probable prime" after 1 iteration of Test1 followed by 1 iteration of Test2 is composite?

I know this has to require Bayes' Law, so I did the following:
P(A) = a-priori 'probability' that n is a composite number that passed 1 iteration of Test1. This is Test1(n).
P(B) = a-priori 'probability' that n is prime. By the PNT, it is [TEX]O\left(\frac{1}{\log n}\right)[/TEX].
P(E|A) = 'probability' that a composite passes 1 iteration of Test2. This is Test2(n).
P(E|B) = 'probability' that a prime passes Test2. This is 1.
Thus, P(A|E), the posterior probability that a number dubbed "probable prime" by Test1 followed by Test2 is composite, is
[TEX]P(A|E) = \frac{P(E|A)P(A)}{P(E)} = \frac{P(E|A)P(A)}{P(E|A)P(A)+P(E|B)P(B)} = \frac{Test1(n) \cdot Test2(n)}{Test1(n) \cdot Test2(n)+O\left(\frac{1}{\log n}\right)}[/TEX].
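For what it's worth, the formula is easy to play with numerically. A minimal sketch (the rates 0.25 and the size 10^100 are purely illustrative, not properties of any real test):

```python
import math

def posterior_composite(t1, t2, n):
    """Evaluate the formula above:
    P(A|E) = t1*t2 / (t1*t2 + 1/log n),
    where t1 = Test1(n), t2 = Test2(n), and 1/log n stands in for the
    a-priori probability that a number near n is prime."""
    prior_prime = 1.0 / math.log(n)
    return (t1 * t2) / (t1 * t2 + prior_prime)

# Hypothetical rates: each test lets a composite through 25% of the time.
print(posterior_composite(0.25, 0.25, 10**100))  # about 0.935
```

Note how the 1/log n prior dominates for large n: even two mediocre tests leave a high posterior probability of compositeness under this formula, which is part of why the formula itself deserves scrutiny.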

However, I think I may be wrong. Any takers?

axn 2009-05-25 17:14

Assuming Test1(n) and Test2(n) (the probabilities) are independent, wouldn't the answer just be Test1(n)*Test2(n)? Why would you need to apply Bayes' Theorem?

flouran 2009-05-25 19:05

[QUOTE=axn;174773]Assuming Test1(n) and Test2(n) (the probabilities) are independent, wouldn't the answer just be Test1(n)*Test2(n)? Why would you need to apply Bayes' Theorem?[/QUOTE]
That won't work because (according to Dr. Silverman):

(1) Once you have done one round of Test1, you remove a lot of possible false positives for both Test2 and for Test1. So the probability changes for subsequent rounds (that is, *if* I ever decided to run multiple iterations).

(2) You also need to apply Bayes' Theorem to account for the a-priori probability of your candidate being prime in the first place.

(3) Test1 and Test2 are *not* independent, because passing one test makes it more likely that the candidate passes the second, even if it is composite.

flouran 2009-05-25 20:33

[QUOTE=flouran;174771]
Thus, P(A|E), the posterior probability that a number dubbed "probable prime" by Test1 followed by Test2 is composite, is
[TEX]P(A|E) = \frac{P(E|A)P(A)}{P(E)} = \frac{P(E|A)P(A)}{P(E|A)P(A)+P(E|B)P(B)} = \frac{Test1(n) \cdot Test2(n)}{Test1(n) \cdot Test2(n)+O\left(\frac{1}{\log n}\right)}[/TEX].
[/QUOTE]
I know that this is an unreliable source, but I solved for P(A|E) using a method similar to the one employed [URL="http://en.wikipedia.org/wiki/Bayes_Theorem#Example_1:_Drug_testing"]here[/URL]. Perhaps this will help someone see where I went wrong.

axn 2009-05-26 04:17

[QUOTE=flouran;174783]That won't work because (according to Dr. Silverman):
(1) Once you have done one round of Test1, you remove a lot of possible false positives for both Test2 and for Test1. So the probability changes for subsequent rounds (that is, *if* I ever decided to run multiple iterations).[/quote]
Which is why you're multiplying the two probabilities.

[QUOTE=flouran;174783](2) You also need to apply Bayes' Theorem to account for the a-priori probability of your candidate being prime in the first place.[/quote]
That is already accounted for by your definition of Test1(n) and Test2(n).

[QUOTE=flouran;174783](3) Test1 and Test2 are *not* independent because passing 1 test makes it more likely to pass the second test, even if the candidate is composite.[/QUOTE]
Again... That's why you multiply the two probabilities together.

Just to remind you:
[QUOTE=flouran]The probability that a number dubbed as "probable prime" by Test1 is composite will be denoted by Test1(n). The probability that a number dubbed as "probable prime" by Test2 is composite will be denoted by Test2(n).[/quote]

flouran 2009-05-26 04:23

[QUOTE=axn;174823]
Again... That's why you multiply the two probabilities together.
[/QUOTE]
No. If the probabilities were independent, then I would multiply them together. But they are not. Hence, it would be naive to simply multiply them together.

CRGreathouse 2009-05-26 05:52

Flouran... I don't know how to say this. You aren't even close to using Bayes' Law correctly, and that's why you're getting such varied comments.

First, you need to define your events clearly; this will help keep you from misusing them. Since you define events only indirectly, through conditional probabilities, it's easier to make mistakes. I suggest:
A: n passes Test1
B: n passes Test2
p: n is prime

P(A|p) = 1
P(B|p) = 1
P(A|¬p) = x = x(n)
P(B|¬p) = y = y(n)

It would appear that you're looking for P(¬p|A∧B).

But the real tipoff should have been the fact that you never accounted for the interaction between the two tests. You somehow convinced yourself that using the probability that a number is prime could somehow tell you how the tests interact. It won't -- that can be used to tell you something about how the tests fare individually on composites vs. primes, but not how the tests interact.
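To make that interaction concrete, here is a toy model (all numbers hypothetical) in which each test individually passes a composite with probability q, but the two tests are perfectly correlated: a composite either fools both or fools neither. The true posterior is then very different from what the independence assumption predicts:

```python
p_prime = 0.10   # hypothetical prior probability that n is prime
q = 0.05         # hypothetical chance a composite passes either single test

# Perfectly correlated tests: a composite that fools Test1 also fools Test2,
# so P(A and B | not p) = q, not q*q.  Primes always pass: P(A and B | p) = 1.
joint = q            # P(A and B | not p) under the correlated model
naive = q * q        # what independence would predict

# Bayes: P(not p | A and B) = P(A and B | not p) P(not p)
#                             / (P(A and B | not p) P(not p) + 1 * P(p))
posterior_true  = joint * (1 - p_prime) / (joint * (1 - p_prime) + 1.0 * p_prime)
posterior_naive = naive * (1 - p_prime) / (naive * (1 - p_prime) + 1.0 * p_prime)

print(posterior_true)    # about 0.31: the second test bought nothing
print(posterior_naive)   # about 0.022: independence is far too optimistic
```

In this extreme model the second test adds no information at all, so multiplying the individual rates understates the true posterior by more than a factor of ten.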

Here's an explicit counterexample (though not fully worked). Choose Test1 as "perform a pseudoprime test to a random base coprime to n" and Test2 as "perform a pseudoprime test to all bases coprime to n". Then
P(A) = P(B) = 1/log n + O(1/log^2 n)
P(A∧B|¬p) = P(B|¬p) = y
P(A∧B|p) = P(B|p) = 1
P(¬p|A∧B) = P(¬p|B) = P(B|¬p) P(¬p) / P(B) = y(1 - O(1/log n))/P(B) = y log n + O(stuff) ≠ P(A)P(B) / (P(A)P(B) + O(1/log n)).
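The counterexample can be checked numerically for small n. A sketch: the fraction of coprime bases b with b^(n-1) ≡ 1 (mod n) is the chance that one random-base pseudoprime test (Test1) passes. For a Carmichael number such as 561 that fraction is 1, so Test1's verdict is completely determined by whether n passes to all bases (561 and 221 are just convenient small examples):

```python
from math import gcd

def fermat_pass_fraction(n):
    """Fraction of bases 1 <= b < n with gcd(b, n) == 1 satisfying
    b^(n-1) == 1 (mod n): the probability that a pseudoprime test
    to a random coprime base passes n."""
    coprime = [b for b in range(1, n) if gcd(b, n) == 1]
    passing = [b for b in coprime if pow(b, n - 1, n) == 1]
    return len(passing) / len(coprime)

print(fermat_pass_fraction(561))  # Carmichael number: every coprime base passes
print(fermat_pass_fraction(221))  # ordinary composite: only a small fraction pass
```

For 561 the two tests are perfectly dependent on composites, which is exactly the dependence the counterexample exploits.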

flouran 2009-05-26 05:58

I am sorry, CRGreathouse, but I do not quite understand your notation. Does ¬ mean "not"? What does ∧ mean?

CRGreathouse 2009-05-26 06:33

It's just standard notation. ¬ is not, ∧ is and, ∨ is or, ⊤ is truth ("tautology"), ⊥ is false ("contradiction"), ↔ is iff ("biconditional"), → is implication ("conditional").

R.D. Silverman 2009-05-26 10:36

[QUOTE=CRGreathouse;174836]It's just standard notation. ¬ is not, ∧ is and, ∨ is or, ⊤ is truth ("tautology"), ⊥ is false ("contradiction"), ↔ is iff ("biconditional"), → is implication ("conditional").[/QUOTE]

May I suggest that reading the Pomerance & Kim paper would be a good idea?

CRGreathouse 2009-05-26 13:49

[QUOTE=R.D. Silverman;174849]May I suggest that reading the Pomerance & Kim paper would be a good idea?[/QUOTE]

I've read it (and the one by Damgård, Landrock & Pomerance). I was addressing only the correctness of flouran's result.

