2005-06-25, 02:42  #1 
Jun 2005
Madison, Indiana, U.S.A.
3×7 Posts 
P-1 Testing
I am currently running a potential 10^6 exponent. For some reason, P95 skipped the second stage of the P-1 test. I was wondering why. Anyone?

2005-06-25, 08:19  #2 
Feb 2005
Colorado
2×307 Posts 
I believe stage 2 of the P-1 test will be skipped if you do not have enough memory allocated to Prime95. The amount of memory needed is related to the size of the exponent being tested. This is from the readme.txt file:

Exponent      Minimum   Reasonable   Desirable
 6,000,000     12MB       23MB         33MB
10,000,000     19MB       36MB         53MB
33,000,000     65MB      125MB        185MB

Prime95 defaults to 8MB of allocated memory. For the size of exponent you are testing, if you did not increase the amount of memory available to Prime95 to at least 65MB (under Options > CPU), then stage 2 of the P-1 test will most likely be skipped. As I understand it, the more memory available to the P-1 test, the better its chance of finding a factor. I hope this helps...

Last fiddled with by PhilF on 2005-06-25 at 08:21 
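For anyone who wants to play with those numbers, here is a tiny Python sketch of the lookup. The values are transcribed from the readme.txt excerpt above; the function itself is just my illustration, not anything from Prime95:

```python
# Rows transcribed from the Prime95 readme.txt table quoted above.
MEMORY_TABLE = [
    # (max exponent, minimum MB, reasonable MB, desirable MB)
    (6_000_000, 12, 23, 33),
    (10_000_000, 19, 36, 53),
    (33_000_000, 65, 125, 185),
]

def suggested_memory(exponent):
    """Return (minimum, reasonable, desirable) MB for the smallest
    tabulated exponent that covers the one being tested."""
    for max_exp, mn, rs, ds in MEMORY_TABLE:
        if exponent <= max_exp:
            return mn, rs, ds
    return MEMORY_TABLE[-1][1:]   # beyond the table: use the last row

print(suggested_memory(10_000_000))   # (19, 36, 53)
```

The point being: with the default 8MB allocation, even the smallest exponent in the table falls short of its 12MB minimum, so stage 2 gets skipped.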
2005-06-25, 08:21  #3 
Jun 2005
Near Beetlegeuse
2^{2}×97 Posts 
Skipped P-1 stage 2
I don't profess to be an expert in this, but several people have posted the same question before and they all got the same answer; it's a memory thing.
It takes more memory to run a P1 stage 2 than a P1 stage 1 (which I suppose is why there are two stages). When Prime95 got to stage 2 it just said "Oops, not enough memory for this right now, I'll just skip to the next step". Quite why there was not enough memory is something I prefer not to spend too much time thinking about. Maybe your browser was out to lunch, or your CPU was thinking about that cute chick that debugged it in the lab, who knows? The main thing is, it is not a problem, you can go back to sleep. If this is not a sufficiently technical answer for you, hang around long enough and one of the real experts will parse you something in geekspeak. 
2005-06-25, 18:26  #4  
∂^{2}ω=0
Sep 2002
República de California
7×11×151 Posts 
Quote:
There is a mathematical difference in the way the 2 stages do their respective powerings of the p-1 residue. Stage 1 raises some initial small seed value (e.g. 3) to (exponent p) * (a product of prime powers, one for each small prime < B1), e.g. if B1 = 1000000, your stage 1 power might look like p*(2^20)*(3^13)*(5^9)*(7^8)*... (note each quantity in () is the smallest power of the corresponding small prime which is >= B1). To do this kind of powering via a standard binary powering ladder needs at most 2 copies of an LL-test-sized residue, or just one if one precomputes the above product and uses left-right binary exponentiation. Note that with this kind of powering it is relatively expensive to increase B1 to large values, since a given large prime is less likely than a small one to be a factor of a given random composite, and at the same time the larger primes contribute disproportionately to the above product, simply because there are so many more of them.

The p-1 method succeeds if (some factor of the number in question) - 1 is a sufficiently smooth composite. The idea behind stage 2 is that since many composites have just a single outlying large prime factor, if we multiply (modulo the number we're trying to factor) our stage 1 residue R by successive values of R^(odd prime > B1) for odd primes <= a stage 2 bound B2, we can capture any composite which factors as C = (p1*p2*...*pn) * q, where all the p's are <= B1 and q <= B2. For numbers like Mersennes we are helped by the fact that the exponent p is guaranteed to be a factor of C, i.e. we always include it in our stage 1 power product (even if it's > B1, as it usually will be), which makes it much more likely that the other (a priori unknown) factors of C will satisfy the above smoothness property. 
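To make the stage 1 powering concrete, here is a toy Python sketch (entirely my own illustration, not Prime95's actual code; the tiny Mersenne number and B1 are chosen so that stage 1 succeeds quickly). It builds the stage 1 exponent exactly as described above: the Mersenne exponent p times one prime power per small prime, then takes a GCD with the residue minus 1:

```python
from math import gcd, isqrt

def small_primes(limit):
    """Primes up to limit by trial division (fine for tiny bounds)."""
    return [n for n in range(2, limit + 1)
            if all(n % d for d in range(2, isqrt(n) + 1))]

def p_minus_1_stage1(N, p, B1, seed=3):
    """Stage 1 as described above: raise the seed to p times a product
    of prime powers, one per small prime (the smallest power of each
    prime that is >= B1, matching the convention in the post)."""
    R = pow(seed, p, N)   # always include the Mersenne exponent p itself
    for q in small_primes(B1):
        qk = q
        while qk < B1:
            qk *= q       # smallest power of q that is >= B1
        R = pow(R, qk, N)
    return R              # the stage 1 residue

# Toy run (numbers are mine): M29 = 2^29 - 1 has the factor 233,
# and 233 - 1 = 2^3 * 29 is very smooth, so a tiny B1 suffices.
p = 29
N = 2**p - 1
R = p_minus_1_stage1(N, p, B1=10)
g = gcd(R - 1, N)   # a nontrivial factor of N, divisible by 233
```

Note how the factor 29 of 233 - 1 is captured "for free" by including the exponent p in the power product, exactly as the post explains.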
Now the way we do the stage 2 powering allows us to process each stage 2 prime much faster than we could have done by including it in the stage 1 prime-power product, but it also means that p-1 will succeed only if C has at most ONE outlying factor among the stage 2 primes.

To do an effective stage 2 also requires us to precompute and store some reasonable number of even powers of the stage 1 residue, e.g. R^2, R^4, R^6, etc. Each of these powers is used to "cover the gap" between successive stage 2 primes. More such powers are generally better, but we typically want at least the first 8 even powers (each of which occupies an LL-test-residue-sized chunk of memory). At the high end, using ever more such powers leads to rapidly diminishing returns, since the fraction of prime gaps greater than some threshold E diminishes rapidly as E increases. So we want a reasonable minimum amount of memory, but don't really need scads more than this. 
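The gap-covering trick above can also be sketched in a few lines of Python (again my own toy illustration, not Prime95 code; the composite and bounds are invented so that stage 1 fails and stage 2 succeeds). Each stage 2 prime costs just one modular multiply by a precomputed even power of R, rather than a full powering:

```python
from math import gcd, isqrt

def primes_in(lo, hi):
    """Primes in [lo, hi] by trial division (fine for tiny bounds)."""
    return [n for n in range(max(lo, 2), hi + 1)
            if all(n % d for d in range(2, isqrt(n) + 1))]

def stage1_residue(N, B1, seed=3):
    """Minimal stage 1: seed raised to one prime power per small prime
    (smallest power >= B1, per the convention in the post)."""
    R = seed
    for q in primes_in(2, B1):
        qk = q
        while qk < B1:
            qk *= q
        R = pow(R, qk, N)
    return R

def stage2(N, R, B1, B2):
    """Stage 2 as described above: accumulate R^q - 1 (mod N) for each
    prime B1 < q <= B2, stepping from one prime to the next with
    precomputed even powers of R that "cover the gap"."""
    qs = primes_in(B1 + 1, B2)
    max_gap = max(b - a for a, b in zip(qs, qs[1:]))
    even_pow = {e: pow(R, e, N) for e in range(2, max_gap + 1, 2)}
    H = pow(R, qs[0], N)                  # R^q for the first stage 2 prime
    acc = (H - 1) % N
    for prev, q in zip(qs, qs[1:]):
        H = H * even_pow[q - prev] % N    # one cheap multiply per prime
        acc = acc * (H - 1) % N
    return gcd(acc, N)

# Toy composite (my own choice): 223 - 1 = 2*3*37, so the lone
# "outlying" prime 37 lies between B1 = 30 and B2 = 100.
B1, B2 = 30, 100
N = 223 * 2027
R = stage1_residue(N, B1)
assert gcd(R - 1, N) == 1   # stage 1 alone finds nothing here
g = stage2(N, R, B1, B2)    # stage 2 recovers the factor 223
```

A real implementation takes the GCD only once at the end (or at checkpoints), exactly because each stage 2 prime should cost only that single multiply.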

2005-06-26, 20:14  #5  
Jun 2005
Madison, Indiana, U.S.A.
3·7 Posts 