2003-09-27, 10:25   #3
GP2 (Sep 2003)
The first column is the Meg range (for instance, 6 = 6,000,000 - 6,999,999).

The second column is the number of exponents in that range for which at least one LL test has been done, but not two matching LL tests, and for which no P-1 factoring has ever been done.

 0      0
 1      0
 2      0
 3      0
 4      0
 5      0
 6      0
 7      2
 8     79
 9   1507
10   9118
11   5972
12   4122
13   1880
14   1187
15   1062
16   1044
17   1054
18   1053
19   1069
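To put numbers on the discussion below, the table can be tallied with a short script (a sketch; the dict just re-keys the counts above by Meg range):

```python
# Second-column counts from the table above, keyed by Meg range.
# Ranges 0-6 (all zero) are omitted.
counts = {
    7: 2, 8: 79, 9: 1507, 10: 9118, 11: 5972, 12: 4122, 13: 1880,
    14: 1187, 15: 1062, 16: 1044, 17: 1054, 18: 1053, 19: 1069,
}

total = sum(counts.values())                 # whole backlog of un-P-1'd exponents
hump = sum(counts[m] for m in range(10, 14)) # the 10M-13M "hump"
peak = max(counts, key=counts.get)           # Meg range with the largest count

print(total)  # 29149
print(hump)   # 21092 (about 72% of the backlog)
print(peak)   # 10
```

So the 10M-13M hump alone accounts for roughly 72% of all exponents that have an LL test but no P-1.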
How do we interpret these results?

At low ranges (0M - 7M), just about every exponent has been double-checked, so the numbers are zero or nearly so.

The numbers then rise sharply, peaking at 10M (I'm not sure why). Of course, many of the machines that perform the double-checks will have enough memory to do a P-1 test before going ahead with the LL double-check. But judging by past history, some won't, and several thousand exponents will never get a P-1 test done.

From 15M-19M the numbers decline to a plateau. I'm not sure why. Maybe it's because only modern machines are fast enough to test exponents in that range, and such machines are more likely to have plenty of memory (required for P-1 testing) and also more likely to have a recent version of Prime95 installed (since P-1 factoring was only introduced in fairly recent versions of Prime95).

If P-1 testing could be organized to get through the hump between 10M-13M, then after that it would be fairly easy to ensure that P-1 factoring always kept ahead of the leading edge of double-checking (in the "plateau" region).
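For reference, the P-1 method under discussion finds a prime factor q of N when q - 1 is smooth up to the chosen bound: you raise a base to a highly composite exponent mod N and take a GCD. Here is a minimal stage-1 sketch for Mersenne numbers (illustrative only, nothing like Prime95's optimized implementation; the function names are my own):

```python
from math import gcd

def primes_up_to(n):
    """Simple sieve of Eratosthenes."""
    sieve = bytearray([1]) * (n + 1)
    sieve[0:2] = b"\x00\x00"
    for i in range(2, int(n ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i :: i] = bytearray(len(sieve[i * i :: i]))
    return [i for i in range(2, n + 1) if sieve[i]]

def pminus1_stage1(p, B1):
    """Stage 1 of P-1 on the Mersenne number N = 2**p - 1.

    Returns a nontrivial factor of N, or None if none is found
    with this bound B1.
    """
    N = (1 << p) - 1
    # Any prime factor q of 2**p - 1 has the form q = 2*k*p + 1,
    # so fold the known factor 2*p of q - 1 into the exponent up front.
    a = pow(3, 2 * p, N)
    for q in primes_up_to(B1):
        e = q
        while e * q <= B1:   # raise each small prime to its max power <= B1
            e *= q
        a = pow(a, e, N)
    g = gcd(a - 1, N)
    return g if 1 < g < N else None

print(pminus1_stage1(11, 3))  # 23, a factor of 2**11 - 1 = 2047
```

Here 23 = 2*11 + 1 turns up immediately because 23 - 1 = 2*11 is already covered by the folded-in 2*p term; larger B1 bounds catch factors whose q - 1 has bigger smooth parts, which is why the memory- and version-limited machines mentioned above matter.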