2009-12-05, 00:10   #70
philmoore ("Phil", Sep 2002, Tracktown, U.S.A.)

Double checking

The double checking data is in, so I would like to provide a summary of what we know about the accuracy of our data so far. Before getting into the data, I first want to mention that 11 first-time checks (out of 38,000 or so) have been returned so far with bits set in the error-reporting word. Some errors (SUMOUT) tend to be less disruptive than others (ROUND-OFF, or SUMINP!=SUMOUT), and the program can often recover by restarting from the last save file. Of these 11 residues reported with errors, we have confirmed that 8 were correct and 3 were incorrect, with a triple-check needed to confirm the incorrect ones. There were also two file-reading errors caused by simultaneously testing two numbers with the same exponent but different k values. I believe this is because Prime95 uses the same naming convention for both save files, but a double check has confirmed that both residues were correct.
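Purely as an illustration of that collision (I have not verified Prime95's actual save-file naming; the names and scheme below are hypothetical), a short Python sketch of why keying a save-file name on the exponent alone clashes when two k values share an exponent:

[code]
# Hypothetical illustration: if save-file names were keyed on the exponent
# alone, two candidates 2^n + k with the same n but different k would collide.
def name_by_exponent_only(n):
    return "p%d" % n            # assumed scheme: exponent only

def name_with_k(k, n):
    return "p%d_%d" % (n, k)    # hypothetical alternative: include k as well

n = 3000001                      # example exponent, not a real colliding case
for k in (40291, 41693):
    print(k, name_by_exponent_only(n), name_with_k(k, n))
# Both k values map to the same exponent-only name, but to distinct names
# once k is part of the file name.
[/code]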

All other residues were reported without detected errors. I randomly chose one prp test in each 10k range to retest. The k values were all either 40291 or 41693. (If I had known we would find another prp so soon, I would have postponed this project!) From 1.25M to 5.01M, this gave a total of 376 prp tests to redo. To these I added 16 more tests in the vicinity of the reported errors, on the theory that there may have been unreported errors (particularly ROUND-OFF errors) from the same machines at around the same time, for a total of 392 prp tests. (A sketch of the sampling scheme is below.)
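For anyone curious how the sample was drawn, here is a minimal sketch, my own illustration rather than the actual script, assuming a hypothetical list of completed first-time tests as (k, exponent) pairs:

[code]
import random

def sample_one_per_block(completed_tests, lo=1_250_000, hi=5_010_000, block=10_000):
    """Pick one completed test at random from each 10k block of exponents.

    completed_tests: hypothetical list of (k, exponent) pairs for finished
    first-time prp tests, e.g. [(40291, 1251234), (41693, 1252345), ...].
    """
    random.seed(2009)  # fixed seed so the sample is reproducible
    sample = []
    for start in range(lo, hi, block):
        in_block = [t for t in completed_tests if start <= t[1] < start + block]
        if in_block:
            sample.append(random.choice(in_block))
    return sample

# (5.01M - 1.25M) / 10k = 376 blocks, hence the 376 tests mentioned above.
[/code]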

From the first 294 tests, there was only one discrepancy between the first-time test and the double-check. The discrepancy has been confirmed via a triple check as an error in the first-time test. The machine was the same quad of Jeff's that had returned one of the three bad residues with a reported error. This machine had returned only about 350 tests in all, very few of which had been double-checked, so I was concerned that it might have a high error rate. Engracio volunteered to double check another 32 residues from that machine, and he confirmed that 31 of them were OK and one was wrong (confirmed by my triple-check). So we have about a 6% error rate (2 out of 33, not including the ROUND-OFF error result), although I would not be surprised if the true error rate on this machine were anywhere between 2% and 12% (rough arithmetic below). My suggestion is: let's double-check the 105 remaining residues from this machine in the 40291 sequence, and just forget about 41693 and 2131. The chances are small that one of them is a prp, but I would feel rather silly if they finally got checked a couple of years from now and a prp turned up! Anyone want to volunteer? All 105 tests are between 3.05M and 3.92M.
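To put a rough number on "anywhere between 2% and 12%": here is my own back-of-the-envelope check using a Wilson score interval at one standard deviation (not anything from the project scripts); it lands in roughly the same ballpark:

[code]
from math import sqrt

def wilson_interval(errors, tests, z=1.0):
    """Wilson score interval for a binomial proportion (z=1.0 is ~68% confidence)."""
    p = errors / tests
    denom = 1 + z * z / tests
    center = (p + z * z / (2 * tests)) / denom
    half = (z / denom) * sqrt(p * (1 - p) / tests + z * z / (4 * tests * tests))
    return center - half, center + half

lo, hi = wilson_interval(2, 33)
print("observed rate: %.1f%%" % (100 * 2 / 33))                  # ~6.1%
print("rough range:   %.0f%% - %.0f%%" % (100 * lo, 100 * hi))   # ~3% - 12%
[/code]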

So I was hoping for a low error rate, but I got paleseptember's double-check file last night and found two more discrepancies with the first-time checks, which were done by engracio. I am triple checking the first one (on a very slow machine) to find out which results were in error. (Anyone with a faster machine want to test 2^4007127+41693 for me? Otherwise, I will be done late next week.) That still gives an overall error rate of < 1% (at most 3 errors out of 394 tests), but we can expect the error rate to grow as the exponents get larger. Maybe we should periodically do more of this sort of sample double checking, and perhaps we will even be lucky enough to identify machines with higher-than-usual error rates. Ben and I have done a lot of work double checking exponents < 1.25M, but we have not found a single error yet, so I'm not sure that more systematic checking of the low exponents has a very good payoff.

So I think the error rate is low enough that we can concentrate on first-time tests for a while. What about sieving? We are currently sieving from 2M to 50M, so any factors found below 5.2M only benefit future double-checking work. Should we drop 2M-5M from our sieving range? For the record, sieving time is roughly proportional to the square root of the range size, so dropping 2M-5M (shrinking the range from 48M wide to 45M wide) would only speed up our sieving by a bit over 3% (quick arithmetic below). Maybe we should sieve a bit farther before raising the lower limit on sieving. Opinions?
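For the arithmetic behind "a bit over 3%" (just my own check of the figure, assuming sieve time grows as the square root of the n-range width, as stated above):

[code]
from math import sqrt

old_range = 50e6 - 2e6   # 2M to 50M -> 48M wide
new_range = 50e6 - 5e6   # 5M to 50M -> 45M wide

# If sieve time ~ sqrt(range width), shrinking the range gives this speedup:
speedup = sqrt(old_range / new_range) - 1
print("speedup: %.1f%%" % (100 * speedup))  # ~3.3%
[/code]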