20090422, 17:22  #23  
Jul 2006
Calgary
5^{2}×17 Posts 

20090423, 09:23  #24 
Aug 2002
Termonfeckin, IE
5163_{8} Posts 
Not at all. I get only v5 data. No v4 data.

20091024, 15:25  #25 
"Patrik Johansson"
Aug 2002
Uppsala, Sweden
1A8_{16} Posts 
Updated error rate plot
Here is the updated error rate plot for data through today, Oct 24, 2009 (I downloaded the number of verified tests on Oct 21, but there should not be a big difference).
The plot is explained in the first post in this thread. Last fiddled with by patrik on 20091024 at 15:28 Reason: Added reference to first post. 
20091024, 19:36  #26 
Aug 2002
Termonfeckin, IE
A73_{16} Posts 
I think we can take 4% as a good approximation of the error rate, then. And there does not seem to be significant variation from 5 million to 20 million. I think the spikes at 15 million and 17 million can be explained by the FFT boundaries: George used a less conservative FFT boundary in earlier versions of Prime95, and those versions would have done the first checks in these ranges.
Last fiddled with by garo on 20091024 at 19:38 
20091026, 12:23  #27 
Mar 2003
Melbourne
203_{16} Posts 
Bring on ECC for consumer based machines.
It's long overdue.  Craig 
20101226, 15:05  #28 
"Patrik Johansson"
Aug 2002
Uppsala, Sweden
2^{3}×53 Posts 
Error rate plot with zero error code
I just made an update to the error rate plot, and this time I also added information about errors with zero error code. Dividing the exponents into classes 50000 wide when summing the errors seems to give a reasonable tradeoff between resolution and statistical noise.
The red curve shows the error rate (number of bad tests divided by number of tests). The green curve estimates the error rate by also including data from unverified tests. (If two non-matching tests have been done for an exponent, at least one of them must be bad, so I count this one.) The blue curve is the zero-error-code error rate (number of bad tests with zero error code divided by number of tests). Finally, the violet curve estimates the zero-error-code error rate, also using unverified tests.

Estimating the number of errors with zero error code from unverified tests is the tricky part. Since a good test can have a non-zero error code, and we don't yet know which of the listed tests is the good one, it is in principle impossible to know the zero-error-code error rate. I simply assume that the good test had zero error code. (E.g. one exponent has four unverified tests done, two with non-zero error code and two with zero error code. Then I assume three tests are bad and one good, and that one of the three bad tests had zero error code.)

If anyone is interested, I uploaded the data files I retrieved from PrimeNet to my web site: Nlucas_v.zip, Nhrf3.zip and Nbad.zip. (I did not retrieve verified < 18M again, so any unneeded triple check is absent. Also, they were retrieved over a few hours, so a few exponents may have moved during that time.) 
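A minimal sketch of the counting rule described above, for one exponent with several non-matching unverified tests: assume exactly one result is good, assume the good one had a zero error code, and count the rest as bad. The function name and input format are illustrative assumptions, not taken from the actual C program.

```python
def estimate_bad(error_codes):
    """error_codes: list of error codes (ints) for one exponent's
    non-matching unverified LL results. Returns (bad, bad_zero_ec):
    the estimated number of bad tests, and the estimated number of
    bad tests that reported a zero error code."""
    n = len(error_codes)
    if n < 2:
        return 0, 0          # a single unverified test tells us nothing
    bad = n - 1              # at most one of the non-matching results is good
    zero_ec = sum(1 for ec in error_codes if ec == 0)
    # Assume the good test is among those with zero error code, so one
    # zero-EC result is "used up" by the good test; the rest are bad.
    bad_zero_ec = max(zero_ec - 1, 0)
    return bad, bad_zero_ec

# The four-test example from the post: two zero and two non-zero codes.
print(estimate_bad([0, 0, 3, 7]))   # -> (3, 1): three bad, one of them zero-EC
```

This reproduces the worked example in the post: four tests, three assumed bad, one of the bad ones counted toward the zero-error-code curve.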
20111224, 17:36  #29 
"Patrik Johansson"
Aug 2002
Uppsala, Sweden
2^{3}×53 Posts 
Another update
I just made another update from files made from downloads earlier today. The data files I linked to in my previous post are now updated, and go up to 101M.
The statistics I use are found in this file and are made from the data files using this C program (you would have to look there for an explanation of what the different columns mean). Then I make the plot using gnuplot. Code:
#! /bin/tcsh
set START_HRF=501
set END_VERIF=575
set INPUT="error_rates_50k_zero.txt"
gnuplot << EOD
set terminal png
set output "error_rates_50k_50M_zero_20111224.png"
set title "Error rates"
plot "<head -$END_VERIF $INPUT" using 1:(\$2 / \$3) with line title "from bad and verified tests", \
     "<tail +$START_HRF $INPUT" using 1:((\$2 + \$4)/(\$3 + \$5)) with line title "estimated also using unverified tests", \
     "<head -$END_VERIF $INPUT" using 1:(\$6 / \$3) with line title "with zero error code, from bad and verified tests", \
     "<tail +$START_HRF $INPUT" using 1:((\$6 + \$7)/(\$3 + \$5)) with line title "with zero error code, estimated also using unverified tests"
EOD
20120509, 08:07  #30 
Basketry That Evening!
"Bunslow the Bold"
Jun 2011
40<A<43 89<O<88
3·29·83 Posts 
o_0
Not exactly new-thread worthy, but here's an exponent that's into quadruple-check territory with no reported error code yet. My laptop hasn't yet turned in a bad DC, but then again linded is pretty reliable as well. A cosmic-ray type error? I hope spradlin finishes the test quickly.

20120511, 15:44  #31  
Just call me Henry
"David"
Sep 2007
Cambridge (GMT)
2^{4}·353 Posts 
I am not at all sure about 4% being the error rate, but even at 1% the chance of both a first test and its double-check being bad is 1 in 10,000, which would probably have happened a couple of times in GIMPS history. 
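A back-of-envelope check of the arithmetic above, assuming errors on the two machines are independent: a first test and its double-check are both bad with probability p². The two rates (4% and 1%) are the ones discussed in this thread.

```python
def both_bad(p):
    """Probability that a test and its independent double-check are both bad."""
    return p * p

for p in (0.04, 0.01):
    print(f"p = {p:.0%}: both bad with probability {both_bad(p):g} "
          f"(about 1 in {round(1 / both_bad(p))})")
```

At 1% per test this gives the 1-in-10,000 figure quoted above; at 4% it would be roughly 1 in 625 per double-checked exponent.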

20120514, 02:29  #32 
Aug 2002
Dawn of the Dead
353_{8} Posts 
More likely, the original test and/or the double-checks were done on error-prone machines. Such machines corrupt results even when there are no error codes.
Some years back garo, GP2 and I did a lot of work on error-prone machines. We observed several instances of tests needing a quadruple check or higher. 
20120514, 02:35  #33  
Aug 2002
Dawn of the Dead
5×47 Posts 
I have data if anyone is interested. I had an error-prone machine, and one could double-check my exponents (the ones with zero error codes). If one were to do that and the double-check doesn't match, it will clearly be my result that is bad.
Last fiddled with by PageFault on 20120514 at 02:35 

Similar Threads  
Thread  Thread Starter  Forum  Replies  Last Post 
error rate and mitigation  ixfd64  Hardware  4  20110412 02:14 
EFF prize and error rate  S485122  PrimeNet  15  20090116 11:27 
A plot of Log2 (P) vs N for the Mersenne primes  GP2  Data  3  20031201 20:24 
What ( if tracked ) is the error rate for Trial Factoring  dsouza123  Data  6  20031023 22:26 
Error rate for LL tests  GP2  Data  5  20030915 23:34 