#56
"Patrik Johansson"
Aug 2002
Uppsala, Sweden
1A9₁₆ Posts
I have updated the error rate plot again. There are no major changes. This time I downloaded all the verified LL tests again. (For 2010 and 2011 I didn't download the small verified exponents again, since they wouldn't change.)
#57
"Victor de Hollander"
Aug 2011
the Netherlands
2³·3·7² Posts
Quote:
I only run TF on my GPUs, so I'm very familiar with the LL testing phase.
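For anyone who, like the poster, mostly runs TF: the LL phase under discussion boils down to a short recurrence. A minimal sketch in Python — this is only the bare definition, not how Prime95 or the GPU clients implement it (they use large-FFT squaring):

```python
def lucas_lehmer(p):
    """Return True iff the Mersenne number M_p = 2**p - 1 is prime.

    Assumes p is an odd prime. Bare recurrence only; real clients
    do the squaring step with FFT-based multiplication.
    """
    m = 2 ** p - 1
    s = 4
    for _ in range(p - 2):
        s = (s * s - 2) % m  # the Lucas-Lehmer iteration, reduced mod M_p
    return s == 0

# 2**7 - 1 = 127 is prime; 2**11 - 1 = 2047 = 23 * 89 is not.
print(lucas_lehmer(7), lucas_lehmer(11))
```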
#58
Basketry That Evening!
"Bunslow the Bold"
Jun 2011
40<A<43 -89<O<-88
7221₁₀ Posts
Quote:
OC'd machines tend to produce more errors, though I'm not sure if they produce more bad-but-errorless tests. The red line counts all bad tests and the blue line counts bad tests without an error code, so the difference is bad tests with an error code.

Yes, DC is done to catch the bad tests with no error code. This can happen for a variety of reasons: perhaps an error occurred that Prime95 just didn't catch, or perhaps some memory was corrupted without the program knowing, etc.

Yes, it is possible for a test with an error code to be correct. Prime95 is incredibly robust and can recover from most errors (the fewer there are, the more likely you are to have a decent result), but any and all errors are still counted in the error report.
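The red/blue bookkeeping described above amounts to two tallies over the verified results. A hypothetical sketch with made-up sample records (not the server's actual schema):

```python
# Hypothetical records: (result_was_bad, an_error_code_was_reported).
tests = [
    (True,  True),   # bad, and the client flagged an error
    (True,  False),  # bad, but no error code -- only a DC catches this
    (False, False),  # good, clean run
    (True,  True),   # bad, flagged
    (False, True),   # errors occurred, yet the result was still correct
]

red  = sum(1 for bad, _ in tests if bad)                   # all bad tests
blue = sum(1 for bad, code in tests if bad and not code)   # bad, errorless
# red - blue = bad tests that DID report an error code
print(red, blue, red - blue)
```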
#59
"Patrik Johansson"
Aug 2002
Uppsala, Sweden
110101001₂ Posts
Thanks to Luke Welsh (M29) for suggesting x- and y-axis labels, and other small changes, to make the plot more readable.
The server seemed slower this year than last year. Data requests that went fine last year often timed out (after 90 s) this year.
Quote:
Therefore (in my opinion) you should lower the overclock even further (after getting no detectable errors) and then run at least 20 double-checks and look up your results. (After that, in case of a mismatch, run a triple-check.)
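The double-check/triple-check loop suggested here can be sketched as a residue-matching rule: a result counts as verified once two independent runs agree, and a mismatch just queues another check. This is a hypothetical helper for illustration — the real bookkeeping lives on the PrimeNet server:

```python
def verified_residue(residues):
    """Return the residue once two independent runs agree, else None.

    Hypothetical sketch of the matching rule: a mismatch between the
    first test and the double-check means a further check is needed.
    """
    counts = {}
    for r in residues:
        counts[r] = counts.get(r, 0) + 1
        if counts[r] == 2:   # two matching runs -> result is verified
            return r
    return None              # no agreement yet: run another check

# First test and double-check disagree, so a triple-check settles it.
print(verified_residue([0x1234, 0xABCD]))          # still unresolved
print(verified_residue([0x1234, 0xABCD, 0xABCD]))  # 0xABCD is verified
```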
#60
May 2013
East. Always East.
11·157 Posts
Hah! I almost started up a reply before I saw that the original topic is ancient.
I skipped ahead to what ended up being this year's contribution to the discussion (one whole post), and I'm surprised to see the debate still hovering over the overclocking question. The error rate looks to be decreasing fairly dramatically. Is there a way to plot it versus time rather than versus the exponent? I can't say for sure, but I think overclocking is much more common these days than it was back when the error rates were higher.

My machine is overclocked to give 20%-30% more throughput. I had it stable using 0.05 V less and 0.1 GHz more, "stable" meaning 24 hours of stress tests (this was back when I was doing this for temperatures, as I had not joined GIMPS yet). Since then, I've had one reproducible rounding error (the only one I can find in my logs) and none of my DCs have been mismatches. My heavily overclocked machine falls well below the average error rate in that regard.
#61
Sep 2006
Brussels, Belgium
2·3·281 Posts
Quote:
Then the newer versions of Prime95 count fewer types of errors (see a discussion of this in the Software Forum). I am not sure about the error counting of non-Prime95 software (used on a GPU, for instance).

Jacob
#62
"Patrik Johansson"
Aug 2002
Uppsala, Sweden
425₁₀ Posts
Merry Christmas!
Enjoy this year's updated error rate plot.
#63
Sep 2006
The Netherlands
3⁶ Posts
Good Afternoon,
Good to see an estimate, but I'm looking for the measurement. What is the measured number of errors in double-checks carried out for GIMPS over the past few years, now that we have all these high-clocked i7s and fast AVX LL tests?

Regards,
Vincent
#64
"Victor de Hollander"
Aug 2011
the Netherlands
2³·3·7² Posts
Quote:
I'd love to see plots of OC'd computers vs. stock-clock computers vs. server-grade hardware, but it's not realistic to extract that from the data the server stores.
#65
"Patrik Johansson"
Aug 2002
Uppsala, Sweden
5²·17 Posts
I have updated the error rate plot again.
Merry Christmas!
#66
Sep 2006
The Netherlands
3⁶ Posts
Great!
Do I interpret the red curve correctly: of all verified tests in the 40M range, about 2% to 4% contain an error? So P(has error AND was a verified test) / P(all verified tests) = [2%; 4%]. Did I read this correctly? If so (excuse my ignorance), how can the green curve be above the red one in the estimate?
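The ratio being asked about here is simply bad tests divided by verified tests per exponent bin. A sketch with made-up counts — the numbers below are illustrative only, not taken from the server:

```python
# Illustrative counts for one exponent bin near 40M (made-up numbers).
verified_tests = 5000   # LL results later confirmed or refuted by a DC
bad_tests      = 150    # of those, first-time results that proved wrong

error_rate = bad_tests / verified_tests   # the quantity the red curve plots
print(f"{error_rate:.1%}")
```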