#1
Sep 2002
Austin, TX
3×11×17 Posts
I know GIMPS runs two LL tests on every exponent to ensure accuracy. This leads me to these questions:

Are discrepancies dispersed randomly among the participants? Because GIMPS tests every exponent at least twice, every machine is treated as capable of producing an error (completely at random and without warning). This would explain why there is no prioritization in the list of exponents to double-check.

And/or:

Are discrepancies concentrated in bad machines? If this were the case, PrimeNet could keep a "credit report" on each machine. It would only verify the work of new machines and machines known to give flaky results, plus occasional checkups on healthy machines' results, etc.
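For context on what a "double-check" actually compares: a Lucas-Lehmer test produces a final residue, and GIMPS compares (the low 64 bits of) the residues from two independent runs. Here is a minimal sketch; the function names are illustrative, and real GIMPS clients use FFT-based multiplication rather than Python bignums.

```python
def lucas_lehmer_residue(p):
    """Lucas-Lehmer test for M_p = 2^p - 1 (p an odd prime).

    Returns the final term s_{p-2} mod M_p; M_p is prime iff it is 0.
    """
    m = (1 << p) - 1
    s = 4
    for _ in range(p - 2):
        s = (s * s - 2) % m
    return s


def double_check(p, first_residue):
    """Re-run the test and compare residues, as a double-check does.

    GIMPS compares the low 64 bits of the residues from two independent
    runs; a mismatch means at least one run suffered a hardware or
    software error.
    """
    mask = 0xFFFFFFFFFFFFFFFF
    return (lucas_lehmer_residue(p) & mask) == (first_residue & mask)
```

For example, `lucas_lehmer_residue(13)` returns 0 because 2^13 - 1 = 8191 is prime, while `lucas_lehmer_residue(11)` is nonzero because 2047 = 23 × 89.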
#2
Mar 2005
Internet; Ukraine, Kiev
11×37 Posts
Quote:
1. Machine joins GIMPS.
2. Machine returns a couple of good results (double-checked).
3. Machine becomes trusted, and its results are not double-checked any more.
4a. The power supply (or anything else) fails and the motherboard (and/or CPU/RAM/etc.) gets fried. A new power supply and motherboard are bought, but one of the new components is faulty, so the machine starts to return bad results, which are not double-checked.
4b. The owner overclocks the machine to the point where it becomes unstable. Bad results are returned, which are not double-checked.
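The lifecycle above can be sketched as a simple trust heuristic. Everything here is hypothetical: the record fields and the two-clean-results threshold are invented for illustration and are not PrimeNet's actual scheme.

```python
# Hypothetical sketch of the trust lifecycle quoted above; field names
# and thresholds are invented, not PrimeNet's actual bookkeeping.
from dataclasses import dataclass


@dataclass
class MachineRecord:
    confirmed_good: int = 0   # results later verified by a double-check
    confirmed_bad: int = 0    # results contradicted by a double-check

    @property
    def trusted(self) -> bool:
        # e.g. trust after a couple of clean, verified results
        return self.confirmed_good >= 2 and self.confirmed_bad == 0


def needs_priority_double_check(rec: MachineRecord) -> bool:
    # New or flaky machines get their work verified sooner.
    return not rec.trusted
```

The failure mode in points 4a/4b is exactly what this simple scheme misses: hardware can go bad *after* trust is earned, so any such heuristic would still need occasional spot-checks of "trusted" machines.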
#3
∂²ω=0
Sep 2002
República de California
19·613 Posts
The PrimeNet server does keep track of statistics indicating whether a given participating machine is "good" or "bad", but the data are not used to avoid double-checking. AFAIK the main current use is to gauge likelihood when an alleged new prime is found. If the result (like the most recent one) comes from a machine with a track record of returning good data (as measured by the absence of problem-indicating error codes in the result lines it returns, and by later successful double-checks of its original results), George is more likely to make a tentative announcement of a new prime discovery (subject, of course, to validation). If such a result comes from a known-to-be-flaky machine, or from a previously-unheard-of machine, we tend to wait until things are much further into an independent validation before saying anything about whether the result is likely to hold up.
#4
P90 years forever!
Aug 2002
Yeehaw, FL
19·397 Posts
Quote:
However, after you subtract out those machines, the remaining errors appear randomly distributed - not that I've done a comprehensive statistical analysis.
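The kind of statistical analysis alluded to here could be as simple as a chi-squared goodness-of-fit test: if errors are random, each machine's error count should be roughly proportional to the number of tests it ran. This is an illustrative sketch of that idea, not analysis anyone in the thread actually performed.

```python
def chi_squared_stat(errors, tests):
    """Chi-squared statistic for "errors occur in proportion to tests run".

    errors[i] and tests[i] are per-machine counts. A statistic that is
    large relative to len(errors) - 1 degrees of freedom suggests errors
    cluster on particular machines rather than striking at random.
    """
    total_err = sum(errors)
    total_tests = sum(tests)
    stat = 0.0
    for e, t in zip(errors, tests):
        expected = total_err * t / total_tests  # errors expected if random
        if expected > 0:
            stat += (e - expected) ** 2 / expected
    return stat
```

For example, two machines that each ran 10 tests and each produced 1 error give a statistic of 0.0 (perfectly uniform), while 4 errors on one machine and 0 on the other give 4.0, hinting at a bad machine.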
#5
"Richard B. Woods"
Aug 2002
Wisconsin USA
2²×3×641 Posts
Quote:
For instance:

"Error rate for LL tests" at http://www.mersenneforum.org/showthread.php?t=5311

"Early double-checking to determine error-prone machines?" at http://www.mersenneforum.org/showthread.php?t=1386

"Which exponents should be re-released for first time tests?" at http://www.mersenneforum.org/showthread.php?t=1201
#6
"Richard B. Woods"
Aug 2002
Wisconsin USA
2²·3·641 Posts
Quote:
But the nagging eventually got to me.

"Error rate for LL tests" at http://www.mersenneforum.org/showthread.php?t=1116