In physics experiments we speak of random and systematic errors. Cosmic rays are random, but rounding errors are systematic (rendered random via the random shift). In double-checking, what we seek to eliminate is the repetition of an error. I was thinking of "rounding errors" when I described using different (but equally reliable) hardware as a red herring.

David
[QUOTE=davieddy;126460]In physics experiments we speak of random and systematic
errors. Cosmic rays are random but rounding errors are systematic (rendered random via the random shift). In doublechecking, what we seek to eliminate is repetition of an error. I was thinking of "rounding errors" when I described using different (but equally reliable) hardware as a red herring David[/QUOTE] How big or small is the probability of, e.g., a roundoff error of 0.7? (I think such an error would be mistaken for a roundoff error of 0.3?)
[quote=davieddy;126460]Cosmic rays are random but rounding errors are systematic[/quote]But don't forget that some LL test errors could be the result of cosmic rays! Indeed, as circuitry gets smaller and smaller the chance that a cosmic ray could disrupt a bit gets larger.
[quote]In doublechecking, what we seek to eliminate is repetition of an error.[/quote]... and the error to be eliminated could be either systematic or random.
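To illustrate why a single upset bit matters: a flip anywhere in the run propagates through every subsequent squaring, so the final residue comes out completely different. Here is a minimal sketch using a toy small-exponent Lucas-Lehmer test (plain integer arithmetic, not the FFT arithmetic GIMPS actually uses; the flip position and bit are arbitrary choices for the demo):

```python
def ll_residue(p, flip_at=None, flip_bit=0):
    """Toy Lucas-Lehmer test of M_p = 2^p - 1. Optionally flip one bit
    of the intermediate value at iteration flip_at (a simulated cosmic-ray
    upset). Returns the final residue: 0 iff M_p is prime (absent flips)."""
    M = (1 << p) - 1
    x = 4
    for i in range(p - 2):
        x = (x * x - 2) % M
        if i == flip_at:
            x ^= (1 << flip_bit)  # single-bit upset
    return x

clean = ll_residue(61)                          # M_61 is prime: residue 0
hit = ll_residue(61, flip_at=30, flip_bit=5)    # same test, one flipped bit
print(clean == 0, hit == 0)                     # True False
```

The corrupted run reports a nonzero residue for a known Mersenne prime, i.e. a wrong "composite" verdict, which is exactly the kind of error only a double-check can catch.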
[QUOTE]... and the error to be eliminated could be either systematic or random.[/QUOTE]
Precisely. From the point of view of the project, it does not matter whether the error is systematic or random, once you accept that the software itself is free of errors. And we are almost as certain of that as we can be. One systematic error that can be identified is faulty hardware. George uses heuristics to detect it, but they are not perfect. You can search for old threads on this matter; we discussed it at length in the Data sub-forum about 5 years ago.
Ah, I see. :smile: Thanks all of you!
[QUOTE=Andi47;126462]How big or small is the propability of e.g. a roundoff error of 0.7[/QUOTE]
It depends [almost] entirely on how close the exponent is to the upper limit for the given FFT length, assuming the latter has been set perfectly, which of course is also not always the case. There are heuristics that better quantify these things in the F24 paper, but for purposes of general discussion, suffice it to say that the probability of a catastrophic RO error rises very rapidly as one approaches the FFT cutoff. As a crude ballpark estimate, I would guess that around the cutoff a 1-2% increase in exponent roughly doubles the chance of a fatal ROE.

[QUOTE](I think such an error would be mistaken as a roundoff error of 0.3?)[/QUOTE]
Exactly, which is why one must be very careful to keep the maximum ROE well below the fatal 0.5 level. However, due to the statistical behavior of these things, if one were to have a "silent but deadly" ROE > 0.5 on a given test [except perhaps for very small exponents, where less statistical randomization occurs in doing the FFT], it would be overwhelmingly likely to be accompanied by numerous ROEs at or just below the 0.5 level. In other words, it's very unlikely - though not impossible - to have an LL test with all iterations but one "outlier" having ROE < 0.5. The non-impossibility of this is one more reason to use independent code/hardware/random-shift to ensure the first-time test and DC don't both use the same hardware, software and data.
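The 0.7-reported-as-0.3 aliasing is easy to see in miniature. The carry step rounds each floating-point convolution output to the nearest integer and reports the distance to that integer as the roundoff error; an actual error of 0.7 lands past the 0.5 midpoint, so it rounds to the wrong integer and the reported error is 0.3. A sketch (the coefficient value is an arbitrary example, not from any real run):

```python
def carry_step(fp_value):
    """Round one floating-point convolution output to the nearest integer,
    returning that integer and the apparent roundoff error, as an LL
    program's error check would see it."""
    nearest = round(fp_value)
    return nearest, abs(fp_value - nearest)

true_coeff = 123456                 # exact integer the FFT should produce
computed = true_coeff + 0.7         # actual accumulated error of 0.7

got, reported = carry_step(computed)
print(got == true_coeff)            # False: rounded to the wrong integer
print(round(reported, 6))           # 0.3 - the fatal error masquerades as a safe one
```

The error check sees a harmless-looking 0.3, yet the integer is silently off by one, which is why the maximum ROE must be kept well clear of 0.5.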
[QUOTE=retina;126452]it seems the random-shift can reproduce a bad result if by chance it chooses the same random shift. [/QUOTE]
An exponent is not considered double-checked until a different shift count is selected by prime95. Thus, there is a 1 in 20 million chance that a double-check assignment will result in wasted work (the same shift count as the first LL test is chosen).
My contention is that, as long as a different shift is used, the same computer running the same program has a negligible chance of duplicating an erroneous residue, whatever the source of the error(s).
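For reference, the shift mechanism itself can be sketched in toy form. The idea (in outline only; real clients work on FFT data, not big integers): start from 4·2^s instead of 4, note that squaring doubles the shift modulo p (since 2^p ≡ 1 mod M_p), shift the "-2" term to match, and remove the shift before reporting. Different shifts then exercise different bit patterns throughout the run yet yield identical unshifted residues:

```python
def ll_shifted(p, shift=0):
    """Toy Lucas-Lehmer test of M_p = 2^p - 1 with an initial bit-shift:
    work with y = x * 2^k mod M_p, where squaring doubles k (mod p,
    because 2^p = 1 mod M_p) and the -2 term is shifted to match.
    Returns the unshifted final residue."""
    M = (1 << p) - 1
    k = shift % p
    y = (4 << k) % M
    for _ in range(p - 2):
        k = (2 * k) % p
        y = (y * y - (2 << k)) % M
    # remove the shift: multiply by 2^(p-k), the inverse of 2^k mod M_p
    return (y << ((p - k) % p)) % M

print(ll_shifted(11, 0) == ll_shifted(11, 4) == ll_shifted(11, 9))  # True
print(ll_shifted(61, 0), ll_shifted(61, 17))  # 0 0  (M_61 is prime)
```

An honest error at some iteration perturbs shift-dependent data, so a rerun with a different shift will almost certainly not reproduce it, which is the point of davieddy's contention.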
Correct! If we assume that the person has not fudged the results.
[quote=garo;126605]Correct! If we assume that the person has not fudged the results.[/quote]
To play "devil's advocate" for a minute: I can easily think of how to create a deliberately malicious software "bug" (one which kept track of the shift, as is necessary anyway) that could produce identical wrong residues, regardless of the shift.
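The attack is simple to sketch with the toy shifted LL test above in mind: since the client must track the shift anyway, a tampered client can compute the honest unshifted residue and only then apply a deterministic corruption, making the bad value shift-independent. Everything here is hypothetical illustration (the function names and the 0xDEADBEEF mask are invented for the demo):

```python
def ll_shifted(p, shift=0):
    """Toy shifted Lucas-Lehmer test of M_p = 2^p - 1, returning the
    unshifted final residue (shift removed before reporting)."""
    M = (1 << p) - 1
    k = shift % p
    y = (4 << k) % M
    for _ in range(p - 2):
        k = (2 * k) % p
        y = (y * y - (2 << k)) % M
    return (y << ((p - k) % p)) % M

def tampered_report(p, shift):
    """Hypothetical malicious client: compute the honest unshifted residue,
    then apply a deterministic corruption. Because the corruption happens
    AFTER unshifting, every choice of shift reports the same wrong value."""
    return (ll_shifted(p, shift) ^ 0xDEADBEEF) % ((1 << p) - 1)

print(tampered_report(61, 5) == tampered_report(61, 40))  # True: first test and DC "match"
print(tampered_report(61, 5) == ll_shifted(61, 0))        # False: both are wrong
```

Two runs with different shifts agree on a wrong residue, defeating the shift-based double-check; only an independent program (or hardware error detection upstream) would catch it.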
Are hardware bugs capable of such malice?