#12
"Serge"
Mar 2008
Phi(4,2^7658614+1)/2
9,497 Posts

#13
May 2013
East. Always East.
11010111111₂ Posts
I'm saying that there is always that unknown chance for a single bit error. Overclocking has nothing to do with this.
What I am trying to say is that you can't ever guarantee a mistake will not be made. You can only become reasonably confident, and that takes time.

Suppose you do twelve months of DC's to stability-test your system, and you get zero errors. All you can really say is that your odds of failure in any one month are less than one in twelve, if you consider each month to be statistically significant. In other words, your odds of succeeding for one month are greater than 91.7%. Now, (> 0.917)^12 = (> 35.2%). These are your odds of getting twelve consecutive successes. If I tell you that your odds of having a successful LL are better than a third, are you really inspired with confidence?

"Now wait, I might have gone twelve months without an error, but did I not also go for 30 million seconds? My odds of failing in any particular second are minuscule, so doesn't the interval length matter?" Of course it does. However, this boils down to something very familiar; some people reading this might already know where I'm going. (1 - 1/30,000,000)^30,000,000 ≈ 36.79%. Your odds of not failing an iteration for 30,000,000 seconds are still only better than a third. Note that they could be anywhere above a third, up to and including 100.000%, but the lower bound is all we can say for sure. You can keep taking smaller intervals, but your guaranteed odds of success simply tend to e^-1.

Now, if you go for 24 months without an error, your odds of success per month are > 95.8%. Raise that to the 12th power (you're only doing a one-year test) and you get > 60%. Want to try seconds? You should get > 60.6%. If you want a formula: your odds of a successful year-long test after x years without a failure are only guaranteed to be greater than e^(-1/x). 60.6% after two years; 81.8% after five years; 99.0% after 100 years.

EDIT: I started writing this and was interrupted, and there were some replies in the meantime.
Retina of course makes a good point: the larger tests have more chances of failure per unit of time all by themselves. So really, your chances of passing a one-year test after two years of error-free testing are less than that "greater than 60.6%" bound suggests.
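The figures quoted above are easy to check numerically. A minimal sketch in plain Python (the month counts and the 30-million-second interval are taken straight from the post; nothing else is assumed):

```python
import math

# Zero errors in 12 months: per-month success odds are > 11/12 ≈ 91.7%,
# so twelve consecutive monthly successes are > (11/12)^12 ≈ 35.2%.
print((11 / 12) ** 12)                # ~0.352

# The same argument taken per second over ~30 million seconds:
n = 30_000_000
print((1 - 1 / n) ** n)               # ~0.3679 -- approaching e^-1
print(math.exp(-1))                   # 0.36787944...

# General bound: after x error-free years, a one-year test succeeds
# with probability greater than e^(-1/x).
for x in (2, 5, 100):
    print(x, math.exp(-1 / x))        # ~0.607, ~0.819, ~0.990
```

Shrinking the interval (months, seconds, iterations) changes the bound less and less, since (1 - 1/n)^n converges to e^-1 from below as n grows.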
Last fiddled with by TheMawn on 2014-06-26 at 00:18

#14
"Kieren"
Jul 2011
In My Own Galaxy!
2·3·1,693 Posts
Just to backtrack:
Quote:
Last fiddled with by kladner on 2014-06-26 at 01:24 Reason: add (s)

#15
Jun 2005
USA, IL
193 Posts
I protect my calculations from cosmic rays with a thick layer of dust over all the internal PC components.

#16
Undefined
"The unspeakable one"
Jun 2006
My evil lair
14124₈ Posts

#17
May 2013
East. Always East.
11·157 Posts
What I am laboriously trying to say, though certain individuals persist in misunderstanding me, is this: as a personal example, despite my CPU having as clean a record as possible over a year, I would not trust it with a 100-million-digit LL.

#18
"Mr. Meeseeks"
Jan 2012
California, USA
878₁₆ Posts
Quote:
Maybe I am missing something.

#19
Undefined
"The unspeakable one"
Jun 2006
My evil lair
2²·3²·173 Posts

#20
"Kieren"
Jul 2011
In My Own Galaxy!
27AE₁₆ Posts
Similar Threads

| Thread | Thread Starter | Forum | Replies | Last Post |
| --- | --- | --- | --- | --- |
| 32 cores limitation | gabrieltt | Software | 12 | 2010-07-15 10:26 |
| CPU cores | Unregistered | Information & Answers | 7 | 2009-11-02 08:27 |
| Running on 4 Cores | Unregistered | Information & Answers | 9 | 2008-09-25 00:53 |
| 6 Intel Cores | petrw1 | Hardware | 3 | 2008-09-16 16:33 |
| A program that uses all the CPU-cores | Primix | Hardware | 7 | 2008-09-06 21:09 |