View Poll Results: Faster LL or more error checking?

| Option | Votes | Percentage |
|---|---|---|
| Yes, faster is better. | 16 | 30.77% |
| No, faster LL isn't worth the lost error checking. | 18 | 34.62% |
| Make it a user option. | 17 | 32.69% |
| No opinion, instead reprogram the server to assign me the 48th Mersenne prime. | 1 | 1.92% |

Voters: 52. You may not vote on this poll.

#56

"Forget I exist"
Jul 2009
Dartmouth NS
210D₁₆ Posts

Rhyled is making sense. I think this check should be among the first things tested, since over time it would show the average number of error-free clock cycles the program can sustain on a given machine.

#57

P90 years forever!
Aug 2002
Yeehaw, FL
2057₁₆ Posts

Quote:

The SUM(INPUTS) error check during a torture test lets prime95 notice a problem a little sooner -- you don't have to wait until the end of the test for the residue comparison. Alternatively, one could argue that a 3% more efficient LL test pushes the CPU a tiny bit harder, making prime95 a tiny bit more likely to detect an unstable machine. The best way to handle this is to run both FFTs during the torture test.
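
To make the trade-off concrete, here is a minimal, self-contained sketch of the invariant the SUM(INPUTS) check exploits -- not prime95's actual code, and ignoring the weighted, balanced-digit representation prime95 really uses. A naive convolution stands in for the FFT squaring:

```c
/* Squaring a big number's digit vector is a polynomial squaring
 * (done here naively; prime95 does it with an FFT). Evaluating at
 * x = 1 commutes with squaring, so before carries are propagated,
 * SUM(OUTPUTS) must equal SUM(INPUTS)^2 up to roundoff. A larger
 * mismatch flags a hardware (or roundoff) error. */
#include <math.h>
#include <stdio.h>
#include <string.h>

#define N 8

/* naive convolution square standing in for the FFT squaring */
static void square_digits(const double *in, double *out, int n)
{
    memset(out, 0, sizeof(double) * 2 * n);
    for (int i = 0; i < n; i++)
        for (int j = 0; j < n; j++)
            out[i + j] += in[i] * in[j];
}

int main(void)
{
    double in[N] = {3, 1, 4, 1, 5, 9, 2, 6};   /* toy digit vector */
    double out[2 * N];
    double sum_in = 0.0, sum_out = 0.0;

    square_digits(in, out, N);

    for (int i = 0; i < N; i++)     sum_in  += in[i];
    for (int i = 0; i < 2 * N; i++) sum_out += out[i];

    /* the comparison prime95 can skip for a slightly faster LL iteration */
    if (fabs(sum_out - sum_in * sum_in) > 1e-9 * fabs(sum_out))
        printf("SUM(INPUTS) error check FAILED\n");
    else
        printf("SUM(INPUTS) error check passed: %.0f == %.0f^2\n",
               sum_out, sum_in);
    return 0;
}
```

Skipping that comparison (and the bookkeeping behind it) on every iteration is roughly where the few-percent speedup being polled comes from.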

#58

May 2010
77₈ Posts

Hi. My name is Rhyled, and I'm an overclocker.

Why settle for a 2-3% gain when 35-50% gains are possible on Core i7s? Sadly, I had to settle for only 40%, as I can't quite get a 4GHz Core i7 920 to stay stable 24/7 -- which is how I got drawn into this entire GIMPS thing. First I started running the Prime95 torture test, then I got hooked on the Mersenne prime search.

Now that my concerns about the SUM check risk have been addressed, can I change my vote from "keep checking" to "user optional"?

#59

P90 years forever!
Aug 2002
Yeehaw, FL
17·487 Posts

Quote:

If I overclocked more than that, I would opt to run the slower LL test with more error checking.

#60

"Kyle"
Feb 2005
Somewhere near M52..
2·3³·17 Posts

Given what has been stated above, I am in favor of this with the following caveats:

1. It should be a user option (as reliable machines will be unaffected by this change).
2. Double-checks will always use the extra error checking.

I believe these sentiments are shared by at least a couple of individuals above.

#61

Jun 2010
21₈ Posts

Quote:

I really should have been easier on the poor 920, but around 3.8GHz I decided I was going to get 4.0 even if I smoked it in the process.

I kinda think this should be something the program can decide. Maybe if it detects a hardware error of some type, or a bad result, a setting gets written to the config and from then on it runs the extra check.

Last fiddled with by Colt45ws on 2010-06-17 at 08:16
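
A rough sketch of how that one-way switch could work -- purely illustrative, not prime95's actual behavior; the file name and the ForceErrorCheck key are invented for the example:

```c
/* Once any hardware error is seen, persist a flag so every later run
 * uses the slower, fully-checked LL code. Config file name and key
 * are hypothetical. */
#include <stdio.h>
#include <string.h>

#define CONFIG_FILE "local.ini"            /* hypothetical config file */

static int error_check_forced(void)
{
    char line[128];
    FILE *f = fopen(CONFIG_FILE, "r");
    if (!f) return 0;
    while (fgets(line, sizeof line, f)) {
        if (strncmp(line, "ForceErrorCheck=1", 17) == 0) {
            fclose(f);
            return 1;
        }
    }
    fclose(f);
    return 0;
}

/* call this whenever a SUM(INPUTS) or roundoff error is detected */
static void record_hardware_error(void)
{
    FILE *f = fopen(CONFIG_FILE, "a");
    if (f) {
        fputs("ForceErrorCheck=1\n", f);   /* one-way switch */
        fclose(f);
    }
}

int main(void)
{
    if (!error_check_forced())
        record_hardware_error();           /* pretend we just saw an error */
    printf("extra error checking: %s\n",
           error_check_forced() ? "on" : "off");
    return 0;
}
```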

#62

Account Deleted
"Tim Sorbera"
Aug 2006
San Antonio, TX USA
11×389 Posts

Quote:

People who care about their own credit numbers more than GIMPS's needs (such as getting more DCs done) will avoid double-checks if they know of this, because it will take longer to do the same credit's worth of work.

Perhaps to compensate, tests with error checking enabled through the whole test (to prevent people enabling it only for the last few iterations to earn more credit at a faster speed) should earn slightly more credit. After all, they're slightly more reliable and they take slightly longer, so why not give slightly more credit for it? (The bonus should be set to compensate for the speed penalty as exactly as possible.) That would take credit out of the question of error checking.
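
Worked through with the ~3% figure from earlier in the thread (the real penalty would come from timings), the proposed compensation is just a multiplier on the credit awarded -- hypothetical numbers throughout:

```c
/* If checked iterations run 'penalty' times slower, pay 'penalty'
 * times the credit: credit per hour is then identical by
 * construction, so credit-chasers gain nothing by disabling the
 * check. All figures are made up for illustration. */
#include <stdio.h>

int main(void)
{
    double fast_hours    = 100.0;          /* hypothetical unchecked LL test time */
    double penalty       = 1.03;           /* checked test takes ~3% longer */
    double checked_hours = fast_hours * penalty;

    double fast_credit    = 100.0;                  /* hypothetical base credit */
    double checked_credit = fast_credit * penalty;  /* the proposed bonus */

    printf("fast:    %.4f credit/hour\n", fast_credit / fast_hours);
    printf("checked: %.4f credit/hour\n", checked_credit / checked_hours);
    return 0;
}
```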

#63

"Kyle"
Feb 2005
Somewhere near M52..
2·3³·17 Posts

I think the above is a great idea.

#64

Jul 2006
Calgary
5²·17 Posts

Quote:

#65

Sep 2010
Scandinavia
267₁₆ Posts

Yes, I don't see how this issue could be any more complex than that equation.

#66

Jan 2010
1110010₂ Posts

Quote:

2. Those with unreliable results lose the ability to disable the future check.
3. Some results (e.g. 0.1-5%) from all machines should be double-checked ASAP, to give the server an estimated reliability rating for the machine's results. This way machines do not continue ad infinitum to submit erroneous results without checking.
4. The frequency of early double-checks should decrease the longer a machine delivers reliable results. Start off at 5%, and drop to 0.1% over time? Or even start at 100%, i.e. the first result by a new machine is double-checked ASAP, then the 21st, then the 100th, then the 500th, etc. (see the sketch below).
5. How to determine _if_ to credit wrong results might be the wrong question (unless you plan to ban machines that deliver frequent wrong results...).

[Sorry to reopen this thread]
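
Here is what the decaying schedule in points 3 and 4 might look like as server-side logic -- a sketch only, with made-up breakpoints, not anything the GIMPS server is known to implement:

```c
/* Early-double-check rate starts at 100% for a brand-new machine and
 * decays toward 0.1% as verified-good results accumulate. Thresholds
 * are illustrative. */
#include <stdio.h>

/* fraction of a machine's new results to double-check early, given
 * how many of its past results have already verified as correct */
static double early_dc_rate(int verified_good)
{
    if (verified_good == 0)  return 1.00;   /* first result: always check */
    if (verified_good < 20)  return 0.05;
    if (verified_good < 100) return 0.01;
    return 0.001;                           /* long reliable history */
}

int main(void)
{
    int history[] = {0, 5, 20, 50, 100, 500};
    int n = sizeof history / sizeof history[0];
    for (int i = 0; i < n; i++)
        printf("%4d good results -> spot-check %.1f%% of new ones\n",
               history[i], early_dc_rate(history[i]) * 100.0);
    return 0;
}
```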