mersenneforum.org  

mersenneforum.org > Great Internet Mersenne Prime Search > Software

View Poll Results: Faster LL or more error checking?
Yes, faster is better. 16 30.77%
No, faster LL isn't worth the lost error checking. 18 34.62%
Make it a user option. 17 32.69%
No opinion, instead reprogram the server to assign me the 48th Mersenne prime. 1 1.92%
Voters: 52.

Old 2010-06-02, 03:37   #1
Prime95
P90 years forever!
Aug 2002
Yeehaw, FL

2×5²×11×13 Posts
Faster LL tests, less error checking?

Would GIMPS be better off with LL tests that ran 2% faster but lost the error checking done every iteration?

Version 25 runs an imperfect error check every LL iteration. I've discovered a way to speed up version 26 by 1-2% (closer to 2%) but this error check is no longer cheap and easy.

Prime95 would still do the roundoff error check every 128th iteration.

Comments??
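To make the trade-off concrete, here is a toy Python sketch of the round-off check idea. This is an illustration only, not Prime95's actual code: Prime95 squares numbers with a floating-point FFT, whereas this sketch uses a naive float convolution of small base-2^bits digits. The check is the same in spirit: every convolution output should land very close to an integer, and its distance from the nearest integer is the round-off error being tested (in Prime95, at every 128th iteration).

```python
def square_with_roundoff_check(n, bits=12):
    """Square n via a floating-point convolution of its base-2**bits digits,
    returning (exact square, maximum round-off error observed)."""
    length = max(1, (n.bit_length() + bits - 1) // bits)
    mask = (1 << bits) - 1
    digits = [float((n >> (bits * i)) & mask) for i in range(length)]

    # float convolution: conv[k] = sum of digits[i] * digits[j] with i + j == k
    conv = [0.0] * (2 * length - 1)
    for i in range(length):
        for j in range(length):
            conv[i + j] += digits[i] * digits[j]

    # the round-off check: how far is each output from an exact integer?
    max_roundoff = max(abs(c - round(c)) for c in conv)

    # rebuild the exact integer square from the rounded convolution outputs
    result = sum(round(c) << (bits * i) for i, c in enumerate(conv))
    return result, max_roundoff

sq, err = square_with_roundoff_check(123456789)
assert sq == 123456789 ** 2
assert err < 0.4  # an error approaching 0.5 would mean an untrustworthy rounding
```

In a real FFT the outputs carry genuine floating-point noise, so the measured error is nonzero and grows with the FFT size; an error near 0.5 means the rounded result can no longer be trusted, which is exactly what the periodic check is guarding against.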
Prime95 is online now
Old 2010-06-02, 04:17   #2
axn
Jun 2003

4693₁₀ Posts

Quote:
Originally Posted by Prime95 View Post
Comments??
If the FFT-boundary tests get stricter checking, it may be worth it.
axn is offline
Old 2010-06-02, 04:32   #3
Merfighters
Mar 2010
On front of my laptop

7·17 Posts

1. Yes, faster is better. : This is good for new computers.
2. No, faster LL isn't worth the lost error checking. : This is good for old computers, which have more errors.
3. Make it a user option. : This would be best, so that's what I voted for. How about checking the computer's speed and error rate to select whichever is best?
4. No opinion, instead reprogram the server to assign me the 48th Mersenne prime. : LOL, that's an awesome joke!
Merfighters is offline
Old 2010-06-02, 04:46   #4
only_human
"Gang aft agley"
Sep 2002

2·1,877 Posts

Quote:
Originally Posted by Prime95 View Post
Would GIMPS be better off with LL tests that ran 2% faster but lost the error checking done every iteration?

Version 25 runs an imperfect error check every LL iteration. I've discovered a way to speed up version 26 by 1-2% (closer to 2%) but this error check is no longer cheap and easy.

Prime95 would still do the roundoff error check every 128th iteration.

Comments??
I was wondering if removing the imperfect iteration error check adds flexibility or freedom for more improvements later too.

Also what are the consequences? If errors got caught at the 128th iteration rounding check then it is just a minor delay of error detection for faster throughput -- but if it increases undetected errors that is a different consideration.
only_human is offline
Old 2010-06-02, 06:01   #5
S485122
Sep 2006
Brussels, Belgium

2·5·157 Posts

During ordinary LL tests, be they first-time or double-check, a balance should be struck between speed and accuracy. At the moment a bit more than 4% of results are bad, and a lot of them have no error code. Would the proportion of bad tests with no error code increase if you removed the error checking at each iteration? What is the penalty in speed if you keep the "imperfect" error checking alongside the speedier test? What penalty in speed would you have with a better error check?

Since you have (one of) the fastest FFT routines, is improving speed really necessary if reliability decreases?

In view of the magnitude of the project, I vote for reliability over speed...

I suppose the error checking in the torture test would not be removed, since it is of a different nature (checking computed results against known results)?

Jacob
S485122 is offline
Old 2010-06-02, 07:56   #6
ATH
Einyen
Dec 2003
Denmark

5565₈ Posts

I vote for more speed if there is still roundoff checking every 128th iteration.

Do you have any statistics on how often that error check finds errors? Is that the source of all the "Suspect LL" results?

Any statistics on the server for how often a suspect LL turns out to be correct and how often it's bad?
ATH is offline
Old 2010-06-02, 10:20   #7
ET_
Banned
"Luigi"
Aug 2002
Team Italia

2×2,383 Posts

George, you have always found the right way to optimize "our" code, so I guess you have some ideas on the subject.

Is faster better? It depends on reliability, and you are the one who can evaluate how reliable the faster results would be.

BTW, you said

"Version 25 runs an imperfect error check every LL iteration"

That means that you spotted something needing correction. But then you added:

"I've discovered a way to speed up version 26 by 1-2% (closer to 2%) but this error check is no longer cheap and easy.

Prime95 would still do the roundoff error check every 128th iteration."

Are you going to leave in place only the roundoff check after every 128th iteration? Is that enough to recover a bad result? Only you can tell...

Or, even better, only the stats geeks who can check error codes on the server can tell:

- How many "bad results" turned out correct after a triple check?
- How many "bad results" are affected by the glitch in the check routine?
- How many of them would be trapped by the check on 128th iteration?
- What would be the impact on residual checking every n iterations?

Luigi
ET_ is offline
Old 2010-06-02, 10:38   #8
retina
Undefined
"The unspeakable one"
Jun 2006
My evil lair

5727₁₀ Posts

The poll needs another option.

5. Just do whatever makes the project progress faster.

If a 2% speed-up can, on average, adequately compensate for some extra errors not being detected then just do it.
retina is online now
Old 2010-06-02, 10:46   #9
Mini-Geek
Account Deleted
"Tim Sorbera"
Aug 2006
San Antonio, TX USA

17·251 Posts

Quote:
Originally Posted by ET_ View Post
BTW, you said

"Version 25 runs an imperfect error check every LL iteration"

That means that you spotted something needing correction. But then you added:

"I've discovered a way to speed up version 26 by 1-2% (closer to 2%) but this error check is no longer cheap and easy.

Prime95 would still do the roundoff error check every 128th iteration."

Are you going to leave in place only the roundoff check after every 128th iteration? Is that enough to recover a bad result? Only you can tell...
Or he meant that there was nothing needing correction, but that the error check uses an algorithm that's "imperfect" in the sense that it catches only some errors, not all of them.
Quote:
Originally Posted by retina View Post
The poll needs another option.

5. Just do whatever makes the project progress faster.

If a 2% speed-up can, on average, adequately compensate for some extra errors not being detected then just do it.
Yes, I would vote for this.
Mini-Geek is offline
Old 2010-06-02, 12:52   #10
only_human
"Gang aft agley"
Sep 2002

7252₈ Posts

Quote:
Originally Posted by retina View Post
The poll needs another option.

5. Just do whatever makes the project progress faster.

If a 2% speed-up can, on average, adequately compensate for some extra errors not being detected then just do it.
This is my choice too.

There is also a purely emotional consideration that deserves some thought before moving on: since LL tests take a decent amount of time on a user's machine, it might be a bit of a buzz-kill to dwell on the test not being as valid as possible, even if the overall statistics favored running faster tests that are slightly less accurate.

Here is another thread that discussed LL error checking although it was mostly discussing the rounding test:
http://www.mersenneforum.org/showthr...679#post207679

The gist of the thread is that the nature of the errors and the steps taken with shifted residues make LL double checks very effective and false negative results virtually impossible.

ewmayer opines that ordinary PC hardware errors are more likely than floating-point rounding errors.

My last thought is that if the iteration test is not perfect and all the other tests are adequate, then the iteration test is not necessary.

I now quote jasonp's message in that thread:
Quote:
Originally Posted by jasonp View Post
The nature of the arithmetic used in the LL test is such that if one number out of millions is wrong in a given iteration, that error is propagated to the entire number in the next iteration. So if you even get one error in the course of an LL test, nothing beyond that point will be the same. So your hardware error would have to be in the same place at the same time to also fool a doublecheck run, which is sufficiently unlikely in a weeks-long test that nobody assumes it will happen.

The other danger in a long run like that is actual incorrect rounding, which would affect even perfectly working hardware. We use balanced representation, i.e. represent multiple-precision numbers as an array of positive or negative words. Multiplication results with this scheme are actually centered about zero on average, and the parameters chosen actually allow the possibility of a multiplication result not being able to fit in a 53-bit floating point mantissa. So if enough of the words in a multiple precision number had the same sign, the convolution would fail, even with no rounding error.

To combat this possibility, double check runs compute the Lucas-Lehmer test with the initial residue multiplied by a random power of two. This still allows the original residue to be recovered, but changes all the intermediate results used to compute it, and so a freakish roundoff error failure is not expected to repeat exactly. If instead the double check incurs its own freakish roundoff error, this is also okay; you can keep doing doublechecks with different initial shifts until you get a pair of runs where freakish roundoff error does not occur.

So, bottom line, no archaeology on previous results is needed.
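The shift trick jasonp describes can be sketched in a few lines of Python (a toy illustration, not Prime95's actual code, and assuming only the standard LL recurrence s → s² − 2 mod Mp): start from 4·2^shift instead of 4, and because squaring doubles the shift each iteration, the "-2" must be scaled by the squared shift. Every intermediate residue changes, yet the final zero test and the recoverable residue do not.

```python
def ll_test(p, shift=0):
    """Lucas-Lehmer test of M_p = 2^p - 1, starting from a shifted residue.
    Returns (is_prime, final residue, final shift amount)."""
    M = (1 << p) - 1
    k = shift % p            # current shift; exponents wrap mod p since 2^p == 1 (mod M)
    s = (4 << k) % M         # shifted starting value 4 * 2^k
    for _ in range(p - 2):
        # unshifted rule is s -> s^2 - 2; with s scaled by 2^k, the "-2"
        # must be scaled by the squared shift: subtract 2^(2k+1)
        s = (s * s - pow(2, (2 * k + 1) % p, M)) % M
        k = (2 * k) % p      # squaring doubles the shift (mod p)
    return s == 0, s, k

# the verdict is the same for any shift:
assert ll_test(13, shift=0)[0] is True    # M13 = 8191 is prime
assert ll_test(13, shift=5)[0] is True
assert ll_test(11, shift=0)[0] is False   # M11 = 2047 = 23 * 89
assert ll_test(11, shift=7)[0] is False

# and the original residue is recovered by undoing the final shift:
_, s_shifted, k = ll_test(11, shift=7)
M = (1 << 11) - 1
assert (s_shifted * pow(2, 11 - k, M)) % M == ll_test(11)[1]
</imports>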
only_human is offline
Old 2010-06-02, 13:32   #11
Mini-Geek
Account Deleted
"Tim Sorbera"
Aug 2006
San Antonio, TX USA

17×251 Posts

Quote:
Originally Posted by only_human View Post
There is also the a purely emotional consideration too however that deserves a bit of consideration before moving on; since LL tests take a decent amount of time on a user's machine it might be a bit of a buzz-kill to be dwelling on the test not being as valid as possible even if overall statistics favored running running faster tests that are slightly less accurate.
Hence the "No, faster LL isn't worth the lost error checking." and "Make it a user option." options. If you think it would be bad to have LL tests be that much more error-prone (bad for GIMPS through weaker interest, since an LL would have a higher chance of going wrong through no fault of the user's — or simply because you don't like it), even if it helps the overall speed of the project, vote for one of those.

Last fiddled with by Mini-Geek on 2010-06-02 at 13:34
Mini-Geek is offline
Similar Threads
Thread Thread Starter Forum Replies Last Post
Fast and robust error checking on Proth/Pepin tests R. Gerbicz Number Theory Discussion Group 15 2018-09-01 13:23
Probabilistic primality tests faster than Miller Rabin? mathPuzzles Math 14 2017-03-27 04:00
Round Off Checking and Sum (Inputs) Error Checking Forceman Software 2 2013-01-30 17:32
Early double-checking to determine error-prone machines? GP2 Data 13 2003-11-15 06:59
Error rate for LL tests GP2 Data 5 2003-09-15 23:34


Permission is granted to copy, distribute and/or modify this document under the terms of the GNU Free Documentation License, Version 1.2 or any later version published by the Free Software Foundation.
A copy of the license is included in the FAQ.