[QUOTE=James Heinrich;126336]I just noticed I've been assigned M46000987 for double-checking, but this computer was the same one that did the original LL (finished 26jan2008). Isn't the idea of double-checking not just to run the test again, but specifically to run it on different hardware?[/QUOTE]
The server makes no attempt to make sure a double-check assignment is given to different hardware. The odds of an assignment being given to the same hardware years later are remote (once we have tens of thousands of users). Last night I opened up all exponents above 25 million for double-checking. Prior to that no exponents were available for double-checking (other than the few you had run the first LL test on). I agree with garo, throw that big double-check back and get a smaller one.
[QUOTE=Prime95;126354]The server makes no attempt to make sure a double-check assignment is given to different hardware. The odds of an assignment being given to the same hardware years later are remote (once we have tens of thousands of users).[/QUOTE]
George, would it be terribly difficult to make the server code be able to check the *software* being used for first-time tests and DCs? I suspect my initial release of a Primenet-enabled Mlucas will not have support for the random-power-of-2-shift-in-initial-seed trick, so it would be important there to make sure Mlucas is not used for both tests. It might well be moot, since by the time the wavefront gets far enough ahead to make this a real possibility the code will hopefully support the random shift, but I can make no guarantees at the moment, and would rather be safe than sorry.
Isn't "different hardware" something of a red herring, in that
double precision floating point calculations are (hopefully) identical on different machines?
[QUOTE=ewmayer;126373]George, would it be terribly difficult to make the server code be able to check the *software* being used for first-time tests and DCs?[/QUOTE]
I think the get-assignment PHP code has enough information to detect this scenario. I know that it currently does not make the check, relying on raw probabilities to limit the CPU waste. I'll add it to my wish list.
[QUOTE=Prime95;126354]I agree with garo, throw that big double-check back and get a smaller one.[/QUOTE]I did. I threw it back in the pot this morning and it looks like I've been assigned M23000009 :smile:
[quote=davieddy;126395]Isn't "different hardware" something of a red herring, in that
double precision floating point calculations are (hopefully) identical on different machines?[/quote]If such hope were always justified, there'd be no need for doublechecks!
[quote=cheesehead;126407]If such hope were always justified, there'd be no need for doublechecks![/quote]
I thought the main reason doublechecks were done was to ensure that the first-pass test was not done on an unstable machine producing bad results, rather than to test it on a different-architecture machine as everybody seems to be implying here?
My original question regarding not doublechecking with the same machine was about the same physical machine (on the very small chance that it would produce the same incorrect result twice). Ideally first-time and double-check tests would be run on something completely different (different architecture, different OS, AND different software). Such extreme variation isn't needed, but my point was that it would seem a better idea to have someone else check my work, rather than me.
I don't know what the primary reason for the double check is. But if one of the reasons is to catch a software bug, then it seems the random shift can reproduce a bad result if by chance it chooses the same random shift. This would not affect the finding of a prime, since those tend to get checked pretty thoroughly, but it could theoretically miss a prime. Maybe the odds of it choosing the same shift are too tiny to be concerned about? Certainly the same machine, same software [b]and[/b] same shift would be pointless.
[edit]I guess a software bug could be interchanged with a CPU bug; either way the reasoning above still works.[/edit]
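To make the shift trick being discussed concrete, here is a minimal sketch of a Lucas-Lehmer test with a power-of-2 shift applied to the initial seed. This is an illustration of the general technique, not GIMPS's actual implementation: the seed 4 is rotated left by `shift` bits mod M_p = 2^p - 1, the shift doubles with each squaring (mod p, since 2^p ≡ 1 mod M_p), the subtracted 2 must be shifted by the same amount, and the final residue is recovered by undoing the accumulated shift. Any two shift choices yield the same canonical residue, which is what lets a shifted run serve as an independent check of the arithmetic.

```python
def ll_residue(p, shift=0):
    """Lucas-Lehmer residue of M_p = 2^p - 1, computed with an
    optional power-of-2 shift on the seed (illustrative sketch)."""
    M = (1 << p) - 1
    s = (4 << shift) % M               # seed 4, rotated left by `shift` bits
    for _ in range(p - 2):
        s = (s * s) % M                # squaring the shifted value...
        shift = (2 * shift) % p        # ...doubles the shift (mod p)
        s = (s - (2 << shift)) % M     # the "-2" term must be shifted too
    # undo the accumulated shift to recover the canonical residue
    return (s * pow(2, (p - shift) % p, M)) % M

# M_7 = 127 is prime: residue is 0 regardless of the shift chosen
print(ll_residue(7), ll_residue(7, 3))        # 0 0
# M_11 = 2047 is composite: nonzero residue, identical for any shift
print(ll_residue(11, 0) == ll_residue(11, 5))  # True
```

The point raised above then follows directly: if the double-check happens to pick the same `shift` as the first test, the two runs perform bit-for-bit identical arithmetic, so a systematic software or CPU bug would corrupt both residues in the same way.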
[QUOTE=retina;126452]I don't know what the primary reason for the double check is.[/QUOTE]
Double-checking is mainly to detect hardware instabilities due to e.g. overclocking, bad memory chips and such, which would result in bad results occasionally.
I think the motive behind getting results from different users is not so much that two tests with different shifts on the same machine could produce the same incorrect residue - the chances of that are extremely small - but that the user may decide to pad their stats by sending in a second fake result, given that they know the residue from the first time they did the test.
The primary reason for doublechecks is simply to guard against hardware errors. And we get a lot of those. About 1.5% of tests returned to Primenet are bad. Now that may be due to excessive overclocking, bad hardware, dust in the fan, a cosmic ray, or something else. But the fact is that we need a doublecheck to make sure a prime does not get missed by a faulty test.