Alright! That's enough now. I'm locking this thread.
EVERYONE, please keep your comments civil and the discussion on-topic. TTn, clearly it seems that public opinion is against your comments directed at George. So please do not use this forum as a place to diss George and Prime95 anymore. Ernst and others, name-calling has no place on this board. Please desist from making such inflammatory posts whatever the provocation. |
M40? Status thread: get your news here
Since the previous thread suffered lockage I thought I'd open this new thread for people to discuss the status of M40 (or not).
Recap: On June 1st a prime was reported to the primenet server. George revealed that this one, if verified, would be the largest ever found but not a 10 million digit prime. Speculation is rife as to what that exponent is. Most guess it is in the 16-19M range. Ernst Mayer starts his official verification on Sun hardware using Mlucas and Guillermo Valor starts his on a Mac using Glucas (I think). George starts an "unofficial" verification on his overclocked P4 using prime95. This verification does not count as it uses the same hardware and software as the discovering computer. On June 10th George reports that his verification run came out negative. He feels his tinkering with the P-1 code the day before may have something to do with it. But maybe not. More bad news soon arrives as it is revealed that the original test had 5 errors, which gives us a 50% chance that the original test was wrong. George reports that his residues match Guillermo's up to the 14 million iteration mark, and he restarts his run at 14 million in case his P-1 debugging inadvertently caused an error in his verification run. |
My rerun matches at iteration 16 million. This eliminates an OS error when I was debugging an out-of-memory bug as a cause for the not prime result.
Not looking good at this point :( |
Thanks for this thread "Voice of Reason", I hear and obey. :D
I think it's going to test out fine. I'd even be willing to wait another couple of weeks to find out. :D EDIT: George posted before me. Eek... Ok, but I still think it's going to be ok. :D |
George, when will the double checks be done so we can know for sure?
|
Guillermo's should finish tomorrow.
|
Dang, if this has gone to 16M then my suspected candidate isn't.
|
I think it may be around 14M, near M39.
(like 2976221 & 3021377) |
Since George's rerun from 14M and Guillermo's run have both now completed with matching results, we no longer have to wait for my run (which I halted last night, as soon as it was clear that the above 2 runs matched at 16M, eliminating George's out-of-core debug work as a candidate for corruption of his run) to know that the number is not in fact prime.
FYI, the reason Guillermo's run would not have been sufficient in case of primality has nothing to do with his code (which is excellent), it's just that because of my work with Richard Crandall I'm better-known in the number theory community, which is important for establishing the result's credibility in that community. Since we'll be reporting no result (or perhaps just the negative one, to people who were notified about the initial possible prime discovery), that PR-oriented extra mile isn't needed. |
It is at least 16M according to George because he was comparing residues at that point, plus it was still going to take a day to complete the unofficial DC on a P4-2135. Tis exciting though regardless of the result.
EDIT: Maybe "was exciting" is more reasonable. :( |
So have the tests already been finished?
|
Can we get a ballpark figure on where this candidate was so my poll won't be utterly useless? :D
Bummer. Glad I didn't go around telling too many people. |
Well, if it turns out not to be prime (as seems likely) we will just have to work extra hard to find another prime.
|
[quote="jeff8765"]So have the tests already been finished?[/quote]
Yes, George's test (using Prime95 on his ~2.5GHz P4) and Guillermo's (using Glucas in multithreaded mode on a dual-CPU 1.1GHz Itanium, which was thus running at about the same speed as George's Prime95 run) have both completed, with matching nonzero Res64s. I'll leave any announcement of the precise exponent to George, but will say that it was slightly less than 17M. |
Without being too cute (OK, well perhaps somewhat cute), the candidate was very nearly the same size as the 24th Fermat number, which latter number I have some personal experience with.
It's up to George to decide whether he wants to divulge the actual exponent. |
I don't think the exact exponent should be announced, as that might encourage fame seekers to falsify reports, and the poor sod who thought he had a prime might not want to be identified. I know if I had what I thought was a prime find that turned out to be composite, I would want it swept under the rug as quickly as possible. :D
|
It is official - not prime. Both Guillermo's Glucas run and my prime95 run return a matching non-zero residue.
We may never know what caused prime95 to generate a false positive, but I am studying the code for ways in which a memory corruption could cause this. Something good will come from this sorry ending. I am confident this incident will have little negative impact on GIMPS. While the episode was probably an unhappy roller-coaster ride for one individual, the false positive problem is far less damaging to GIMPS than the version 17 shift bug disaster. This incident illustrates why most other distributed projects keep any client finds secret (even from the discoverer) until verified. If we had a similar policy this could have been swept under the rug and no one would ever have known. I kind of like our policy though. It lets everyone in on the ups and downs of the project. Thanks to Guillermo and Ernst for dedicating time to the verification run. |
M40, what went wrong?
Warning, technical talk follows:
It was once thought that a false positive report "couldn't happen". So what went wrong? So far I've come up with two possibilities. 1) The FFT data is zeroed AND the code does not go into the -2,2,2,2 loop. I don't know how the data gets zeroed, but I've seen it happen. It has happened a lot less often since code was added to reject any save file with all-zero data. Failing to go into the -2,2,2,2 loop can happen with the corruption of a single local variable. I've just added code that makes sure this local variable is always in the range 0 <= variable < exponent. 2) This case results from the way my C compiler treats floating point NaN. NaN stands for not a number. If NaN is converted to an integer, the integer is zero. So if the FFT data is all NaNs, prime95 will report a prime. Prime95 checks for NaNs every iteration, but if every FFT data value becomes NaN after the inverse FFT of the last LL iteration, then we get a false positive. Furthermore, corrupting a single value (the initial carry input to the rounding and carry propagation code) could set every FFT data value to NaN. I've fixed the code to make sure there are no NaNs in the final is-it-a-prime check. Which of the above is more likely? I don't know. The first seems to require two or more pieces of memory to be corrupted, but it leads to a steady state, so the errors can occur at any point during the LL test. The second case requires only one failure, but at a very specific point in time. |
If any number is a prime, the LL will end with a 0. Because this is a false positive, therefore the last number would have been a 0. If it zeroed, then it would have been a 2 at the very end, and not a 0. How could zeroing cause a false positive? (unless it was really really bad luck)
|
In case of a zero residue in the low-order 64 bits, does the program check the other n million bits as well? Granted, the chance of a non-zero residue having the 64 low-order bits all zero is 1 in 2^64, about 1 in 1.84x10^19, so we wouldn't expect it to happen for a long, long time, but it isn't impossible!
|
[quote="asdf"]If any number is a prime, the LL will end with a 0. Because this is a false positive, therefore the last number would have been a 0. If it zeroed, then it would have been a 2 at the very end, and not a 0. How could zeroing cause a false positive? (unless it was really really bad luck)[/quote]
The first case causes the subtract-two to not take place. Thus, if the FFT data was zeroed (and that key variable was trashed) then you would get: 0^2 = 0, 0^2 = 0, 0^2 = 0, ... The second case assumes the FFT data is zeroed after the last LL iteration (including the subtract-two), but before the is-this-a-prime-result code is executed. |
[quote="philmoore"]In case of a zero residue in the low-order 64 bits, does the program check the other n million bits as well? Granted, the chance of a non-zero residue having the 64 low-order bits all zero is 1 in 2^64, about 1 in 1.84x10^19, so we wouldn't expect it to happen for a long, long time, but it isn't impossible![/quote]
Yes, every FFT word is checked. |
[quote="ewmayer"]It's up to George to decide whether he wants to divulge the actual exponent.[/quote]
Maybe we need a new poll???? Or maybe we could announce it if everyone promises not to go back to an old cleared results report to see who submitted the result. I don't really want to add to his frustrating experience! |
Would it be possible for the result to be checked every million iterations or so, and if the result is zero then throw out the test? Because if the result of any iteration is zero and the client is working, wouldn't all of the following results be -2 anyway? That way if the client zeroed and was no longer subtracting 2 it would be caught after a million iterations. However, I do not know how much time would be wasted checking for a zero result.
|
It really doesn't matter to me anyway what the actual exponent is. It would be nice to have a general area of the exponent though. If you told us down to the nearest ten thousand or hundred thousand it would be impossible to find out who the person was that submitted it.
|
I think it's up to the one who did it.
|
To whomever did the test,
Could you rerun the test to see whether it's a bug with your computer? |
Re: M40, what went wrong?
[quote="Prime95"]2) This case results from the way my C compiler treats floating point NaN. NaN stands for not a number. If NaN is converted to an integer, the integer is zero.[/quote]
Is it prime95 that does the integer conversion, or the compiler? If the former, why not just do the compare-with-zero in floating-point form, e.g. double arraydata[n]; ... if(arraydata[i] == 0.0) ... [quote]So if the FFT data is all NaNs, prime95 will report a prime. Prime95 checks for NaNs every iteration, but if every FFT data value becomes NaN after the inverse FFT of the last LL iteration, then we get a false positive.[/quote] It seems EXTREMELY unlikely to me that a run would have valid (non-NaN) data every iteration, then go awry on the very last step, the rounding-and-carry-propagation following the final IFFT. Perhaps something slipped through the on-the-fly NaN checking. Or perhaps there were in fact no NaNs at all in the run in question, but corruption of the shift count related to the subtract-2 step (or some other datum used in the carry step) caused the residue to get zeroed in the carry step without triggering any roundoff warnings. Actually, since prime95 only does RO checking every so often (I believe every 100th iteration or so), it's possible there may in fact have been a suspiciously large RO error in the crucial carry step that got missed, isn't it? Another related scenario would be that the inverse-base or inverse-DWT value one multiplies by during the carry step got corrupted and became very small (or even zero). That would cause all the IFFT data to effectively get divided by some huge number (but no actual NaNs would be involved), causing the result to get rounded to zero without necessarily having any large fractional errors. In that scenario, the subtract-shifted-two might still be happening properly, but the resulting residue digit would promptly get divided by some huge number and the result rounded to zero. But if that happened on all but the final few hundred or thousand iterations, one would expect to see a zero residue vector get written to a savefile and detected that way.
[quote]Furthermore, corrupting a single value (the initial carry input to the rounding and carry propagation code) could set every FFT data value to NaN. I've fixed the code to make sure there are no NaNs in the final is-it-a-prime check.[/quote] Again, if this happened at any point but the last iteration of the test, wouldn't your per-iteration NaN check catch it? Bottom line: we'll never be able to guard against every possible type of hardware (or even software, although we hope we have more control over the latter) error. A reasonable way to proceed next time the server reports a possible prime is to first get the user's logfile (or check the number of errors reported to the server during the run, assuming you start collecting such data as you said you intended to do), then rerun the final iteration cycle of the test from the user's savefile, which you say will no longer get deleted in the upcoming patched version of the code. If that savefile shows valid data (not all zero, and with a valid checksum) and the rerun indicates primality, then start the formal independent-software verification. |
[quote="jeff8765"]It really doesn't matter to me anyway what the actual exponent is. It would be nice to have a general area of the exponent though. If you told us down to the nearest ten thousand or hundred thousand it would be impossible to find out who the person was that submitted it.[/quote]
[quote="ewmayer"]the candidate was very nearly the same size as the 24th Fermat number[/quote] That gives the exponent to within 100K. Good enough? p.s.: F24 = 2^(2^24) + 1 = 2^16777216 + 1 . |
[quote="jocelynl"]To whomever did the test,
Could you rerun the test to see whether it's a bug with your computer?[/quote] Two independent re-runs (one using a different program on non-x86 hardware) have been done, with matching results. That confirms that something went wrong with the original run, which George suspected as soon as he had a look at the user's logfile a few days ago (right after his initial re-test had finished, indicating non-primality) and saw multiple checksum errors reported during the course of the user's run. Most likely a hardware problem with the computer, though we'll probably never know precisely what happened. |
We don't have any technical details. It would be nice to know what type of hardware was used, and whether or not it was overclocked.
|
[quote="jocelynl"]We don't have any technical details. It would be nice to know what type of hardware was used, and whether or not it was overclocked.[/quote]
Guillermo's run was using his Glucas code in multithreaded mode on a dual-CPU 1.1GHz Itanium - not overclocked. Also, the fact that two different programs running on different hardware gave matching NONZERO residues makes the odds of both these runs being incorrect minuscule. Also, at the time the above 2 runs completed, I had a third run underway using my Mlucas code on a single 1GHz Alpha ev68 processor. That run had given interim (every 1M iterations) Res64s that agreed with George's and Guillermo's up to 9M, at which point I killed the run, since I was satisfied that the number in question was not prime and didn't want to burn another 8 or 9 CPU-days on it. |
[quote="jeff8765"]It really doesn't matter to me anyway what the actual exponent is. It would be nice to have a general area of the exponent though. If you told us down to the nearest ten thousand or hundred thousand it would be impossible to find out who the person was that submitted it.[/quote]
Nope, not impossible. Especially given that we know what day it was submitted. |
In that case it probably isn't a good idea for us to know it any more accurately than we already do without the consent of the tester. Btw, thanks ewmayer.
|
so what happens now since M40 was bogus?
So what happens next now that we have found that M40 was not prime? What changes are going to happen? Was this a serious flaw? How could it have happened? Is Prime95 still reliable? I am curious as to what happens next. Thanks for the responses. :)
william |
I do not really know, but I think that it was just a freak accident and that there is no inherent flaw in Prime95. I think that we will probably just continue as we were before M40 was reported.
|
[quote="jocelynl"]We don't have any technical details. It would be nice to know what type of hardware was used, and whether or not it was overclocked.[/quote]
P4 1.6 GHz Willamette, not overclocked |
Ah, v17. Lost a few tests, but hey, I'm still here. Thanks for the ride, it's bound to get only better in the next 10 years. ;) ;)
I only win at Roulette. :D :D :D |
We keep looking for M40. :D :D :D
|
:oops: :oops: prime 40 must be getting close, will it be this month???
|
I think it is obvious that [url=http://www.mersenneforum.org/viewtopic.php?p=2459&highlight=#2459]this[/url] is to blame for the whole "M40" situation. It's obvious the universe was having a hard time coping with this paradox, and it snapped in a very strange way.
I beg you Cheesehead, don't push things this far again or who knows what could happen. :shock: |
Re: M40, what went wrong?
[quote="ewmayer"]Is it prime95 that does the integer conversion, or the compiler? If the former, why not just do the compare-with-zero in floating-point form.[/quote]
It was the C compiler. I've written a very easy assembly routine to check for NaN and infinity. This way I can be sure it will work on Linux, FreeBSD, OS/2, etc. In 23.5 prime95 will check for NaN and infinity before checking for zero. |
Re: M40, what went wrong?
[quote="ewmayer"]It seems EXTREMELY unlikely to me that a run would have valid (non-NaN) data every iteration, then go awry on the very last step, the rounding-and-carry-propagation following the final IFFT. Perhaps something slipped through the on-the-fly NaN checking. Or perhaps there were in fact no NaNs at all in the run in question, but the shift count related to the subtract-2 step (or some other datum used in the carry step) getting corrupted caused the residue to get zeroed in the carry step without triggering any roundoff warnings. Actually, since prime95 only does RO checking every so often (I believe every 100th iteration or so), it's possible there may in fact have been a suspiciously large RO error in the crucial carry step that got missed isn't it?[/quote]
Several facts to consider: Every iteration prime95 sums all the input FFT values and all the output FFT values. If sum_inputs^2 != sum_outputs, then you get a SUM(INPUTS) != SUM(OUTPUTS) error. The output sum is also checked for NaN or infinity. If the sum is one of these values, then you get an ILLEGAL SUMOUT error. If carry-propagation & round-off generates NaNs, that should be picked up on the next LL iteration. The last 50 iterations of every LL test do the extra round-off error checking. My records show 6 LL residue results of 0000000000000002. Thus, there obviously is some way a hardware error can zero the FFT data. I just don't see an easy way for this to happen. Any ideas? Would a check each iteration for zero or two be a good idea? |
[quote]Would a check each iteration for zero or two be a good idea?[/quote]
Depending on how much time that costs. Maybe check it once every million iterations, or even better, prior to writing a new save file. If the residue at that point is 0 or 2, you can go back to the last save file. |
Is it possible for an LL test running accurately to go to 0, 1, or 2 before completion?
|
[quote="asdf"]Is it possible for an LL test running accurately to go to 0, 1, or 2 before completion?[/quote]
No. And a check every iteration would be real cheap. |
One other thought: isn't it possible for a malicious user to deliberately report a false positive?
Of course I don't say this was the case here, but should we not consider this possibility for the future? |
Re: so what happens now since M40 was bogus?
[quote="wfgarnett3"]So what happens next now that we have found that M40 was not prime?[/quote]
There [b]wasn't any[/b] M40. Yet. :) (M40 would designate the 40th known Mersenne prime, and for a while we thought we'd found it, but now we know that we haven't yet found the 40th Mersenne prime.) |
I know this is highly unlikely but is it possible that there was some random error and the zero just happened as a consequence?
|
[quote="garo"]I know this is highly unlikely but is it possible that there was some random error and the zero just happened as a consequence?[/quote]
I have an article from IBM Systems Journal talking about soft errors caused by cosmic radiation. They did tests in airplanes and underground, plus looked at the effect of what happens to the hardware when hit by a particle, with regard to the cone-shaped damage to the media. Let me see if I can find it: [url=http://domino.research.ibm.com/tchjr/journalindex.nsf/a3807c5b4823c53f85256561006324be/922f6bbd8495db1485256bfa0067fc4e?OpenDocument]Link to Article Summary[/url] The universe has a funny sense of humor. Bottom line, if you can test for this error condition some or all of the time on the cheap, then you might as well do it. The fact that you were able to prove it was a false positive by checking the original software/hardware against different code and hardware confirms that it's not the software. Entropy wins. :) Without the save files, is it even possible to recreate this? I think the idea of saving the last save file when a possible prime is found is a great idea. How about the option of saving more of them, like 5 instead of 2, or is that overkill because of the independent verification of all possibles? |
http://arstechnica.infopop.net/OpenTopic/page?q=Y&a=tpc&s=50009562&f=77909774&m=9090973925&p=1
|
(* In small apologetic voice *) Okay, trif, I am truly humbled by the effect my faux pas has had. Never again will I bracket an unsound claim of correctness of a personal assertion between dancing bananas -- it just ain't nacheral.
|
[quote="garo"]I know this is highly unlikely but is it possible that there was some random error and the zero just happened as a consequence?[/quote]
And even more unlikely, was the right exponent double-checked? It's easy to make a typo when you're so excited ;) |
[b]George:[/b]
May I for one please thank you heartily for your approach to openness in this whole little debacle, as well as for the project as a whole. As you said, not all DC projects have this aspect, and even though it has turned out not to be M40 this time, I really enjoyed the frisson of excitement when the returned result in question was spotted and the subsequent rollercoaster ride and suspense through the checking processes. Thanks again for your responsive involvement with us, the users. |
There is a thread called "M40, what went wrong?" but I would have named it "M40, what went right?" because the level of communication and involvement offered to us is plain exciting! Plus, we are taking the lessons learned from this and using them to fix the problem...
It ain't a mistake unless it happens twice... When it happens once it is a learning experience... :) With most every other project I have ever worked with I have felt like a mushroom... Kept in the dark and fed crap! :shock: :D Not so with GIMPS! |
Re: M40, what went wrong?
[quote="Prime95"]Every iteration prime95 sums all the input FFT values and all the output FFT values. If sum_inputs^2 != sum_outputs, then you get a SUM(INPUTS) != SUM(OUTPUTS) error. The output sum is also checked for NaN or infinity. If the sum is one of these values, then you get an ILLEGAL SUMOUT error. If carry-propagation & round-off generates NaNs, that should be picked up on the next LL iteration.[/quote]
Yes, but does the sum(outputs) get done on the FFT outputs PRIOR to rounding and carry propagation? Then, if some multiplier used in the carry step got zeroed, it could zero all the FFT outputs without triggering either the above checksum error or a roundoff error, couldn't it? |
Re: M40, what went wrong?
[quote="ewmayer"]Yes, but does the sum(outputs) get done on the FFT outputs PRIOR to rounding and carry propagation? Then, if some multiplier used in the carry step got zeroed, it could zero all the FFT outputs without triggering either the above checksum error or a roundoff error, couldn't it?[/quote]
Yes, it is checked prior to rounding and carry propagation. For the carry step to zero all the FFT data, ALL the multipliers would have to be zeroed. Actually, as you know, there really are two sets of multipliers, to greatly reduce memory consumption. So you'd need to zero only 512 values to zero the entire FFT array. I'll add a quick check for zeroed FFT data after the rounding and carry propagation step. |
Re: M40, what went wrong?
[quote="Prime95"]I'll add a quick check for zeroed FFT data after the rounding and carry propagation step.[/quote]
Actually, assuming it's not in the first few dozen iterations, any appreciable fraction of zeros in the residue array should trigger some kind of warning. But it sounds like any kind of check along these lines would help cover data corruption the FFT checksum may have missed. |
Re: M40, what went wrong?
[quote="ewmayer"]Actually, assuming it's not in the first few dozen iterations, any appreciable fraction of zeros in the residue array should trigger some kind of warning.[/quote]
You read my mind! The (pseudo)code actually reads: if (iteration > 50 && iteration < p-2 && 50 consecutive fft values == 0.0) then { print error message, resume from last save file } Assuming 20 bits per double, the chance of 50 consecutive zeroes is 1 in 2^1000. That should be good enough! But just in case it isn't, if the same iteration has the problem twice in succession, then prime95 will accept the 50 consecutive zeroes. |
Re: M40, what went wrong?
[quote="Prime95"]Assuming 20 bits per double, the chance of 50 consecutive zeroes is 1 in 2^1000. That should be good enough! But just in case it isn't, if the same iteration has the problem twice in succession, then prime95 will accept the 50 consecutive zeroes.[/quote]
OK, perhaps I'm being too paranoid, but I think you should also just check for a suspicious total NUMBER of zeros in the vector, irrespective of whether they occur in a contiguous block of data. If your average base is (say) 2^20, on average we would expect just one zero digit in a length-2^20 residue vector. Thus, even fifty TOTAL zeros would be highly suspect. Try this: insert a snippet of code to count the maximum total number of zeros encountered on any iteration in a single LL test, and do a DC to get an idea of what the actual numbers look like. (Or you could just run a few 100K iterations.) |
Re: M40, what went wrong?
[quote="ewmayer"]OK, perhaps I'm being too paranoid, but I think you should also just check for a suspicious total NUMBER of zeros in the vector[/quote]
A fine idea, but how do we do that quickly? I chose my test because it looks at just one double before the && operator skips the remaining comparison operations. The fastest way to implement your idea is in the rounding and carry propagation code. And if you do it there, you won't catch the data values getting incorrectly zeroed as they are written to memory. |
Re: M40, what went wrong?
[quote="Prime95"][quote="ewmayer"]OK, perhaps I'm being too paranoid, but I think you should also just check for a suspicious total NUMBER of zeros in the vector[/quote]
A fine idea, but how do we do that quickly? I chose my test because it looks at just one double before the && operator skips the remaining comparison operations. The fastest way to implement your idea is in the rounding and carry propagation code. And if you do it there, you won't catch the data values getting incorrectly zeroed as they are written to memory.[/quote] OK, then I vote you check the small arrays of multipliers used in the carry step - all should be nonzero, and you could also implement some kind of checksum here (and one could do similar for the FFT data, although I don't know how big those tables are in your implementation.) Come to think of it, neither of these would need to be done every iteration - it just needs to be done at least once between every savefile write, to prevent good data from being overwritten by bad. |
[quote]Come to think of it, neither of these would need to be done every iteration - it just needs to be done at least once between every savefile write, to prevent good data from being overwritten by bad.[/quote]
Isn't that a waste of computer time if it fails between maybe 6 hours of non-saving? It should be done at intervals close together, and the data at the time should be stored in memory so it won't need to be saved to the disk. |
[quote="ewmayer"]Come to think of it, neither of these would need to be done every iteration - it just needs to be done at least once between every savefile write, to prevent good data from being overwritten by bad.[/quote]
It should be done just prior to each savefile write, of course. Once one checks the data and finds it okay, there's no point in taking a chance of data corruption by performing more calculations before writing the savefile. - - - - - [quote="asdf"]Isn't that a waste of computer time if it fails between maybe 6 hours of non-saving? It should be done at intervals close together[/quote] No. If 6 hours of non-saving are considered too long, then the user should have set the savefile write interval to less than 6 hours. Again, what's the point of [b]not[/b] writing a savefile just after the data is checked and found to be okay? You don't want to take a chance of corrupting it before it's saved, do you? [quote="asdf"]the data at the time should be stored in memory so it won't need to be saved to the disk.[/quote] The point of writing a savefile to disk is to record the data in a place less subject to corruption or loss than in volatile memory. |
[quote="asdf"][quote]Come to think of it, neither of these would need to be done every iteration - it just needs to be done at least once between every savefile write, to prevent good data from being overwritten by bad.[/quote]
Isn't that a waste of computer time if it fails between maybe 6 hours of non-saving? It should be done at intervals close together, and the data at the time should be stored in memory so it won't need to be saved to the disk.[/quote] I expected the every-iteration check to take much more time. If it's on the order of seconds per test you might as well do it every iteration. If it had taken 30 minutes per test to do the check every iteration, it would have slowed down GIMPS' overall throughput (although not by that much). In that case, losing 3 hours on just one PC (the error can happen anytime between savefiles) would have cost much less. |
False M40 exponent?
What was the exponent of the false positive m40?
I would like to hear a good reason why we should not be able to know it. If it is just another composite, then it shouldn't matter.... Unless there is some type of coverup due to various reasons, i.e. some exponents are more likely or prone to hacker error etc. :rolleyes: :? |
I think it is to protect the privacy of the person who returned the faulty result. If we know the exponent, we can look at the old logs and figure out who returned it.
|
If you are satisfied with knowing the general area the exponent is in, Ernst Mayer did announce that it's within 100K of 16777216.
See [url=http://www.mersenneforum.org/viewtopic.php?t=687]this thread[/url] for the relevant discussion. |
I don't buy the privacy thing, that doesn't make any sense.....
As if, once we know who it is, the world will somehow end. Nahhh.. thanks anyway Garo! |
I smell a scam!
There really is no motivation, including frustration, for a person not to want to announce the exponent. There is a motivation, however, for someone with a backdoor to the program not to want anyone trying to see who submitted it. Paranoid, maybe. :evil: |
This message is for the user, that found the false positive M40.
You could create a new username here and explain that motivation. |
I have got this!
Mersenne PrimeNet Server 4.0 (Build 4.0.031)
Assigned Exponents Report
31 May 2003 01:00 (May 30 2003 6:00PM Pacific)

(Just one day before the false M40 was reported!) Does anyone want some useful information? (It's too large (7.1MB) to be sent by e-mail.) |
Hmmm, why would you just happen to have that?
I am not familiar with it, so maybe others are. Even if you narrow down the possible members, it still doesn't provide motivation and is not deterministic. Although supposedly he/she is a long-time member (5 years). All things being equal, the simplest explanation tends to be the right one. :D Getting to the bottom of things........... |
[quote="TTn"]Hmmm Why would you just happen to have that?
I am not familiar with it so maybe others are. [/quote] I certainly am familiar with it, as I gather this every 6 to 8 weeks or so. Producing stats is all about having information :D |
Garo, I would like to remind you of something you wrote just a fortnight ago, and here is the link to the post. [url]http://www.mersenneforum.org/viewtopic.php?p=5754&highlight=#5754[/url]
[quote="garo"]TTn, clearly it seems that public opinion is against your comments directed at George. So please do not use this forum as a place to diss George and Prime95 anymore. [/quote] I bring this up because TTn is clearly not heeding your words. I would go so far as to suggest that the 15k forum be removed if this behavior keeps up. I, and I'm sure many others, am getting quite sick of TTn. |
[quote="eepiccolo"]I'm sure many others, am getting quite sick of TTn.[/quote]
I am. GIMPS has lots and lots of credibility. We discuss things openly and logically, warts and all, without accusations. This is a thoroughly professional (without pay) project and, therefore, I agree with eepiccolo's sentiments. |
Well,
I'm inclined to be a bit tolerant for the moment. He/she is free to express his/her opinion as long as it is not done in an obviously insulting way. I think that forum members are smart enough to decide the merit of every claim and do not need my hand-holding :( |
Reasons for not identifying the exponent (and thus the user):
1. It discourages people from returning false results as a way of gaining some fame (or infamy).
2. It prevents nutballs from harassing the user.

I am a little tired of George and GIMPS being accused of fraud, scams, or other crap. But it's useful for every forum to have its own pet whacko. |
Re: M40, what went wrong?
[quote="Prime95"][quote="ewmayer"]OK, perhaps I'm being too paranoid, but I think you should also just check for a suspicious total NUMBER of zeros in the vector[/quote]
A fine idea, but how do we do that quickly? I chose my test because it looks at just one double before the && operator skips the remaining comparison operations. The fastest way to implement your idea is in the rounding and carry propagation code. And if you do it there, you won't catch the data values getting incorrectly zeroed as they are written to memory.[/quote] Is it possible to just read groups of, say, 4 bits, count how many of those groups are all zero, and then, if at least 12 groups meet that criterion, rerun from the last save file? |
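The 4-bit-group suggestion above is easy to sketch. Again, this is not anyone's actual GIMPS code - just a hedged Python illustration with invented names (`count_zero_nibbles`, `suspect`). One caveat worth noting: in genuinely random data roughly 1 in 16 nibbles is zero anyway, so a fixed threshold like "12 groups" would trigger constantly on a large residue; the threshold would have to sit well above the expected random count for the vector's size.

```python
def count_zero_nibbles(words):
    """Count the 4-bit groups (nibbles) that are all zero
    across a list of 64-bit words."""
    zero = 0
    for w in words:
        for shift in range(0, 64, 4):  # 16 nibbles per 64-bit word
            if (w >> shift) & 0xF == 0:
                zero += 1
    return zero

def suspect(words, threshold):
    """Flag the data as possibly corrupted if at least `threshold`
    nibbles are zero. For random data the expected zero-nibble count
    is about len(words) * 16 / 16 = len(words), so the threshold
    must be set well above that to avoid constant false alarms."""
    return count_zero_nibbles(words) >= threshold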
Zombie thread resurrection...
Now that M50 has (almost definitely) been found, I looked back at this false alarm for M40. A few points:

First, back in 2003 some people seemed to think a false positive would be embarrassing for the person who turned it in. I'm not sure why. We've had false positives since then that were either a software issue or, in one case, what looked like a bizarre hardware issue of some kind. It happens; nothing to be ashamed of.

Second, just to put this to rest in this thread (it's been discussed elsewhere, but not in this thread), here's a link to the exponent in question. The original "is prime" result doesn't appear in the history, so no concerns there. And to be sure there wasn't a conspiracy to hide it, I ran my own test: [URL="https://www.mersenne.org/M16811549"]M16811549[/URL]

Or maybe I'm part of the conspiracy! :smile: |
[QUOTE=Madpoo;475238]Now that M50 has (almost definitely) been found, I looked back at this false alarm for M40.[/QUOTE]
I understand why you have resurrected this thread. GIMPS takes pride in being a serious group. And we're often challenged with exceptional claims. We stand up to such claims, and continue to do things no one else can. |
[QUOTE=chalsall;475240]I understand why you have resurrected this thread.
GIMPS takes pride in being a serious group. And we're often challenged with exceptional claims. We stand up to such claims, and continue to do things no one else can.[/QUOTE] ...to boldly go where no computer has been before... |
[QUOTE=Madpoo;475238]Zombie thread resurrection...
Now that M50 has (almost definitely) been found, I looked back at this false alarm for M40.[/QUOTE] Thanks Madpoo! I've just read through this thread. Very insightful. I'm glad the new prime turned out to be real :smile: |
[QUOTE=Madpoo;475238]Or maybe I'm part of the conspiracy! :smile:[/QUOTE]
Could be. Do you work for a government-related company, or are you after the money from a foundation close to the NSA/CIA? |