#188
"Nathan"
Jul 2008
Maryland, USA
5·223 Posts
I imagine that I did! Certainly all of the tests seemed to have run from beginning to end; the histories for these exponents show that the (repeated) results were spaced a few weeks apart, which leads me to believe that the tests were running during those weeks.

With my luck, had I been running LL tests on distinct exponents, rather than LL-ing these two until they were completely bereft of life, I probably would have found a new Mersenne prime. After all, *two* of them were found in the same time frame that these tests were being run (and re-run).
#189
Serpentine Vermin Jar
Jul 2014
3,313 Posts
I didn't suggest not accepting it... it would still count. The idea was to prevent a user from automatically being assigned the same exponent they already tested once before. It happens randomly, especially with the large producers (Curtis).

I already know you deliberately like to double-check your own work. I hope you understand the greater purpose and general good of having double-checks done by someone else; I'd hope you'd be willing to double-check other people's work and let them double-check yours. Is it because you're worried you might have had an error on the first run, and if the exponent ended up being prime you'd feel like you missed out? Besides you, most of the self-verified results I've seen fall into these groups:

Fortunately, that means most of the self-verified stuff is below the current double-check wavefront, because Curtis and the other large producers of results account for a pretty good chunk of it. That makes it easier to run triple-checks on those without a larger commitment of resources. Yes, someone determined to cheat already could; that's something I agree would be nice to improve, but it's out of my control.

And no, I haven't found any triple-checks that didn't match the others. And I earnestly hope I never do.
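The assignment rule being proposed above is simple enough to sketch. This is a minimal illustration, not PrimeNet's actual code; the function and queue here are invented for the example.

```python
# Hypothetical sketch of the proposed rule: never hand an exponent out for
# double-checking to the user who produced the first-time result.
# All names and data structures here are invented for illustration.

def pick_dc_candidate(exponent, first_test_user, queue):
    """Return the first queued user who did not run the first test."""
    for user in queue:
        if user != first_test_user:
            return user
    return None  # nobody eligible right now; assign it later

# Example: the first-time tester is skipped even if he's at the front
# of the double-check queue.
assert pick_dc_candidate(32629271, "curtis", ["curtis", "alice"]) == "alice"
assert pick_dc_candidate(32629271, "curtis", ["curtis"]) is None
```

The self-verified result would still be accepted if submitted manually; the filter only applies to automatic assignment, which is all the post above asks for.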
#190
Serpentine Vermin Jar
Jul 2014
3,313 Posts
I just whipped up an (ugly, but working) query to find results that had been self-verified by the same person more than once. Not sure why that happens, but yeah... weird.

The record breaker is M32629271, with 32 results from the same user. M38973653 isn't far behind with 30. I think I can weed out some additional duplicates that may just be data errors (I think some got merged in multiple times), so I should filter by distinct shift counts as well. I'll have some adjustments, I'm sure.

EDIT: Nope, those numbers still stand. Different shift counts for each run. They're just missing the date they came in... some of those might be in the old logs we recently started surfacing in the report, some might not.

Last fiddled with by Madpoo on 2015-05-01 at 09:03
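The query described above can be sketched in a few lines. The real PrimeNet schema is not public, so the table and column names below are invented; the point is the shape of the query: group by (exponent, user) and count *distinct* shift counts, so that records accidentally merged in twice don't count as a second run.

```python
# Toy version of a "self-verified more than once" query, using an in-memory
# SQLite database. Schema and data are hypothetical.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE ll_results (
    exponent INTEGER, user TEXT, shift_count INTEGER, residue TEXT)""")

rows = [
    (32629271, "userA", 100, "res1"),  # same user, three distinct shifts
    (32629271, "userA", 200, "res1"),
    (32629271, "userA", 300, "res1"),
    (38973653, "userB", 10,  "res2"),  # duplicate shift: merged-in data
    (38973653, "userB", 10,  "res2"),  # error, not a genuine second run
    (20000003, "userC", 50,  "res3"),  # single result: not self-verified
]
conn.executemany("INSERT INTO ll_results VALUES (?, ?, ?, ?)", rows)

# Self-verified = more than one genuinely distinct run by the same user.
query = """
    SELECT exponent, user, COUNT(DISTINCT shift_count) AS runs
    FROM ll_results
    GROUP BY exponent, user
    HAVING COUNT(DISTINCT shift_count) > 1
    ORDER BY runs DESC
"""
for exponent, user, runs in conn.execute(query):
    print(f"M{exponent}: {runs} distinct runs by {user}")
```

Filtering on `COUNT(DISTINCT shift_count)` is exactly the adjustment mentioned in the post: userB's identical duplicate rows drop out, while userA's three runs with different shifts survive.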
#191
Romulan Interpreter
Jun 2011
Thailand
3×3,221 Posts
Well... it is a bit (a bit more?) of paranoia, yes. I told you that my worst nightmare is that somebody finds a prime, and when checking the history I discover that I had run that exponent and reported a wrong residue.

But most of the motivation is pragmatic: when running two tests at the same time, a wrong residue is spotted immediately, and the calculation (on both cards) resumes from the checkpoint file. This saves more than the time it would take to finish a single test only to find out later that it had a mismatch; assuming the mismatch was in the middle, it saves two LL tests. Normally a first test is run, then a DC is run, and if it doesn't match, the guy running the DC generally doesn't have the residue files, so he runs a complete DC, and eventually, if that mismatches too, a TC (and QC?) has to be run. By running two cards in parallel, I avoid all that mess.

If you want to run a TC on all my self-DC'd exponents, nothing is lost, and you are welcome to do it. We make sure the project doesn't miss primes, we do about the same amount of work (well, there is endless discussion here about the probability of errors), and I don't have nightmares.

The only issue is when the guy doing the DC is a cheater. You can't get two identical wrong residues in a row unless you are a cheater, or totally stupid. I suppose neither is the case for me.

Last fiddled with by LaurV on 2015-05-01 at 13:47
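The scheme described above can be illustrated with a toy Lucas-Lehmer test. This is not LaurV's actual setup, just a sketch of the idea: two runs advance in lockstep, residues are compared at every checkpoint, and on a mismatch both roll back to the last agreed checkpoint instead of throwing the whole test away. The error injection is artificial, purely to exercise the rollback path.

```python
# Toy illustration: two simulated "cards" LL-testing M_p = 2^p - 1 in
# lockstep, with residue comparison at checkpoints and rollback on mismatch.

def ll_step(s, m):
    """One Lucas-Lehmer iteration: s -> s^2 - 2 (mod M_p)."""
    return (s * s - 2) % m

def ll_parallel(p, checkpoint_every=8, glitch_at=None):
    """Return (is_prime, number_of_rollbacks) for M_p = 2^p - 1."""
    m = (1 << p) - 1
    a = b = 4                      # both runs start from s0 = 4
    ckpt = (4, 0)                  # last agreed (residue, iteration)
    i, rollbacks = 0, 0
    glitched = False
    while i < p - 2:
        a = ll_step(a, m)
        b = ll_step(b, m)
        if glitch_at is not None and i == glitch_at and not glitched:
            b ^= 1                 # flip one bit on card B, once
            glitched = True
        i += 1
        if i % checkpoint_every == 0 or i == p - 2:
            if a == b:
                ckpt = (a, i)      # residues agree: commit checkpoint
            else:
                a, i = ckpt        # mismatch: both cards resume from
                b = a              # the last good checkpoint
                rollbacks += 1
    return a == 0, rollbacks
```

For example, `ll_parallel(13)` reports M13 = 8191 prime with no rollbacks, while `ll_parallel(13, glitch_at=3)` hits one mismatch at the next checkpoint, redoes only those few iterations, and still finishes correctly. That is the saving claimed above: an error costs one checkpoint interval, not one or two full LL tests.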
#192
If I May
"Chris Halsall"
Sep 2002
Barbados
263616 Posts
#193
Serpentine Vermin Jar
Jul 2014
3,313 Posts
In the realm of someone deliberately cheating, I *can* imagine someone thinking "my system is awesome, glitch free, surely my results don't need to be double-checked", and then, in their mind, "I might as well submit a matching double-check and save someone else from doing it". Yes, that's a boogeyman in the closet, because (I hope) they don't exist. :)

The other idea was that maybe there was a glitch somewhere on the server side which resulted in an exponent being added twice (the different shift counts make that highly unlikely as well).

Triple-checking smaller exponents like I did with the < 1M stuff was just for fun. In my mind there was a wonder about "maybe some old version was buggy... let's see if the latest version agrees with previous runs". That's also why I'm looking at all the results where the first check was done by v17, even if a run by v18+ matched. As you've mentioned, if someone was deliberately trying to screw things up, they'd do things to avoid detection.

And maybe I'm just paranoid. LOL... part of my real job is making sure a bunch of websites run non-stop. 100% uptime is my goal (an unrealistic one... honestly I'd be happy with 99.99%). Downtime = lost revenue, so there are multiple layers of redundancy: a plan B, plan C, and plan D for everything I could think of happening. So maybe all of that made me more cautious?

Anyway, in the end I'd be happy enough if the automated assignment stuff would simply avoid handing out a double-check when the same user did the first test, from here on out. Otherwise it'll still nag at me, and I'll check again in a year for new ones and triple-check those.

Oh, also, unless I've missed some conversations over the years, I didn't see any discussion about validating past results that seemed weird, besides this very thread. And while the main thrust was about exploring self-verified stuff, I've triple-checked a few other oddities here and there.

Has there been a concerted effort in the past that I missed, besides ATH checking stuff out using the publicly available data? If you read WAY back to the beginning of this thread you'll see people making arguments on either side of the idea.
#194
"Victor de Hollander"
Aug 2011
the Netherlands
2³·3·7² Posts
You can never be 100% certain unless you factor or TC all the exponents. Do we really want/need that kind of security? No, we can live with 99.99999...%.

I agree with you that the first-time test and the DC should normally be done by different computers/people.
#195
"/X\(‘-‘)/X\"
Jan 2013
2×5×293 Posts
My three exponents all matched.
#196
Serpentine Vermin Jar
Jul 2014
3,313 Posts
#198
Serpentine Vermin Jar
Jul 2014
3,313 Posts
The large number of LL results from one user/group also looks like a case where some automated system for distributing work to a computer lab (or something similar) went wrong and the same exponent was sent out to a bunch of machines at once by mistake. It's kind of weird that someone chose that one exponent to do so much P-1 work on, but whatever makes people happy.