[QUOTE=axn;394511]All exponents [B][I][U]below[/U][/I][/B] (I bolded, underlined & italicized for added effect :smile:)[/QUOTE]
I was confused about what y'all were talking about, but I see now you're just being super precise (which I should appreciate, but I missed it). So I should say "All exponents below *or equal to* blah blah have been double-checked". I guess that's one thing in favor of using the language "up to", which is inclusive.

I'm looking at the milestone page, and with all the stuff in there for the n-millionth milestones it's a little cluttered as time goes on, so maybe I'll try to organize it better... I'll make the change then.

FYI, looking at the database to try and figure out milestone dates is a little more daunting since triple-checks or other things are sometimes done, so I can't just look at the last date of a result in a certain exponent range. I'll have to do a little SQL magic to work out the actual date a single or double-check came in. The smaller exponents won't necessarily have that info in the v5 database either... maybe in the v4 database. :) |
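The "SQL magic" described above could look something like the sketch below. The schema, `check_no` column, and sample dates are all invented for illustration (the real v5 database layout differs): the idea is that the milestone date is the latest, over all exponents in the range, of each exponent's *first* double-check date, so later triple-checks don't shift it.

```python
import sqlite3

# Hypothetical schema purely for illustration -- the real v5 database differs.
# check_no: 1 = first-time LL test, 2 = double-check, 3 = triple-check, ...
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE results (exponent INTEGER, check_no INTEGER, date TEXT)")
conn.executemany("INSERT INTO results VALUES (?, ?, ?)", [
    (1000003, 1, "1997-02-01"), (1000003, 2, "2001-03-10"),
    (1000003, 3, "2014-06-01"),   # a later triple-check must not shift the milestone
    (1000033, 1, "1996-11-20"), (1000033, 2, "2001-01-05"),
])

# Milestone date = the latest, over all exponents below the bound, of each
# exponent's first double-check date; MIN(date) per exponent ignores any
# re-checks done afterward.  ISO dates compare correctly as strings.
row = conn.execute("""
    SELECT MAX(first_dc) FROM (
        SELECT exponent, MIN(date) AS first_dc
        FROM results
        WHERE check_no = 2 AND exponent < 1100000
        GROUP BY exponent
    )
""").fetchone()
print(row[0])  # -> 2001-03-10
```

The same shape works for single-check milestones by filtering on `check_no = 1` instead.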
[QUOTE=Madpoo;394517]
So I should say "All exponents below *or equal to* blah blah have been double-checked"[/QUOTE] This is why it might be easier to just say "all Mersenne numbers with less than ten million digits have been double-checked". This is not normally an issue for the XX million milestones, as "all exponents below XX million" also means "all exponents below or equal to XX million", simply because XX million will never be prime.

[QUOTE=Madpoo;394517]I'm looking at the milestone page and with all the stuff in there for the n-millionth stuff it's a little cluttered as time goes on, so maybe I'll try to organize it better... I'll make the change then.[/QUOTE] At one point, I had tried to make a graphical timeline of the milestones, but it got a little unwieldy. I still think that would be a cool way to visualize the different milestones, rather than a long, boring list. At the very least, the milestones should probably be separated into three lists: prime discoveries, first-time test milestones, and double-check milestones.

[QUOTE=Madpoo;394517]FYI, looking at the database to try and figure out milestone dates is a little more daunting since triple-checks or other things are sometimes done, so I can't just look at the last date of a result in a certain exponent range. I'll have to do a little SQL magic to work out the actual date a single or double-check came in. The smaller exponents won't necessarily have that info in the v5 database either...maybe in the v4 database. :)[/QUOTE] I do not believe v4 kept the dates; at least, they were not imported into v5. I don't remember seeing dates in the database until v5 began beta-testing in mid-2008. |
[QUOTE=ATH;394443]Checking the milestones list against some old status files...[/QUOTE]
Thanks, that was good info. The current database doesn't have date stamps for older results (pre-2007/2008-ish), so it's hard to nail the dates down. I confirmed the missing double-check stats with the ones you had, plus a few more. I don't have any dates on older double-check milestones (3M, 7M, 11M and 12M). I'll have to look through the single-check stats later and see if I can't fill in some of those... seems like I should be able to find them for 21M and up since those were all post-2007. Not much I could do for the single-check milestones missing before that (3M, 7M, 11M, 13M, 14M, 18M and 19M).

The milestones don't always progress smoothly... there are several instances where one smaller exponent held up 2 or more 1M ranges because it was just so slow to finish. You noticed that when a <32M exponent checked in which officially finished everything up past 37M in one swell foop. There's always going to be those crazy outliers, I guess.

EDIT: Okay, I verified your #'s and there were just a few minor corrections, just a day or two on a couple. You kept good records! I was able to look back as far as the single-check milestone for 24M, but further back than that takes us into 2008, which was in the midst of the v5 database update, so date stamps on results before then aren't available. Not bad though to go back that far.

I'll try and streamline the milestone page... at least make it easier by grouping the milestones into sections. Maybe graphing or something, but I don't know how useful that would be since any graph is going to be sporadic, with one slow result holding up millions of milestones. :) Maybe over the course of 6-7 years you'd get an idea of the trend line... that graph a few posts up is probably the best you'd expect. |
[QUOTE=Madpoo;394520]I'll try and streamline the milestone page...at least make it easier by grouping the milestones into sections.[/QUOTE]
Okay... milestone page updated (for the "older / lower profile" stuff). Rather than have it all together in one chronological lump of stew, I broke it into the single-check, double-check, and then the other miscellaneous stuff. It has dates filled in for some of those older things, wherever possible. If anyone else out there on the interwebs happens to have dates for those other things that they happened to capture, I can incorporate them.

Ongoing, maybe it'll be good to put the latest n-millionth single/double-check milestones in that first section so they're more prominent, and then demote them to the older section when the next one is reached. Kind of been doing that, it seems, for the single-check milestones already, so nothing new there. I can start that with the double-checks when we hit 34M. :smile: |
Hehe, "Double time checks". Sounds like a musical beat rate measurement.
And we can add "Countdown to double checking all exponents below 34M: xx" and automatically bump it up to the next million as each milestone is reached. |
[QUOTE=Madpoo;394662]Okay... milestone page updated (for the "older / lower profile" stuff).[/QUOTE]
I know it is redundant information, but maybe add this line to the misc section?:

2010-12-25 All exponents up to 33,219,253 (10 million digits) tested at least once.

Maybe change the 1-million-digit entry to text similar to the 10M-digit one:

1998-12-26 All exponents up to 3,321,917 (1 million digits) tested at least once.

and you could also add this approximate milestone:

2001-Feb/Mar All exponents up to 3,321,917 (1 million digits) double-checked.

Based on the status file information I had (unless you can see more on the server):

January 28th, 2001: All exponents below 3,210,800 have been tested and double-checked.
March 25th, 2001: All exponents below 3,502,500 have been tested and double-checked. |
[QUOTE=Madpoo;394662] If anyone else out there on the interwebs happens to have dates for those other things that they happened to capture, I can incorporate them.[/QUOTE]I have some. I will get them dug out and sorted when I have a bit of time.
|
[QUOTE=retina;394664]Hehe, "Double time checks". Sounds like a musical beat rate measurement.
And we can add "Countdown to double checking all exponents below 34M: xx" and automatically bump it up to the next million as each milestone is reached.[/QUOTE] LOL... whoops! I cut and pasted that line from the "First Time" one and just changed "First" to "Double". Yeah... I'll use the "it's late, I'm tired" excuse. Fixed it to just say "Double checks".

EDIT: Oh, and I can add a countdown to 34M being double-checked, but I wasn't sure if they'd all been assigned yet. I did a quick count and there were ~650 assignments in the 33M-34M range, but I'd have to see if there were any unassigned DCs in that range. I have the code to add that check to the page; I just need to modify and confirm.

I think after the discussions about some of these recent n-millionth milestones, there seemed to be a thought that it was a little target-rich for poachers... I know I got sucked in myself. Of course, there's nothing to stop someone from checking it out for themselves on the exponent reports for stuff in a certain range. |
[QUOTE=Madpoo;394662]Okay... milestone page updated (for the "older / lower profile" stuff).
Rather than have it all together in one chronological lump of stew, I broke it into the single-check, double-check, and then the other miscellaneous stuff.[/QUOTE] Excellent work. Gives the page a nice, clean look.:smile: |
[QUOTE=Madpoo;394674]LOL... whoops!
I cut and paste that line from the "First Time" and just changed "First" to "Double". Yeah... I'll use the "it's late, I'm tired" excuse. Fixed to just say "Double checks" EDIT: Oh, and I can add a countdown to 34M being double-checked, but I wasn't sure if they'd all been assigned yet. I did a quick count and there were ~650 assignments in the 33M-34M range, but I'd have to see if there were any unassigned DC's in that range. I have the code to add that check to the page, I just need to modify and confirm. I think after the discussions about some of these recent n-millionth milestones, there seemed to be thought that it was a little target-rich for poachers... I know I got sucked in myself. Of course there's nothing to stop someone from checking it out for themselves on the exponent reports for stuff in a certain range.[/QUOTE] Thanks to UncWilly and some additional milestone info, I was able to fill in some additional details. Right now I'm just missing data on the 21M and 23M single-check milestones. Thanks to the Wayback Machine at archive.org, I was able to narrow those down to a pretty broad range within 1-3 months, but I'm leaving them out for now. I was also able to narrow down UncWilly's date ranges on a couple of them to a specific day using the Wayback. Estimated dates are denoted with an asterisk.

I also added in some info on 100M digit numbers... curious thing though: the very first LL test to check in was for M332197123, but it had a non-zero error code. Because of that, I actually skipped to the first check-in with an error code of zero, but just keep an eye on this one... when it gets verified at some point, it would officially become the first 100M digit exponent completed. I felt bad including it now since it's "suspect": [URL="http://www.mersenne.org/M332197123"]http://www.mersenne.org/M332197123[/URL]

We also have a grand total of 2 double-checked 100M digit exponents. The first one, however, was checked and double-checked by the same account, and both checked in at the same time. Different shift counts, so it should be okay, but still worth mentioning. I noted it, but just to be fair I included the other one, which was independently verified by different users. |
It appears to me all remaining first LL tests with exponent less than 57M are now assigned.
|
[QUOTE=cuBerBruce;395039]It appears to me all remaining first LL tests with exponent less than 57M are now assigned.[/QUOTE]
Cool. Once we get past the < 55M and < 56M milestones I could just alter that for the 57M countdown. If I read the comments right, it sounds like people generally like the milestones, and once the grandfathered assignments start to die off, the poaching issue should become less relevant? You won't have an assignment from 2013 chunking along super slow and "holding stuff up", in other words. |
[QUOTE=Madpoo;395203]If I read the comments right, it sounds like people generally like the milestones, and once the grandfathered assignments start to die off, the poaching issue should become less relevant? You won't have an assignment from 2013 chunking along super slow and "holding stuff up", in other words.[/QUOTE]I will suggest again that after the number of outstanding exponents falls below some number (somewhere between 10 and 25 works for me) and/or the projected time to clear falls below 2 months, the display is changed as below:
[LIST]Countdown to first time checking all exponents below 56M: [COLOR="Red"]<25[/COLOR] (Estimated completion: [COLOR="Green"]<2 months[/COLOR])[/LIST] Maybe it could be stepped down from 25 to 10 and from 2 months to 1. This would satisfy most people without putting too much temptation out there when it gets down this low. A determined poacher could find out what is left, but if there are 3 left and they see <25, they are less likely to jump on them. |
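For what it's worth, the display change suggested above could be sketched like this. The function name, thresholds, and wording are illustrative, not the actual server code: the point is simply that once the count or ETA drops below a floor, the page shows the floor instead of the exact value.

```python
def countdown_display(remaining: int, est_months: float,
                      count_floor: int = 25, eta_floor_months: float = 2.0) -> str:
    """Render a milestone countdown, blurring the exact figures once they
    drop below the chosen floors so the last stragglers are less of a
    poaching target."""
    count_str = f"<{count_floor}" if remaining < count_floor else str(remaining)
    eta_str = (f"<{eta_floor_months:g} months" if est_months < eta_floor_months
               else f"~{est_months:g} months")
    return (f"Countdown to first time checking all exponents below 56M: "
            f"{count_str} (Estimated completion: {eta_str})")

# With 3 exponents left and ~1.2 months to go, both figures get blurred:
print(countdown_display(3, 1.2))
# -> Countdown to first time checking all exponents below 56M: <25 (Estimated completion: <2 months)
```

Tightening the floors over time (25 down to 10, 2 months down to 1) is just a matter of passing different `count_floor` / `eta_floor_months` values.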
[QUOTE=Uncwilly;395267]A determined poacher could find out what is left, but if there are 3 left and the [sic] see <25 they are less likely to jump on them.[/QUOTE]
Why try to artificially limit knowledge? In less than a month this issue should go away. |
I also believe we should just not touch it until the new rules come out. 10 months ago this might have been a valid discussion but for now I think we're better off seeing how much of this fuss about poachers is justified. For all we know the problem will go away entirely. If it doesn't we can wake this discussion up.
Maybe we can add a milestone: "Number of days without talking about the milestones" |
[QUOTE=chalsall;395269]Why try to artificially limit knowledge? In less than a month this issue should go away.[/QUOTE]
Unless the code for extended time changes, there are a number of exponents that will be around for another 3-4 months if you do the math. M[URL="http://www.mersenne.org/M55861261"]55861261[/URL] is a good example that could sit around for a long time if it is not finished. |
[QUOTE=chalsall;395269]Why try to artificially limit knowledge? In less than a month this issue should go away.[/QUOTE]Before Madpoo put the countdowns up, the knowledge was even more limited. I was thinking that there might be a balance that feeds the progress freaks (me) but does not feed the poachers.
|
[QUOTE=TheMawn;395272]
Maybe we can add a milestone: "Number of days without talking about the milestones"[/QUOTE] :tu: haha good one! |
[QUOTE=flagrantflowers;395273]Unless the code for extended time changes there are a number of exponents that will be around for another 3-4 months if you do the math. M[URL="http://www.mersenne.org/M55861261"]55861261[/URL] is a good example that could sit around for a long time if it is not finished.[/QUOTE]
Not sure why that would happen. That is a Cat1 exponent, so yes, I suppose someone (a trusted user, to boot!) could sit on it for 90 days and then it could be recycled, but I doubt that would happen much more than once before someone would get the test done. |
[QUOTE=Uncwilly;395277]Before Madpoo put the countdowns up, the knowledge was even more limited. ...[/QUOTE]
Oh sure, blame me. LOL :smile:

I guess I did do that, though: add a little countdown with a link directly to the report page that shows the exponents in question. The point could be made that I made it *too* easy for someone to find the slowpokes and "do something". On the other hand, anyone could do an exponent search within a range, looking for the same thing, and find it. It's just not as obvious and easy unless you knew what to look for.

There are probably a dozen different things I could think of to try and make it harder on would-be poachers... the trick is doing enough to keep honest people honest, but also not discourage people who just like to know how it's doing and aren't the poaching type. Some ideas are to mask some of the dates like "Last update" so there are fewer clues if someone's fallen behind. Or if an assignment goes past its expected completion by a certain time, "hide" it somehow from the list of exponents.

All of the ideas I can think of, though, are kind of a bummer and fudge the reports a bit, just in an attempt to stop a few people. Maybe it's better to just suffer the occasional bad poaching job so we still have accurate reports? After all, poaching happens, but it's not *that* big a deal in the grand scheme of things, and nothing would stop it entirely unless we just started rejecting results that don't have a valid assignment ID. And that runs a little counter to the openness of the project too. And that's above my pay grade anyway. I just tinker with the website and look at a few stats in the data at George's discretion. :) |
I think that "first time checks" are a non-issue with regards to "poaching" because there is no loss in productivity. But "double time checks" :razz: are a different matter. However, trying to prevent bad behaviour by hiding details is the wrong approach IMO. A better approach (IMO) is to either just ignore it, or publicly name and shame. You'll never stop it no matter what you do, but at least a little bit of public pressure may have a more desirable outcome.
|
[QUOTE=retina;395303]I think that "first time checks" are a non-issue with regards to "poaching" because there is no loss in productivity. But "double time checks" :razz: are a different matter. However trying to prevent bad behaviour by hiding details is the wrong approach IMO. A better approach (IMO) is to either just ignore it, or publicly name and shame. You'll never stop it no matter what you do but at least a little bit of public pressure may have a more desirous outcome.[/QUOTE]
Well, whatever the case, I did some exploring and mocked up a couple more things for the milestone page. Sure enough, everything up to 57M has been assigned as a first-time check, so that's in there. I also added 57-58M and up-to-34M double-checks, but not all exponents are assigned. I added a little note on those to indicate no ETA is available since there are some unassigned numbers, along with a count of just how many need some lovin'. Maybe that would encourage people to find the unassigned ones and try to get assigned to them, or at least poach those instead of something already assigned, right? :)

There's just no good way to find out which numbers in a range have already been tested, haven't been factored, and aren't assigned to anyone. There's no report on the site for that kind of thing.

Anyway, if you want to see how those look, I didn't make them live on the normal page, but you can check 'em out here: [URL="http://www.mersenne.org/report_milestones/default.mock.php"]http://www.mersenne.org/report_milestones/default.mock.php[/URL]

I kind of feel like those milestones in progress should be a table, not an unordered list. I think it'd help with the formatting... I might try that out later, but I'll leave it be for now. |
[QUOTE=NBtarheel_33;395296] That is a Cat1 exponent, so yes, I suppose someone (a trusted user, to boot!) …[/QUOTE]
This is not a trusted user, as this was assigned long ago. I'm not saying the person could sit on it for 90 days (this is a grandfathered exponent, so the 90-day rule does not apply) so much as that progress can be slow enough that it does not expire until the extension code expires. 85.3 * 3.33 + 365 = 649 days, or 220 days from today. |
[QUOTE=Madpoo;395305]Anyway, if you want to see how those look, I didn't make them live on the normal page, but you can check 'em out here:
[URL="http://www.mersenne.org/report_milestones/default.mock.php"]http://www.mersenne.org/report_milestones/default.mock.php[/URL][/QUOTE]Yeah, that is good. I still question the point of the ETA; it is completely meaningless. The statement about the number of unassigned exponents is good. |
[QUOTE=retina;395303]I think that "first time checks" are a non-issue with regards to "poaching" because there is no loss in productivity. [/QUOTE]
+1. Either the assignee or the poacher, whoever finishes last, will be credited with the DC, and the exponent clears faster. There could be some arguing if a prime turns up... but doh... |
[QUOTE]Countdown to proving M(37156667) is the 45th Mersenne Prime: 49,999[/QUOTE]
Less than 50,000 to go. Hurray! (Still a long way to go, I know.) [QUOTE=flagrantflowers;395273]Unless the code for extended time changes there are a number of exponents that will be around for another 3-4 months if you do the math. M[URL="http://www.mersenne.org/M55861261"]55861261[/URL] is a good example that could sit around for a long time if it is not finished.[/QUOTE] The owner of this assignment has finished a different LL assignment, [url=http://www.mersenne.org/report_exponent/?exp_lo=55738409&exp_hi=&full=1]M55738409[/url]. That assignment took him/her 423.7 days. Hopefully, this is a sign that his/her other assignments will not take too much longer. |
[QUOTE=cuBerBruce;395360]…That assignment took him/her 423.7 days. Hopefully, this is a sign that his/her other assignments will not take too much longer.[/QUOTE]
That's great, hopefully the rest of the grandfathered assignments will go as quickly but I have my reservations. |
[QUOTE=retina;395307]Yeah, that is good. I still question the point of the ETA, it is completely meaningless. The statement about the number of unassigned exponents is good.[/QUOTE]
[I]Countdown to first time checking all exponents below 56M: 18 (Estimated completion: 2015-05-24)[/I]

Can't believe we can't clear 18 tests in less than 2 months. I don't get all this fuss about "poaching" anyway; those of us who were around in the late 90's just smile at this talk. |
[QUOTE=Gordon;395394][I]Countdown to first time checking all exponents below 56M: 18 (Estimated completion : 2015-05-24)[/I]
Can't believe we can't clear 18 tests in less than 2 months. I don't get all this fuss about "poaching" anyway; those of us who were around in the late 90's just smile at this talk.[/QUOTE] Yeah... there are 3 pesky <55M exponents in there too. Just those 3. Truth be told, I already tested them (spoiler alert: they weren't primes). I'm just waiting to check mine in until after the original assignee, so mine will be double-checks. I also already ran 4 of the 55-56M numbers in there that haven't checked in for a while... again, just holding on to the results (and again, no primes, sorry).

These were some stress tests for new hardware, so I wanted to knock out a few interesting ones. I also periodically do a triple-check of strange results I come across in the database, like triple-checking some false positives or other weird things. I suppose once I run out of those oddities I'll probably just do regular double-checks or something for these stress tests... things that can finish in a few hours on a dual E5-2690 server with all 20 physical cores working on one exponent. :) 13.5 hours for a 33.9M exponent, to be precise. The 18-core CPUs are still super expensive, so we couldn't get those for our recent orders. Bummer. |
[QUOTE=Madpoo;395395]...things that can finish in a few hours on a dual E5-2690 server with all 20 physical cores working on one exponent. :) 13.5 hours for a 33.9M exponent to be precise.[/QUOTE]What is the efficiency like for that? I would have expected that after about the first 4 cores the remainder add very little. Perhaps you should be running 5 tests of 4 cores each instead of 1 test on 20 cores?
|
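retina's throughput question can be made concrete with a little arithmetic. The 13.5-hour/20-core figure comes from the post above; the 4-core timing below is a made-up assumption purely to illustrate the calculation (the real scaling would come from the P95 benchmark):

```python
# Throughput comparison: one 20-core test vs five concurrent 4-core tests.
# hours_20_core is the figure reported above; hours_4_core is hypothetical.
hours_20_core = 13.5   # one 33.9M exponent on all 20 cores (reported)
hours_4_core = 55.0    # one such exponent on 4 cores (assumed for illustration)

tests_per_day_20 = 24 / hours_20_core          # single tests finish fastest
tests_per_day_5x4 = 5 * (24 / hours_4_core)    # five slower tests in parallel

# Parallel efficiency of the 20-core run relative to the 4-core runs,
# measured in core-hours per test:
efficiency = (hours_4_core * 4) / (hours_20_core * 20)

print(f"{tests_per_day_20:.2f} vs {tests_per_day_5x4:.2f} tests/day, "
      f"20-core efficiency = {efficiency:.0%}")
```

Under these assumed numbers the 5x4 split yields more total throughput, while the 20-core run minimizes the latency of any single test, which is exactly the trade-off at issue when one straggler holds up a milestone.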
How much power does this baby consume, running all out?
|
[QUOTE=Madpoo;395302][...]All of the ideas I can think of though are kind of a bummer and fudge the reports a bit, just in an attempt to stop a few people, but maybe it's better to just suffer the occasional bad poaching job so we still have accurate reports?
After all, poaching happens, but it's not *that* big a deal in the grand scheme of things, and nothing would stop it entirely unless we just started rejecting results that don't have a valid assignment ID. And that runs a little counter to the openness of the project too. And that's above my pay grade anyway. I just tinker with the website and look at a few stats in the data at George's discretion. :)[/QUOTE] I agree with people who have suggested we should now just see if poaching is a thing of the past with the "favoured" work now governed by strict time limits, and I don't advocate doing anything against poaching right now either, [U]but[/U]: [LIST=1][*]Poaching has caused significant annoyance to participants in the past and its damaging effects should not be understated.[*]It's not accurate to state that rejecting poachers' results entirely would be the only possible measure against them. The results could be accepted on a delayed basis, waiting first for the assigned work to complete or expire.[/LIST] |
[QUOTE=Brian-E;395432]It's not accurate to state that rejecting poachers' results entirely would be the only possible measure against them. The results could be accepted on a delayed basis, waiting first for the assigned work to complete or expire.[/QUOTE]This may backfire (as I already stated in another thread) as many people submit results for the same exponent not aware that others before them have also done the same.
|
[QUOTE=retina;395433]This may backfire (as I already stated in another thread) as many people submit results for the same exponent not aware that others before them have also done the same.[/QUOTE]
That seems likely to happen, at least initially, yes. Whether that would be offset in the long run by the increase of productivity due to (1) poachers later cottoning on to the new situation and only doing assigned work (thereby not duplicating anything), and (2) a reduction in people leaving in frustration after having their assignments poached, is an open question to me. But your word "backfire" implies that the motivation for such a measure would be purely an increase in productivity across the project. I had other issues in mind, centering around the idea that everyone who wants to take part in the project should be guaranteed, as far as possible, that they will be assigned unique work which will be theirs to do and to contribute. |
[QUOTE=retina;395433]This may backfire (as I already stated in another thread) as many people submit results for the same exponent not aware that others before them have also done the same.[/QUOTE]
As far as I can tell, this only backfires against poachers. Let the knobs all work on the same exponent. I give zero hoots about their wasted effort. |
[QUOTE=retina;395398]What is the efficiency like for that? I would have expected that after about the first 4 cores the remainder add very little. Perhaps you should be running 5 tests of 4 cores each instead of 1 test on 20 cores?[/QUOTE]
Good question...not sure. I haven't run the benchmark on my most recent purchases, but on a similar system I got last year I let it go through the full P95 benchmark test and it does keep reducing the time per iteration all the way up through all 20 cores. Adding hyper-threading cores gets very little benefit for LL tests as one would expect. As for how much power it uses, one of them just finished a little bit ago. Power use when it was running was averaging 300W and now that it's not doing anything, average use is just 130W. |
[QUOTE=Madpoo;395481].....
As for how much power it uses, one of them just finished a little bit ago. Power use when it was running was averaging 300W and now that it's not doing anything, average use is just 130W.[/QUOTE] Thanks! That's a lot of bang for the watt. Lots of cores for that amount of power. |
[QUOTE=cuBerBruce;395360]Less than 50,000 to go. Hurray! (Still a long way to go, I know.)[/QUOTE]
Right around a year at our present pace. An interesting (if not overly ambitious) goal would be to clear this milestone by the end of 2015. |
[QUOTE=NBtarheel_33;395495]Right around a year at our present pace. An interesting (if not overly ambitious) goal would be to clear this milestone by the end of 2015.[/QUOTE]
It went back up. 50,247 right now. Must have been a few that expired, and new assignments aren't keeping pace with the expired stuff at the moment. |
New milestone added
Just an FYI, I went ahead and added the countdown to double-checking up to 34M on the milestone page, now that all exponents are assigned.
I guess we'll just keep on trucking with these minor things for now since they are kind of interesting after all. I'm kind of waiting on adding the countdown for 57M single checks until we finish off the 54-55M milestones. Those in the 54M range are stubborn... I can't remember who noticed it, but they were right, they're checking in frequently but just pushing out the estimated completion a little bit each time. If anything that just shows that the completion dates in the client are wildly inaccurate in some cases... like George said, if a machine is working at < 50% per day then it'll be way off. I suspect that may be the case...maybe a computer that's turned off nights and weekends or something. |
[QUOTE=Madpoo;395803]Those in the 54M range are stubborn... I can't remember who noticed it, but they were right, they're checking in frequently but just pushing out the estimated completion a little bit each time. If anything that just shows that the completion dates in the client are wildly inaccurate in some cases... like George said, if a machine is working at < 50% per day then it'll be way off. I suspect that may be the case...maybe a computer that's turned off nights and weekends or something.[/QUOTE]
Maybe a server calculated ETA? :) Maybe something like this: [URL="http://www.mersenneforum.org/showpost.php?p=388111&postcount=1548"]http://www.mersenneforum.org/showpost.php?p=388111&postcount=1548[/URL] |
[QUOTE=ATH;395806]Maybe a server calculated ETA? :) Maybe something like this:
[URL="http://www.mersenneforum.org/showpost.php?p=388111&postcount=1548"]http://www.mersenneforum.org/showpost.php?p=388111&postcount=1548[/URL][/QUOTE]There is a much simpler solution. Remove the useless ETA. [size=1][color=grey]I suspect I may have suggested this previously. Maybe more than once? [sub][sub][sub][sub]Hey, who broke my record?[/sub][/sub][/sub][/sub][/color][/size] |
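A server-calculated ETA of the kind ATH links to could be as simple as projecting from the trailing clearance rate. This is a rough sketch with invented numbers, not the actual server logic; it also returns no ETA at all when nothing has cleared recently, which addresses the "useless ETA" complaint:

```python
from datetime import date, timedelta
from typing import Optional

def server_eta(remaining: int, cleared_last_30_days: int,
               today: date) -> Optional[date]:
    """Project a milestone completion date from the trailing 30-day
    clearance rate.  Returns None when nothing cleared recently, i.e.
    there is no meaningful ETA to show."""
    if cleared_last_30_days <= 0:
        return None
    days_left = remaining * 30 / cleared_last_30_days
    return today + timedelta(days=round(days_left))

# Example: 18 exponents left, 12 cleared in the last 30 days.
print(server_eta(18, 12, date(2015, 3, 25)))  # -> 2015-05-09
```

Unlike the client-reported dates, this can't be skewed by one machine that runs nights-and-weekends, though it still jumps around when a straggler finally checks in.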
[QUOTE=Madpoo;395561]It went back up. 50,247 right now.
Must have been a few that expired, and new assignments aren't keeping pace with the expired stuff at the moment.[/QUOTE] Hmmm? As far as I know this number should never be able to go up, unless it is a correction of an error. Certainly new assignments or expirations should not be able to let this number go up. |
[QUOTE=tha;395825]Hmmm? As far as I know this number should never be able to go up, unless it is a correction of an error. Certainly new assignments or expirations should not be able to let this number go up.[/QUOTE]
I agree. How did this number go up? The only way I can see is if a number of results were errors, or if the original figure was wrong to begin with. |
[QUOTE=flagrantflowers;395827]I agree. How did this number go up. Only way I can see how is if a number of results were errors or if the original figure was wrong to begin with.[/QUOTE]
After Madpoo reported seeing the number jump up, I checked and saw the number about the same as before (below 50,000). I guessed what Madpoo observed was some sort of glitch. But then in the last couple of days, I saw the number increase from 49,3xx to 49,6xx. So not only is there the question of why there was an increase, but also why I saw the number increasing at a different time than Madpoo did. (BTW, I finished one of the <56M exponents a couple hours ago.) |
[QUOTE=tha;395825]Hmmm? As far as I know this number should never be able to go up, unless it is a correction of an error. Certainly new assignments or expirations should not be able to let this number go up.[/QUOTE]
Oh, you're right... at the time I was updating some info there and thinking about unassigned exponents, so I got that stuck in my brain. But yeah, it could be that some double-checks weren't matching, resulting in the need for triple-checks. But that'd be a heckuva lot. I just checked, and there are 1,733 exponents up to 37156667 that need triple-checking.

Speaking of, I kind of thought it would be cool to have another assignment type available - "triple checking". Admittedly the available exponents for that type would be limited, but for exponents where a first and second time check didn't match, I think it'd be nice to know sooner rather than later which one was right. Otherwise the exponent goes back into the general pool for double-checks and it may still be a while before the double-check wavefront catches up.

In my head I had an idea to see if there's a correlation between certain users/computers and bad results. The more triple-checks get done, the more data to work with... it came up because I was doing some specific triple-checking and noticed that the "losers" in some I was checking were all from the same user... enough so that I thought it might even make sense to look closer at others by that person and re-test them earlier even if they weren't already flagged as "suspect". But then my sample set was only about 3-4 triple-checks, so I may be reading too much into it. :smile:

EDIT: On reflection, some of those 1,733 may have reported an error during the first-time check but no second check has actually been done yet. When a double-check comes in, it may actually match. I could change my query to account for that but I'm too lazy right now. :)

EDIT #2: I guess I'm not that lazy after all... it really is 1,733 needing triple-checks. That of course could change if some that haven't been double-checked yet turn out to need a triple-check. I even found a handful that need quadruple (or more) checks. Most or all of those seem to result from duplicates in the v4 migration, where the same suspect result shows up multiple times, as does its matching-but-suspect counterpart. I'm testing one or two to find out for sure. |
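The kind of "SQL magic" being described might look something like this. This is only a sketch: the table and column names are invented, and the real PrimeNet schema is surely different.

```python
import sqlite3

# Hypothetical, simplified schema -- not the real PrimeNet tables.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE ll_results (exponent INTEGER, residue TEXT, status TEXT)")
con.executemany(
    "INSERT INTO ll_results VALUES (?, ?, ?)",
    [
        (1000003, "AAAA", "unverified"),  # first-time check
        (1000003, "BBBB", "unverified"),  # mismatched double-check -> triple-check needed
        (1000033, "CCCC", "unverified"),  # only one check so far, nothing to compare
        (1000037, "DDDD", "verified"),
        (1000037, "DDDD", "verified"),    # matching pair, verified
    ],
)

# Count DISTINCT exponents with 2+ unverified results whose residues disagree,
# i.e. candidates for a triple-check.
(need_triple,) = con.execute("""
    SELECT COUNT(*) FROM (
        SELECT exponent
        FROM ll_results
        WHERE status = 'unverified'
        GROUP BY exponent
        HAVING COUNT(*) >= 2 AND COUNT(DISTINCT residue) > 1
    )
""").fetchone()
print(need_triple)  # -> 1
```

Note that the HAVING clause also handles the "EDIT" caveat above: an exponent with only one (suspect) result is not counted until a mismatching second result actually arrives.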
As far as I understand it, the 'Countdown to proving .... is the ... Mersenne prime' number should only go down, and only when a matching double-check comes in. So the need for a triple check or anything else should not affect these numbers. I'd like to hear if the implementation does anything else.
|
[QUOTE=Madpoo;395929]
Speaking of, I kind of thought it would be cool to have another assignment type available - "triple checking". [/QUOTE] Or perhaps a more general case of "Suspicious Results"? I actually really like that idea.

If there ever was any value in saying that "Every exponent up to XX,XXX,XXX has been tested at least once," then whatever value there was completely evaporates when we realize that however many hundred results have error codes. Frankly, getting the suspicious results cleaned up ASAP is of great value to anyone wondering about the integrity of their hardware.

My recommendation would be that the reliability and confidence of a CPU should be high for it to be assigned suspicious results. Further, the reliability of the CPU should not be affected as harshly when it returns a mismatched residue for "Suspicious Results" work. |
[QUOTE=tha;395934]As far as I understand it the number of 'Countdown to proving .... is the ... Mersenne prime' numbers should go down only when a matching double check is in. So the need for a triple check or anything else should not relate to these numbers. I'd like to hear if the implementation is anything else.[/QUOTE]
Well crumb, you're right. I had to look at where it's getting that count from (I didn't write it, so I wasn't terribly familiar with it except in passing). Basically, it's a simple count of how many exponents below that one are currently marked as "unverified" in the database. Things that can make a result go from "unverified" to something else are:

- A factor is found for it, in which case all previous LL results for that exponent are changed to a "factored" state
- A successful double-check comes in, and both the 1st/2nd checks are marked as "verified"
- A double-check comes in that doesn't match -- I'd have to check the code to see what happens there, but one or the other could be changed to "unverified / suspect" (of which there are those 1,733 currently)
- A matching triple-check comes in... the odd result(s) out get marked as "bad" and the matching pair get marked as "verified"
- A mismatching triple-check comes in, which is basically treated the same as a mismatched double-check... i.e. some may get marked as "suspect".

So... yeah, the only reason for that number to change is if the number of "unverified" results went up, which it shouldn't really. The only way the # of unverified results would increase is if a double-check came in and somehow that new one *and* the old one both got marked suspect for some reason. I don't know how the code handles those situations though... like, does only a result with certain types of error codes get marked suspect? Or if two seemingly error-free runs have a mismatch, how would it know which one it thinks is suspect, or does it just consider them both "unverified"?

And yes, the SQL query does a "distinct" clause so it's not counting the same exponent multiple times in that case. :)

Anyway, this does deserve a little digging I suppose, although I wouldn't deem it critical... but it is curious. |
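The state transitions listed above can be sketched as a toy function. To be clear, this is not PrimeNet's actual code, just a model of the double-check case described in the post; the function name and return convention are made up.

```python
# Toy model of what happens to the two LL results for an exponent when a
# double-check arrives (not PrimeNet's real logic).
def apply_double_check(first_residue, second_residue, factor_found=False):
    """Return the new (first result, second result) states."""
    if factor_found:
        return ("factored", "factored")   # a factor supersedes any LL results
    if first_residue == second_residue:
        return ("verified", "verified")   # matching pair
    # Mismatch: at least one run is wrong, but which one isn't known yet,
    # so both sit as suspect/unverified until a triple-check breaks the tie.
    return ("suspect", "suspect")

assert apply_double_check("ABCD1234", "ABCD1234") == ("verified", "verified")
assert apply_double_check("ABCD1234", "EF987654") == ("suspect", "suspect")
assert apply_double_check("ABCD1234", "EF987654", factor_found=True) == ("factored", "factored")
```

The mismatch branch is exactly the open question in the post: the model can't say which run is bad, only that neither can be trusted yet.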
[QUOTE=Madpoo;395939]Well crumb, you're right. I had to look at where it's getting that count from (I didn't write it, so I wasn't terribly familiar except in passing)....
Anyway, this does deserve a little digging I suppose, although I wouldn't deem it critical...but it is curious.[/QUOTE] Turns out this has a relatively mundane explanation. The numbers I was looking at came from my mocked-up milestone page with some additional info.

Not only that, but the count of checks needed to finish up double-checking the known Mersenne numbers is done in an unusual (to me) way, where it takes the result of the previous count and adds it to the next one. Why that matters is that when we finished double-checking M44 recently, the code that tabulates M45's countdown was still adding in the result of the previous query on the page, which happened to be using the same variable name. Oops.

Once I fix that, you should expect to see the countdowns for M45-M48 drop by whatever the previous countdown number on the page was. For the normal milestone page, that means they'll drop by 386, since that's the 34M double-check count right now. For the mocked-up page that had a countdown for first-time checking to 58M, the difference was even higher.

That also cleared up a mystery for me in the Mxx countdowns, since the individual queries only count exponents between each prime and the previous one, not the total from the "double-check up to" number. That may make each SQL query marginally faster, but when I was looking at those things just now it had me wondering, and that's how I noticed the incrementing counter.

Enjoy the fixed countdown for M45 and beyond... sorry for the muck-up. :smile: |
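The accumulator bug described above is easy to reconstruct in miniature. The per-range counts below are invented; 386 stands in for the stale 34M double-check count that leaked into the running total.

```python
# Made-up per-range countdown counts for each milestone block.
per_range = {"M45": 1200, "M46": 2500, "M47": 4100}

def countdowns(per_range, leftover=0):
    """Running total across milestone ranges; `leftover` models the stale
    value left in a reused variable from an earlier query on the page."""
    total, out = leftover, {}
    for milestone, n in per_range.items():
        total += n
        out[milestone] = total
    return out

buggy = countdowns(per_range, leftover=386)   # previous query's result leaks in
fixed = countdowns(per_range, leftover=0)     # accumulator reset between sections

# Every milestone's countdown is inflated by exactly the leaked amount.
assert all(buggy[m] - fixed[m] == 386 for m in per_range)
```

The running-total design itself is fine (each query only counts its own range, which keeps the SQL cheap); the bug is purely the unreset variable carrying a value from an unrelated query.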
[QUOTE=TheMawn;395937]Or perhaps a more general case of "Suspicious Results?" I actually really like that idea.
If there ever was any value in saying that "Every exponent up to XX,XXX,XXX has been tested at least once," then whatever value there was completely evaporates when we realize that however many hundred results have error codes. Frankly, getting the suspicious results cleaned up ASAP is of great value to anyone wondering about the integrity of their hardware. My recommendation would be that the reliability and confidence of a CPU should be high for it to be assigned suspicious results. Further, the reliability of the CPU should not be affected as harshly when it returns a mismatched residue for "Suspicious Results" work.[/QUOTE] There are some interesting things I'm seeing regarding some users and a very high propensity for bad results.

I wrote a query and limited it to accounts that have returned at least 100 LL results (to avoid skew from accounts that have returned only a handful of results that were all bad, and thus show a very high bad/good ratio). There are 1027 accounts that have returned at least 100 LL checks where at least one of them wound up being bad.

That's not terribly surprising for some of the prolific accounts, like CurtisC, which has 109,771 results total and 63 bad ones. That's an awesome measure of the quality of their work: just a 0.06% error rate.

On the other end of the scale, we have a user who has checked in 121 results of which 80 were bad (a 66.12% failure rate). The most prolific bad results came from a user where 194 of their 1010 total were bad. Lower percentage-wise at 19.21%, but that's still 194 exponents that needed a triple-check somewhere along the way.

I didn't look at v4 results or accounts since those are a little different to group together, but this was interesting enough. |
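The per-account tally described above could be sketched like this. The raw result list is fabricated, built only so the two ratios from the post (63/109,771 and 80/121) fall out; "user_x" is a placeholder name.

```python
from collections import defaultdict

# Fabricated (account, result_is_bad) pairs reproducing the post's two data points.
results = (
    [("CurtisC", False)] * 109708 + [("CurtisC", True)] * 63
    + [("user_x", False)] * 41 + [("user_x", True)] * 80
)

tallies = defaultdict(lambda: [0, 0])          # account -> [total, bad]
for account, is_bad in results:
    tallies[account][0] += 1
    tallies[account][1] += is_bad

# Keep only accounts with >= 100 LL results and at least one bad result.
bad_rate = {acct: bad / total
            for acct, (total, bad) in tallies.items()
            if total >= 100 and bad > 0}

assert round(bad_rate["CurtisC"] * 100, 2) == 0.06   # 63 / 109,771
assert round(bad_rate["user_x"] * 100, 2) == 66.12   # 80 / 121
```

The `total >= 100` filter is the same guard as in the post: without it, an account with 3 results, all bad, would dominate any "worst offenders" list.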
It may actually be better to look one level below, at user computers. For example, I know I have some top-notch enterprise level machines with ECC memory, and at the same time I have some crappy desktops, that reboot all the time (I actually took the latter category off-GIMPS, but I still have some better desktop grade machines crunching).
Furthermore, one of my computers developed a memory problem; and the fact that I started to receive Prime95 error codes, is how I became alerted to it. Replaced the memory, and the computer has been crunching error-free since then. Anyway, my point is, this sort of statistic may be more useful when tied to a computer rather than a user. |
[QUOTE=TObject;395949]It may actually be better to look one level below, at user computers. For example, I know I have some top-notch enterprise level machines with ECC memory, and at the same time I have some crappy desktops, that reboot all the time (I actually took the latter category off-GIMPS, but I still have some better desktop grade machines crunching).
Furthermore, one of my computers developed a memory problem; and the fact that I started to receive Prime95 error codes, is how I became alerted to it. Replaced the memory, and the computer has been crunching error-free since then. Anyway, my point is, this sort of statistic may be more useful when tied to a computer rather than a user.[/QUOTE] I suppose so... If I group by computer instead of user, there are some pretty bad ones. Worst is a computer with 109 out of 152 bad results (71.71%). I wonder if people with such a bad track record are actually aware of it? Perhaps not since they might not realize their results are bad until a triple-check much later. Unless they look at their account results periodically and see just how many are bad, they wouldn't know. |
[QUOTE=TObject;395949]Anyway, my point is, this sort of statistic may be more useful when tied to a computer rather than a user.[/QUOTE]
+1! (One factoral... :wink:) |
[QUOTE=Madpoo;395951]I suppose so...
If I group by computer instead of user, there are some pretty bad ones. Worst is a computer with 109 out of 152 bad results (71.71%). I wonder if people with such a bad track record are actually aware of it? Perhaps not since they might not realize their results are bad until a triple-check much later. Unless they look at their account results periodically and see just how many are bad, they wouldn't know.[/QUOTE] A little more detail on that particular bad computer...

- Of the 109 bad results, 56 had a zero for the error code.
- Of the 43 non-bad results:
  - 4 are still unverified, awaiting a double-check
  - 17 are verified okay (double-check matched)
  - 21 are suspect - some error code, but they haven't been double-checked yet
  - 1 had a factor found later, so who the heck knows, but there were 2 mismatched LL tests done... I have my guess which one was bad :smile:

This particular computer checked in its last result in July 2012, although the user was active with other computers up to May 2014, with a much better track record.

EDIT: As you might guess, the 25 exponents this computer checked in that haven't been double-checked yet should be considered highly suspect... if it were me, I'd figure out a way to bump these up the priority list so they're double-checked earlier than usual. Doing that on some grand scale that takes a computer's failure rate into account would be pretty cool, but making that happen could be... interesting... from a coding point of view. |
[QUOTE=Madpoo;395954]
EDIT: As you might guess, the 25 exponents this computer checked in that haven't been double-checked yet should be considered highly suspect... if it were me I'd figure out a way to bump these up the priority list so they're double-checked earlier than usual. Doing that on some grand scale where it takes a computer's failure rate into account would be pretty cool, but making that happen could be...interesting...from a coding point of view.[/QUOTE] FYI, it seems like they get marked suspect when a double-check runs and doesn't match... and the one with an error code seems to be the one marked suspect? I don't know... just guessing.

So really, of those 25 unverified or suspect results, 21 have been double-checked and now need a triple-check. Only 4 have never been double-checked. I could share what those 4 are, but then you'd see whose account it is I'm talking about. :smile: |
[QUOTE=Madpoo;395955]I could share what those 4 are but then you'd see whose account it is I'm talking about. :smile:[/QUOTE]
As my GF often tells me, "Do it yourself; it's a small job".... :wink: |
[QUOTE=chalsall;395957]As my GF often tells me, "Do it yourself; it's a small job".... :wink:[/QUOTE]
Yeah, TMI. :smile:

Well, I just found out something else too... I tried to use the manual assignment page to pick a couple of these and test them to see if they're okay or not. Turns out it's hard to use that page to reserve a specific exponent for double-checking...

George might weigh in on this in case I'm totally wrong, but it looks like when you manually assign exponents, it creates a new computer in your account called "Manual testing". If you look at your account and its CPUs, you'll see it in there, along with another special CPU, "v4_computers", if you've ever used an older client. That "machine" has certain fixed values that feed into the reliability assessments. Which makes sense, because on that page there's no way of knowing which actual machine will be doing the test. Some normally reliable *user* might put this work on a very unreliable *cpu*.

Speaking of which, when looking into that I saw that there already is a concept of computer reliability that comes into play when a CPU requests more work. So as usual, George is way ahead on that. You can look at the CPU page in your account and see that reliability index. Now watch... I'll suggest that if a previous result that seemed error-free turns out to be bad, it should update that index, and then I'll find out it already does that. :)

(PS to George... the various thresholds for $MinExp probably need updating). |
[QUOTE=Madpoo;396025]George might weigh in on this in case I'm totally wrong, but it looks like when you manually assign exponents, it creates a new computer in your account called "Manual testing". If you look at your account and the CPU's, you'll see it in there along with another special CPU "v4_computers" if you've ever used an older client.
That "machine" has certain fixed values that feed into the reliability assessments. Which makes sense because on that page, there's no way of knowing which actual machine will be doing the test. Some normally reliable *user* might put this work on a very unreliable *cpu*. [/QUOTE] Turns out there's some code in the manual assignments that checks if an exponent is factored to at least 71 bits... the ones I'm trying to do are only factored to 70 bits (they're in the 40M-50M range). I kind of thought those had all been done up to 71 or 72 but I guess not entirely. |
[QUOTE=Madpoo;396033]Turns out there's some code in the manual assignments that checks if an exponent is factored to at least 71 bits... the ones I'm trying to do are only factored to 70 bits (they're in the 40M-50M range). I kind of thought those had all been done up to 71 or 72 but I guess not entirely.[/QUOTE]
Please do keep in mind the status of each candidate... No LL, no DC, matching LLs, factored. |
[QUOTE=chalsall;396040]Please do keep in mind the status of each candidate... No LL, no DC, matching LLs, factored.[/QUOTE]
Well, let me be more precise... it looks like there's some code that makes sure any double-check requests are factored to at least 71 bits, and first-time checks to at least 73 bits. That probably fairly represents most of what's out there for current assignments.

In my case I was trying to manually assign some pretty specific ones ahead of the current batch of double-checks, and I suppose the TF to higher bit levels just isn't there yet? No worries of course, it just had me scratching my noggin' to figure out why I was getting an error for my request... just a minor mystery. |
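The gate described above amounts to a one-line check. The 71/73-bit cutoffs come from the post; the function and dict names are invented for illustration.

```python
# Minimum trial-factoring depth (in bits) before each assignment type is handed
# out; the cutoffs are from the post, everything else here is a sketch.
REQUIRED_TF_BITS = {"double_check": 71, "first_time": 73}

def can_assign(work_type, tf_bits_done):
    """True if trial factoring has gone deep enough for this assignment type."""
    return tf_bits_done >= REQUIRED_TF_BITS[work_type]

assert can_assign("double_check", 71)
assert not can_assign("double_check", 70)   # the 40M-50M exponents in question
assert can_assign("first_time", 73)
```

This also explains the mystery error: the specific 40M-50M exponents were factored only to 70 bits, one bit short of the double-check threshold.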
By the way, 2 of those last 3 exponents below 55M (the ones being done by user "Haoran") are progressing REALLY slow.
I was curious about their progress, and it looks like over a 4-day period they progress about 0.4% - 0.5%. Since they're at ~65% done, it's going to take them probably another 8-9 months.

That 3rd one, being done by user "Ollum98", is also progressing pretty slowly... about 0.1% each time it checks in, every 3-4 days. It's at 96.20%, but still, at that rate we're looking at 3-4 months to completion.

These must really be running only a few hours a day or something, because the ETAs they calculate when they check in are VERY wrong, but I think we established that previously.

Thoughts from the crowd? I've already done my own LL tests on these 3, but I'm just holding them as double-checks since I figured they'd be done sooner than the end of 2015... now I'm skeptical. It's just weird to me to take nearly 2 years for one LL test. If I did check in my results, these other runs would still count as double-checks, but if anyone thinks that's a bad idea I'll do nothing. :)

It is an essentially meaningless milestone (first-time checks < 55M done)... it'll also hold up the <56M and probably <57M first-time checks too. I think it'd be kind of lame if it held up the countdown of first-time checks up to M48... I haven't thought out whether even these grandfathered assignments might actually be subject to expiration in the next 8-9 months? |
Holding them for DC is perfectly acceptable in my opinion. Give them the time that the rules say they should have, and then submit them as LL's and wait for their work to come in as a DC.
Just make sure that they don't get assigned to someone else in the meantime. |
[QUOTE=Madpoo;396132]I've already done my own LL tests on these 3 ... If I did check in my results, these other runs would still be double-checks, but if anyone thought that's a bad idea I'll do nothing.?[/QUOTE]I say check them in now. Productivity is not reduced and it could stop someone else doing another set of LL tests and wasting their time.
|
[QUOTE=retina;396136]I say check them in now. Productivity is not reduced and it could stop someone else doing another set of LL tests and wasting their time.[/QUOTE]
That's my inclination, but then I'm probably just annoyed at it taking so long. LOL

On a brighter note, I'm doing checks on some of these "suspicious" results that were turned in by flaky computers. Two of them finished up in the past 24 hours... one double-checked okay, and one was a "success" in that it proved the original result was in fact bad. Proof of my assertion that these bad computers really shouldn't be trusted. Good thing we double-check each exponent as a matter of course, but it'd be nice to find these possibly flaky first-time checks and do a quicker double-check on them. The odds of a missed prime lurking among the few hundred/couple thousand between M45 and M48 are low, but that'd sure be weird, and a little cool. Knocking out these suspicious results will help me sleep better at night anyway.

If I start with the most obviously suspicious ones, there were fewer than 10 (machines that had done over 100 results of which > 70% are known to be bad). Once those are cleared, I can broaden to some less likely but still highly probable bad results... at least 50 results by that computer where more than 50% are bad. Even then it's still only 16 currently. I have to really start stretching it, like > 10 results and > 50% failures, to get to 36 possibles.

Once I drop the failure rate to something like > 25% failures, that's when it gets to be a bigger pool... > 10 results of which > 25% are bad = 378 exponents. If we guess 20-30% of those are also bad, then we're cooking. |
[QUOTE=Madpoo;396132]I haven't thought out whether even these grandfathered assignments might actually be subject to expiration in the next 8-9 months?[/QUOTE]
The one at 96.2% will expire ~ June 4th 2015 + 3.33 days for every extra percent done, "plus a grace period if close to finished" (unspecified grace period).
The one at 64.1% will expire ~ May 15th 2015 + 3.33 days for every extra percent done.
The one at 63.4% will expire ~ May 13th 2015 + 3.33 days for every extra percent done.

From what you said, they are doing less than 0.3% per day (1% in 3.33 days), so they will eventually expire, except the one at 96.2% might just make it if it does 0.1% every 3 days, depending on the grace period. |
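The back-of-the-envelope formula above (base expiry date plus ~3.33 days per extra percent completed) can be written out directly. This is only a sketch of retina's estimate, not PrimeNet's actual rule, and it ignores the unspecified grace period.

```python
from datetime import date, timedelta

def estimated_expiry(base_date, percent_done, base_percent):
    """Expiry slides out ~3.33 days for every percent gained past the baseline."""
    extra_days = round((percent_done - base_percent) * 3.33)
    return base_date + timedelta(days=extra_days)

# The 96.2%-done exponent, using the June 4th 2015 baseline from the post:
assert estimated_expiry(date(2015, 6, 4), 96.2, 96.2) == date(2015, 6, 4)

# One more percent of progress pushes expiry out by ~3 days:
assert estimated_expiry(date(2015, 6, 4), 97.2, 96.2) == date(2015, 6, 7)
```

This also shows why the 96.2% exponent is a borderline case: gaining 0.1% every 3 days moves the expiry date out by only ~0.33 days per check-in, so completion and expiry are racing each other.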
[QUOTE=retina;396136]I say check them in now. Productivity is not reduced and it could stop someone else doing another set of LL tests and wasting their time.[/QUOTE]
[QUOTE=Madpoo;396138]That's my inclination, but then I'm probably just annoyed at it taking so long. LOL[...][/QUOTE] I'll be as unpopular as I always am, but I'm still going to say this. :smile:

Productivity is not the only consideration here. If I were the assignee doing the first-time test slowly but surely, I'd be pretty annoyed if you pinched my assignment and relegated my work to a DC. I'd be embarrassed that someone had thought it necessary to do this, and I'd feel worthless as a GIMPS participant as a result.

Every participant in this project is an individual. Some of us work with only one slow machine and have the client software running only part time. That does not mean we are any less enthusiastic than those of you who throw entire farmyards of hardware at the project. Taking part and meaning something is what motivates us.

I agree with you, retina, that the assignment may well get poached by others anyway. But I don't like to see prominent participants here condoning such impatient behaviour.

[end of sour rant] :alex::leaving: |
[QUOTE=Brian-E;396151]Productivity is not the only consideration here.[/QUOTE]
I hear, understand and appreciate what you're saying. But, at the same time, this project requires rather serious amounts of compute (obviously). This is why the new assignment rules were put in place -- to hopefully be able to let everyone meaningfully participate with the compute they're willing and able to contribute without holding up "milestones" nor being "stepped on". We only have a few more months before even the "grandfathered" assignments are dealt with -- either because the assignee completes, or the system automatically recycles. |
[QUOTE=chalsall;396159]I hear, understand and appreciate what you're saying. But, at the same time, this project requires rather serious amounts of compute (obviously).
This is why the new assignment rules were put in place -- to hopefully be able to let everyone meaningfully participate with the compute they're willing and able to contribute without holding up "milestones" nor being "stepped on". We only have a few more months before even the "grandfathered" assignments are dealt with -- either because the assignee completes, or the system automatically recycles.[/QUOTE] Thanks, and I agree with everything you write. Let's hope that the new assignment rules eliminate any excuse for pinching other people's work once and for all. |
[QUOTE=Brian-E;396167]Thanks, and I agree with everything you write. Let's hope that the new assignment rules eliminate any excuse for pinching other people's work once and for all.[/QUOTE]
Yeah, the new assignment rules seem to be having a positive impact on the newer ranges of exponents (for both first and second checks). It's only these grandfathered assignments I'm looking at. If it weren't for the grandfather setup, I'm pretty sure that almost all of the assignments still out there from before Feb 28, 2014 would have been expired by the new rules. The grandfather rules seem to be especially generous.

Let me put it in a slightly different light... let's say we've double-checked all of the exponents up to M45 except for 1 solitary assignment that was handed out in 2013. For the purposes of this example, let's also say the machine with that assignment is working at the same pace as one of those real-life ones... about 0.1% every 3 days, with an estimated 4-5 months still to go.

In this case we're not merely talking about a basically meaningless milestone like the n-millionth exponent, but an important one, like actually being able to say that M45 really is M45. Would there be an objection in that case to just running it on a machine that can finish it in 13 hours, compared to the 2+ years the assigned machine was taking? Or do we just wait? It's not an entirely theoretical question, because it's bound to happen with some of the work out there.

There are 23 grandfathered assignments out there for exponents below 37156667. They range from 1.5% to 97.1% complete. All were assigned in Feb 2014, the last month where the grandfather rules are applicable. We know the ETAs from the machines themselves are wildly wrong when progress is so slow, but the last of that batch is supposedly going to finish in July 2015... I doubt it personally. :smile:

In reality, double-checking up to M45 is still a ways off and probably won't be held up, but on the other hand, these very 3 exponents we're talking about will be blocking the milestone of at least single-checking everything up to M48, whether we want to consider that a major milestone or not.

There are 17 first-time checks below M48 that were assigned before the recycling rules. Again, at some point all of these grandfathered machines will expire anyway... or will they? The grandfather rules make it seem like some machines "close to finishing" but progressing at a snail's pace may still be around for years.

What I'm getting at is: would there ever be an acceptable reason to look at one of these grandfathered assignments and just apply the same rules that every other assignment since March 2014 is subject to, so as not to hold up a milestone, or just in the interest of moving things along in general? After all, a decision was already made to expire work assigned to slow machines... that's not in question. I'm just flipping it a bit and asking whether the grandfathered assignment rules are a little too lenient? I think it's a valid question in terms of old assignments blocking *major* milestones... I'll grant that completing a minor milestone isn't really a big deal. It's nice to tidy up a range, but if you're not suffering from mild OCD it's really not that important. :smile:

On a more fun note... the current record for "oldest active LL assignment" goes to 332,210,191, assigned way back in Nov 2008: [URL="http://www.mersenne.org/assignments/?exp_lo=332210191&exp_hi=332210191&execm=1&exp1=1&extf=1&B1=Get+Assignments"]M332,210,191[/URL]

No surprise since it's a 100M-digit monster, but kudos to that user for chunking away at it with an ETA in 2016. 8 years of CPU thrown at one exponent, assuming that estimate is even close.

The oldest assignment for something in the <100M digit range is 58,998,341, assigned in Nov 2012: [URL="http://www.mersenne.org/assignments/?exp_lo=58998341&exp_hi=58998341&execm=1&exp1=1&extf=1&B1=Get+Assignments"]M58,998,341[/URL] |
To put on the table, I find [URL="http://www.mersenne.org/assignments/?exp_lo=30000000&exp_hi=34131828&execm=1&B1=Get+Assignments"]this query[/URL] interesting.
|
[QUOTE=Madpoo;396188]Let me put it in a slightly different light... let's say we've double-checked all of the exponents up to M45 except for 1 solitary assignment that was handed out in 2013. For the purposes of this example, let's also say the machine with that assignment is working at the same pace as one of those real life ones... about 0.1% every 3 days, and we're estimating there about 4-5 months still to go.
In this case we're not merely talking about a basically meaningless milestone like the n-millionth exponent, but one involving an important milestone like actually saying M45 is really M45.[/quote] I'd say wait, there is no hurry. Look at it this way, 5 years from now will you care if the M45 milestone was completed in March 2015 or September 2015? [quote] The oldest assignment for something in the <100M digit range is 58,998,341 assigned in Nov 2012: [URL="http://www.mersenne.org/assignments/?exp_lo=58998341&exp_hi=58998341&execm=1&exp1=1&extf=1&B1=Get+Assignments"]M58,998,341[/URL][/QUOTE] By my understanding of the expiration rules, that exponent should already have been recycled. Do you want to investigate why, or would you like me to investigate? |
[QUOTE=Prime95;396191]By my understanding of the expiration rules, that exponent should already have been recycled. Do you want to investigate why, or would you like me to investigate?[/QUOTE]
What would be very much appreciated, by many, is to understand the actual recycling rules enacted by Primenet. Why, for example, is 33944569 still out after 107 days with no work done? |
[QUOTE=Madpoo;396132]
[snip] It's just weird to me to take nearly 2 years for one LL test. [snip] [/QUOTE] 2 years? Nothing. I remember when I were a lad... exponent 79299719:

[B][Mon Jan 03 05:18:24 2000][/B] Exponent started - factoring will go to 72 bits.
[Wed Sep 06 02:25:51 2000] UID: nitro/liberator, M79299719 stage 1 is 100.000000% complete.
[Fri Oct 06 02:58:47 2000] UID: nitro/liberator, M79299719 completed P-1, B1=780000, B2=12480000, WW1: 03B3AE1F

[B]On a 533MHz Celeron system, factoring and P-1 took 277 days[/B]. The exponent was then switched to a 733MHz P3. Fast forward a few years:

[B][Wed Aug 23 06:53:27 2006][/B] UID: nitro/m79299719, M79299719 is not prime. Res64: B2C51947EEBD3B05. Wc1: 30EAA708,54176722,01000100

6 years, 8 months, 20 days. You had to be dedicated in those days :grin: I still have the interim residue files generated every 1M iterations... |
[QUOTE=Madpoo;396188]Again, at some point all of these grandfathered machines will expire anyway... or will they? The grandfather rules make it seem like some machines "close to finishing" but progressing at a snails pace may still be around for years.[/QUOTE]
Not years, but about 8-9 months at most. An exponent assigned Feb 28th 2014 which is 95% done will not expire until Dec 8th 2015 (not counting the unspecified grace period). So by December we should see the last of the grandfathered exponents gone, probably a bit before.

[QUOTE=Gordon;396195]6 years 8 months 20 days, you had to be dedicated in those days :grin:[/QUOTE] That's dedication, but that was also a ~24M-digit exponent, and in 2000 that was almost before things had even opened up for 10M-digit exponents? I did a 10M-digit exponent around 2003 on a 950MHz Athlon, which took 9-10 months. |
[QUOTE=Gordon;396195]6 years 8 months 20 days, you had to be dedicated in those days :grin:[/QUOTE]
Or stupid... Never give the full result! I'll run the DC on this; will take about three days. |
[QUOTE=Madpoo;396188] these very 3 exponents we're talking about will be blocking the milestone of at least single-checking everything up to M48, if we want to consider that a major milestone or not.[/quote]
I do not consider that to be a major milestone. It means nothing. Double-checks are huge because we get to say that the X million Mersenne numbers between M43 and M44 are all composite with 99.9999...[SUB](20)[/SUB]...999% certainty. First-time checks mean nothing. The LAST first-time check for the M47 to M48 block (for example) is of no consequence, because we can be pretty sure there are two thousand bad results mixed in. |
[QUOTE=Prime95;396191]
By my understanding of the expiration rules, that exponent should already have been recycled. Do you want to investigate why, or would you like me to investigate?[/QUOTE] Oops, I forgot this clause: "Assignments are recycled when the exponent moves into the first category...". Although not explicitly stated, this clause also applies to grandfathered assignments. |
[QUOTE=chalsall;396199]Or stupid... Never give the full result!
I'll run the DC on this; will take about three days.[/QUOTE] I've already logged my result in; remind me again, what good does the full residue do anyone?

79299719 was chosen because it was the largest exponent the software could test at that time. |
[QUOTE=Prime95;396191]I'd say wait, there is no hurry. Look at it this way, 5 years from now will you care if the M45 milestone was completed in March 2015 or September 2015?[/QUOTE]
I guess I'm fine with waiting... I don't mean to but I know I'm just stirring the pot... I shouldn't do that. :smile:

[QUOTE=Prime95;396191]By my understanding of the expiration rules, that exponent should already have been recycled. Do you want to investigate why, or would you like me to investigate?[/QUOTE]

I poked around at a few of the expiration sprocs. There's one that runs and specifically takes care of non-LL work, which is pretty straightforward. The other one, which puts exponents back into the "available" pool, is the one I spent a little time noodling at today, but I haven't quite nailed down all the pieces involved.

I did notice that it won't expire an assignment, even one matching all of the other requirements for expiration, if the exponent itself is still pretty high up there (40M higher than the latest critical range, like 51M for a first-time LL test). I get the reason for that: exponents that much higher up aren't really good candidates for reassignment anyway, since they're not going to be handed out soon even if they were available.

My guess was that 58,998,341 was still making progress and being regularly checked in, so the section that applies to grandfathered work is allowing it to proceed. That's the part I need to stare at longer, with the various date limits (@lim1 - @lim3) and exponent limits (@exp1), to figure out just where the thresholds would be on a given day. I suppose the exponent is just beyond where that latest critical limit is... is that meant to avoid expiring assignments that are being worked on but are still outside the current range of new assignments? After all, I think there are still a few available exponents smaller than that one, so it's just giving that computer a little extra chance to pick up the pace?

EDIT: Ah, yes... I just let that sproc run the section in question to see what value it came up with for @exp1: it's 58,857,158. So anything higher than that wouldn't necessarily be expired, since it's not in that 'critical' range as of this moment. If the exponent hadn't checked in for > 180 days, or hadn't even started yet, it would be expired, but that particular one is still (slowly) doing things.

That's some inside-baseball stuff there, but for everyone else, let it be known that even under the grandfathered rules, there are still some allowances made: expiration won't happen until those exponents are actually going to be reassigned to someone else in the short term. Otherwise that computer gets to hang on and hopefully get it done. |
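[Editor's note: the expiration rule described in the post above can be sketched roughly as follows. This is a hypothetical Python rendering, not the actual server sproc; the names and the fixed `CRITICAL_LIMIT` are assumptions, with the one concrete value (@exp1 = 58,857,158) taken from the post, where the real sproc computes it dynamically.]

```python
from datetime import datetime, timedelta

# Assumed constants based on the discussion; the real sproc derives
# @exp1 (the top of the "critical" range) from current assignments.
CRITICAL_LIMIT = 58_857_158   # @exp1 as observed in the post
STALE_DAYS = 180              # no check-in for this long => expire

def should_expire(exponent, last_checkin, has_started, now=None):
    """Rough sketch of the grandfathered-assignment expiration rule.

    An assignment above the critical range is left alone as long as it
    keeps checking in; it is only recycled if it was never started or
    has gone silent for more than STALE_DAYS.
    """
    now = now or datetime.utcnow()
    if exponent > CRITICAL_LIMIT:
        # Outside the range being handed out now: expire only if
        # stalled, since it wouldn't be reassigned soon anyway.
        if not has_started:
            return True
        return (now - last_checkin) > timedelta(days=STALE_DAYS)
    # Inside the critical range: the normal, stricter rules apply
    # (collapsed to "expire" here for illustration).
    return True
```

Under this sketch, 58,998,341 (above the 58,857,158 limit, started, and recently checked in) survives, which matches the behavior described above.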
[QUOTE=Madpoo;396203]I don't mean to but I know I'm just stirring the pot... I shouldn't do that. :smile:[/QUOTE]
ALWAYS do that! That's how progress happens! |
[QUOTE=Gordon;396202]I've already logged my result in, remind me again what good does the full residue do [/QUOTE]
It's a little complicated. No worries, this is being managed. |
[QUOTE=Gordon;396202]I've already logged my result in, remind me again what good does the full residue do anyone?[/QUOTE]Yay, free credits. With the residue I don't have to bother with running the whole test, I'll just post a DC immediately.
[size=1][color=grey]Credits (i.e. arbitrary numbers stored in a database) are very important and must be sought after no matter who it hurts or what it costs.[/color][/size] |
[QUOTE=retina;396206]Yay, free credits. With the residue I don't have to bother with running the whole test, I'll just post a DC immediately.
[size=1][color=grey]Credits (i.e. arbitrary numbers stored in a database) are very important and must be sought after no matter who it hurts or what it costs.[/color][/size][/QUOTE] Ah... it's a good job I have all the interim residue files, so you can submit yours for comparison with them :tu: |
[QUOTE=chalsall;396199][QUOTE=Gordon;396195][Wed Aug 23 06:53:27 2006]
UID: nitro/m79299719, M79299719 is not prime. Res64: B2C51947EEBD3B__. Wc1: 30EAA708,54176722,[B]01000100[/B] [/QUOTE]Or stupid... Never give the full result! I'll run the DC on this; will take about three days.[/QUOTE] Maybe it's a good thing, because at least the error code is clearly visible, which for some reason [URL="http://www.mersenne.org/report_ll/?exp_lo=79299719"]was not recorded[/URL] to the database. |
[QUOTE=Batalov;396209]Maybe it's a good thing, because at least the error code is clearly visible, which for some reason [URL="http://www.mersenne.org/report_ll/?exp_lo=79299719"]was not recorded[/URL] to the database.[/QUOTE]
Hmm...not sure why it doesn't show up there, but in the database itself it does show that error code. Must be some parsing thing to look at later on. |
[QUOTE=Batalov;396209]Maybe it's a good thing, because at least the error code is clearly visible, which for some reason [URL="http://www.mersenne.org/report_ll/?exp_lo=79299719"]was not recorded[/URL] to the database.[/QUOTE]Nor was the date of 2006.
|
[QUOTE=retina;396218]Nor was the date of 2006.[/QUOTE]
That's because it took me "a while" to check it in :redface: "A while" - an undetermined period of time, to cover up a multitude of things, usually forgetting to get around to doing something... |
Check them in. Just my twopence.
|
[QUOTE=Madpoo;396203]That's some inside baseball stuff there, but for everyone else, let it be known that even under the grandfathered rules, there are still some allowances made where expiration won't happen until those exponents are actually going to be reassigned to someone else in the short term. Otherwise that computer gets to hang on and hopefully get it done.[/QUOTE]
But... That still doesn't explain why [URL="http://www.mersenne.org/assignments/?exp_lo=33944569&exp_hi=33944569"]33944569[/URL] has not been recycled. This is not a grandfathered assignment, nor has any work been done on it, nor has it even been checked in recently. Why has it not been recycled? According to the stated [URL="http://www.mersenne.org/thresholds/"]assignment rules[/URL], it should have been. All I'm asking is that what is promised is actually what is done. Or, alternatively, that what is actually done is communally known. |
[QUOTE=chalsall;396246]But... That still doesn't explain why [URL="http://www.mersenne.org/assignments/?exp_lo=33944569&exp_hi=33944569"]33944569[/URL] has not been recycled.[/QUOTE]
Just an idea: Maybe 108 days ago 33.9M was in Cat 3, so it has 180 days to start? I assume it's the category at assignment time that counts, and not the category the exponent moves into after assignment? |
[QUOTE=ATH;396252]Just an idea: Maybe 108 days ago 33.9M was in cat 3 so it has 180 days to start? I assume the category at assignment counts and not the category the exponent moves to after assignment?[/QUOTE]
Possible, but I seriously doubt it. Happy to be proven wrong (I don't currently spider the thresholds page, so I can't say for sure). |
[QUOTE=chalsall;396254]Possible, but I seriously doubt it.[/QUOTE]
I doubted it too, but the cat 3 threshold on 11-08 was 33944558. Maybe we need a web page that displays that info.... |
[QUOTE=Prime95;396258]I doubted it too, but the cat 3 threshold on 11-08 was 33944558.
Maybe we need a web page that displays that info....[/QUOTE] Thanks George! My bad. A web page showing that data would be helpful. And, perhaps, we should consider increasing the boundaries for the lower categories for DCs by a little bit? Clearly the Cat 1 workers are eating through them quickly! :smile: |
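[Editor's note: the resolution here, that the category in effect on the assignment date governs the expiration window, can be checked with a sketch like the one below. The lookup table and function are hypothetical; the single data point (Cat 3 threshold 33,944,558 on 2014-11-08) is the value George quoted.]

```python
# Hypothetical lookup of historical Cat 3 thresholds, keyed by date.
# Only the one data point quoted in the thread is filled in.
CAT3_THRESHOLDS = {
    "2014-11-08": 33_944_558,
}

def was_cat3_at_assignment(exponent, assignment_date):
    """True if the exponent sat above the Cat 3 boundary on the day it
    was assigned, and so received the longer 180-day window to start."""
    threshold = CAT3_THRESHOLDS.get(assignment_date)
    if threshold is None:
        raise KeyError(f"no threshold recorded for {assignment_date}")
    return exponent > threshold
```

With these numbers, 33,944,569 was indeed just above the Cat 3 boundary when assigned, which explains why it had not yet been recycled.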
I added the date feature to the assignment rules page. You have to know the hidden parameter to use the feature:
[url]http://www.mersenne.org/thresholds/?dt=2014-11-09[/url] |
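[Editor's note: for anyone scripting against that page, the dated URL can be built as below. Only the `dt` parameter shown in George's post is confirmed; everything else here is a trivial illustrative helper.]

```python
from urllib.parse import urlencode

def thresholds_url(date_str):
    """Build the dated assignment-rules URL using the hidden dt
    parameter (date in YYYY-MM-DD form, per the example above)."""
    return "http://www.mersenne.org/thresholds/?" + urlencode({"dt": date_str})
```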
[QUOTE=Prime95;396265]I added the date feature to the assignment rules page. You have to know the hidden parameter to use the feature:[/QUOTE]
Thanks George! Very helpful for the OCD types amongst us who don't have access to the raw DB. :smile: May I suggest we start a discussion (either here, or on the original "new assignment rules" thread) about increasing the deltas on the different ranges for DC? I'd initially suggest something like Cat 1: 5000, Cat 2: 15000, Cat 3: 50000. The whole point of the new rules was to lessen the motivation for "poaching". Clearly this and other examples show that Cat 3s may slide into the Cat 1 range well before they are due for recycling, unintentionally creating a bottleneck. Taken to the extreme, there could be an interesting self-reinforcing situation wherein almost the entire Cat 1 range is actually old Cat 2s and 3s, while the fast / dedicated machines mostly clear out Cat 2s and 3s. We should really aim for a balance such that when a candidate moves into Cat 1, it is just about to be recycled by the system. Thoughts? |
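[Editor's note: chalsall's proposed widening can be expressed as offsets above the leading edge of the DC wavefront. The sketch below is purely illustrative: the post doesn't specify whether the deltas count exponents or span exponent ranges, so this treats them as range widths, and the function name and leading-edge value are invented.]

```python
# Proposed DC category deltas from the post above, read here as
# exponent-range widths above the lowest unverified exponent.
PROPOSED_DELTAS = {1: 5_000, 2: 15_000, 3: 50_000}

def category(exponent, leading_edge):
    """Return the DC category an exponent would fall into under the
    proposed boundaries, or None if it lies beyond the Cat 3 range."""
    for cat in (1, 2, 3):
        if exponent <= leading_edge + PROPOSED_DELTAS[cat]:
            return cat
    return None
```

The balance argued for above would then amount to tuning the deltas so that by the time `category(...)` returns 1 for an old assignment, that assignment is on the verge of being recycled anyway.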