[QUOTE=ATH;427669]If "Hours per day this program will run" is set to 24hours but the user is only running it 2 hours per day: 1000* 2/24 = 83
That is why this is a poor metric for speed since it is so dependent on that variable being correct.[/QUOTE] My current line of thinking is to look at their (GHz-days / # of workers) over the past 90 days. The way I figure it, a 67M LL test will take ~ 170 GHz-days. And since the cat 1 LL rules say it should be done in 90 days, we should be looking for systems that have done at least that much (per worker) in the past 90 days... right? Otherwise why would we assume they'd finish this new assignment in time? So then, who even cares about the # of results in that 90 days? Just look at the actual GHz-days they clocked and go on that. Plus the stuff I'd like to see, such as "no expirations and no bad results" in that same 90 day period. For DC work, it's 60 days and I think 35M exponents are taking 47 GHz-days, so we'd be looking for at least that much in 60 days for them to get DC cat 1. For cat 2, similar thing: just look at their past XX days and see if they've done at least YY GHz-days in that time, per worker. How is that sounding? Are we getting closer to something that people could be happy with? I figure this is probably the best idea so far because we're actually looking at past *real* performance and using that to estimate the likelihood they'd finish a cat 1 or 2 in the time allowed. |
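The eligibility test described above can be sketched in a few lines. This is only an illustration, not PrimeNet's actual code: the function name and parameters are made up, and the thresholds are the figures from the post (~170 GHz-days for a 67M first-time LL in 90 days, ~47 GHz-days for a 35M DC in 60 days).

```python
# Hypothetical sketch of the proposed Cat 1 eligibility rule:
# a machine qualifies if each worker's trailing throughput covers
# the estimated cost of the assignment within the deadline window,
# with no expirations or bad results in that same window.

def cat1_eligible(ghz_days_window, workers, est_cost_ghz_days,
                  expired=0, bad_results=0):
    """True if per-worker throughput in the window covers the job."""
    if workers <= 0 or expired > 0 or bad_results > 0:
        return False
    return (ghz_days_window / workers) >= est_cost_ghz_days

# A 4-worker box that cleared 800 GHz-days in 90 days: 200 per worker,
# enough for a ~170 GHz-day first-time LL.
print(cat1_eligible(800, 4, 170))             # True
# Same box with one expired assignment in the window:
print(cat1_eligible(800, 4, 170, expired=1))  # False
```

The same function covers the DC case by passing the 60-day window totals and ~47 GHz-days as the cost.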
[QUOTE=Madpoo;427673]My current line of thinking is to look at their (GHz-days / # of workers) over the past 90 days.[/QUOTE]
This all sounds good. But at that point, we should probably get rid of discrete coarse-grained "Cats" altogether and just give them an exponent that is far enough advanced that they'll complete it before it becomes a milestone-blocker. (This is easier said than done; I haven't fully thought out how to forecast when an exponent becomes a milestone blocker.) |
This is going in the right direction!
The amount of work done in the past and the absence of expired assignments and bad results are indeed good metrics for what we want to achieve. I would prefer that those be measured over a longer period than 90 days: it would also remove the necessity of returning 2 results. Let us say the measure should be over at least 90 days for DC and 120 for LL. Jacob |
[QUOTE=Madpoo;427673]How is that sounding? Are we getting closer to something that people could be happy with? I figure this is probably the best idea so far because we're actually looking at past *real* performance and using that to estimate the likelihood they'd finish a cat 1 or 2 in the time allowed.[/QUOTE]
I think this is sounding really good. Real performance metrics as observed by the server are always going to be better than what the client reports. If you really wanted to go all out you could do what "axn" suggested, and try to calculate the trend of trailing-edge completion, and assign candidates to machines such that they /should/ complete just exactly in time. But... I would suggest that would be much more work and much more difficult to get right. Sticking with the current categories and associated expiry rules for each would probably be best (at least, for now). Also, perhaps set things up such that machines given Cat 1 are expected to complete within, say, 45 days rather than the 90 days allowed. Same thing with Cat 2: 75 days expected rather than the 150 days allowed. This way you would have fewer candidates chugging along at the trailing edges of the waves. |
[QUOTE=chalsall;427687]I think this is sounding really good. Real performance metrics as observed by the server are always going to be better than what the client reports.[/QUOTE]
In the back of my mind, I have an inkling of an idea that I could pass along a machine ID to a SQL function that churns and cogitates and spits out a minimum exponent size that this particular machine could reasonably accomplish in XX days time. Then it's just a matter of picking the smallest available assignment above that base value. Either that assignment part or the churn-and-cogitate could or should include a fudge factor since we're not dealing with an exact science, but you get the idea. In a sense that would micro-categorize and do away with the broad category 1-4 anyway. Well, maybe what we call cat 1 and 2 should still be reserved for the fastest and most reliable systems... those without expirations or bad results, but for the rest of the machines they just get what we think they can finish, and for new machines without a track record, they'd start out at what is now cat 4 but then after turning in some work they'd automatically start getting the smaller assignments. Hmm... well, food for thought there. The nice thing is, something like this *could* be inserted into the existing assignment code, where it would use this new metric as an additional input in the decision tree, perhaps just replacing that opt-in "get preferred assignments" flag for now. Baby steps. |
[QUOTE=Madpoo;427695]In the back of my mind, I have an inkling of an idea that I could pass along a machine ID to a SQL function that churns and cogitates and spits out a [color=red]minimum[/color] exponent size that this particular machine could reasonably accomplish in XX days time.
Then it's just a matter of picking [strike]the smallest[/strike] [u]an[/u] available assignment [color=red]above[/color] that base value. Either that assignment part or the churn-and-cogitate could or should include a fudge factor since we're not dealing with an exact science, but you get the idea.[/QUOTE]Erm, minimum = maximum and above = below, right? Or do I misunderstand something? |
[QUOTE=retina;427707]Erm, minimum = maximum and above = below, right? Or do I misunderstand something?[/QUOTE]
Well, in the back of my brain, I had in mind figuring out the smallest exponent a machine could do in 90 days and that's the "floor", then just see what the smallest *available* one above that is. I did a little thought experiment and saw that, for example, 63349229 would take ~ 149 GHz days, so a machine that did 150 GHz-days in the past 90 days would have been able to do that one. What I neglected to consider was that the next *available* exponent above that is in the 67M range and would take ~ 160 GHz-days, in which case this machine would no longer be the best match. I suppose what I really should have been thinking about was the smallest *available* exponent it could complete in 90 days to start with. But now that I've thought about this more, if I went by that, we'd have a bunch of machines getting exponents that could potentially take them the full 90 days with little margin for error. Thus my "fudge factor" to mix in there... so if a system cleared 200 GHz-days in the past 90 days, let the fudge factor adjust that up/down by 5-10% or something, just based on how things eventually work out. Okay, so I haven't really thought out the implementation *that* much... :smile: |
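The "floor plus fudge factor" idea above can be sketched as follows. This is a toy model, not the real assignment code: the function name is invented, and the exponent/cost pairs are illustrative (149 GHz-days for 63349229 is the figure from the post; the others are made up).

```python
# Hypothetical sketch of the "churn and cogitate" picker: scale the
# machine's trailing 90-day throughput by a safety (fudge) factor,
# then hand out the largest available exponent whose estimated cost
# still fits inside that budget.

def pick_assignment(ghz_days_90d, available, fudge=0.90):
    """available: list of (exponent, est_ghz_days), sorted ascending.
    Returns the largest exponent the machine should finish in 90 days,
    or None if even the smallest candidate is too big."""
    budget = ghz_days_90d * fudge
    best = None
    for exponent, cost in available:
        if cost <= budget:
            best = exponent   # keep climbing while it still fits
        else:
            break             # list is sorted, nothing larger will fit
    return best

candidates = [(63349229, 149), (67000001, 160), (78000001, 230)]
print(pick_assignment(200, candidates))  # 67000001 (160 <= 200*0.9)
print(pick_assignment(150, candidates))  # None (149 > 150*0.9)
```

The fudge factor is the margin discussed above: a machine that cleared exactly 149 GHz-days would otherwise get an exponent it can only finish with zero slack.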
[QUOTE=Madpoo;427736]Well, in the back of my brain, I had in mind figuring out the smallest exponent a machine could do in 90 days and that's the "floor", then just see what the smallest *available* one above that is.[/QUOTE]Well the smallest exponent a machine could do would be 2. I still get the feeling you meant to say something like: the [i]largest[/i] exponent a machine could do within XX days and pick one [i]below[/i] that.
|
[QUOTE=Madpoo;427736]
I did a little thought experiment and saw that, for example, 63349229 would take ~ 149 GHz days, so a machine that did 150 GHz-days in the past 90 days would have been able to do that one. [/QUOTE] I think you are overthinking / fine-tuning this too much. I'd suggest something simple, such as either 1) Any machine that has contributed more than X GHz-days in the last N days is upgraded to cat 2 assignments, or 2) Nightly, sort CPUs by GHz-days produced in the last N days and the top Y CPUs are automatically upgraded to category 2. The two are similar, but the advantage of the second system is that it auto-adjusts over time. The rather minor downside to auto-cat-2 assignment upgrades is that a user will have only 150 days to complete an assignment where he may have expected 270 days. |
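The two suggested rules can be sketched side by side. This is an illustration only; the function names and the fleet data are hypothetical.

```python
# Hypothetical sketch of the two promotion rules: a fixed GHz-days
# threshold vs. a nightly "top Y producers" cut. Machine records are
# (machine_id, ghz_days_last_n_days) pairs.

def promote_by_threshold(machines, x_ghz_days):
    """Rule 1: every machine over a fixed X GHz-days gets Cat 2."""
    return {m for m, gd in machines if gd > x_ghz_days}

def promote_top_y(machines, y):
    """Rule 2: nightly, the top Y producers get Cat 2. Self-adjusting:
    the effective cutoff moves with overall project throughput."""
    ranked = sorted(machines, key=lambda mg: mg[1], reverse=True)
    return {m for m, _ in ranked[:y]}

fleet = [("a", 500), ("b", 120), ("c", 310), ("d", 45)]
print(sorted(promote_by_threshold(fleet, 150)))  # ['a', 'c']
print(sorted(promote_top_y(fleet, 3)))           # ['a', 'b', 'c']
```

With rule 1 the promoted set shrinks or grows as machines cross the fixed bar; with rule 2 it stays a constant size and the implied bar floats, which is the auto-adjustment mentioned above.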
[QUOTE=Prime95;427741]The rather minor downside to auto-cat-2 assignment upgrades is that a user will have only 150 days to complete an assignment where he may have expected 270 days.[/QUOTE]
What if those who are "auto-upgraded" get the 270 day window "promised" by Primenet's current assignment rules for those who haven't clicked the obscure "opt-in" button? If Aaron gets the heuristics correct, almost all candidates which are assigned to "Awesome" machines which were auto-upgraded would complete well before the 270 day deadline. Let's be honest here: ~30 Cat 1 completions a day suggests strongly that something isn't optimal with the current opt-in system.... |
[QUOTE=Prime95;427741]The rather minor downside to auto-cat-2 assignment upgrades is that a user will have only 150 days to complete an assignment where he may have expected 270 days.[/QUOTE]
We just have to set the ratio of X GHz-days in N days higher than the current Cat 2 exponents' GHz-days / 150 days. |
How many primes had been Cat 1 assignments?
If none, then I do not want any. LOL. Yes I know, past performance does not guarantee future results. |
[QUOTE=TObject;427787]How many primes had been Cat 1 assignments?[/QUOTE]
Well, there has only been one Mersenne prime (M74207281) discovered since the category system was created, and it was Cat 4. So I would say the answer is none. At the time the category system was created, I believe M57885161 was in the Cat 2 range, but it was discovered prime about a year earlier, so it might have been Cat 3 or even Cat 4 if we were to try to extrapolate what its category would have been had the category system been put in place earlier. I'll leave it to someone else to try and figure out if any of the other ones might have been in the top 3000/4000/5000 (whichever you want to choose as the Cat 1 limit) at the time those were assigned. |
cuBerBruce, awesome, thank you for that analysis. I want category 4 assignments only, then. LOL
I think that is what I get when I reserve anonymously. |
[QUOTE=cuBerBruce;427824]Well, there has only been one Mersenne prime (M74207281) discovered since the category system was created, and it was Cat 4. So I would say the answer is none.
At the time the category system was created, I believe M57885161 was in the Cat 2 range, but it was discovered prime about a year earlier, so it might have been Cat 3 or even Cat 4 if we were to try extrapolate what its category would have been if the category system was put in place earlier. I'll leave it to someone else to try and figure out if any of the other ones might have been in the top 3000/4000/5000 (whichever you want to choose as the Cat 1 limit) at the time those were assigned.[/QUOTE] One way of extrapolating this is to examine where the first-LL minimum would have been in relation to the prime discoveries. When M57885161 was discovered, the first-LL minimum was between 44 and 45 million. M57885161 would have been a Cat 4 assignment at that point. When M42643801 was discovered, the first-LL minimum was between 26 and 27 million. Cat 4 again. The "twins" of August and September 2008 - M37156667 and M43112609 - were discovered when the first-LL minimum was between 21 and 22 million. Cat 4 again. Every other prime from there back to M20996011 in November 2003 also looks as though it would have been far enough above the first-LL minimum to have been a Cat 4. M13466917 came when the first-LL minimum was between 8 and 9 million. Back in time this far, it is difficult to guess the actual number of first-time tests that would have been needed vs. factors found. Based on what we know today, there are ~108,000 unfactored candidates between ~8.5 million and 13,466,917. This puts M13466917 near the Cat 3/Cat 4 borderline. M6972593 was discovered when the first-LL minimum was between 3 and 4 million. This would have probably been a Cat 3 assignment. M3021377 was discovered when the first-LL minimum was between 1 and 2 million. Cat 3. 
M2976221 was also discovered when the first-LL minimum was between 1 and 2 million but we can probably conclude (its discovery being five months earlier than that of M3021377) that M2976221 came when the first-LL minimum was closer to 1 million than in the case of M3021377. I still doubt that this would have been within 10,000 exponents of the first-LL minimum, however, so I would also brand M2976221 a Cat 3. M1398269 was discovered when the first-LL minimum was still in six figures (indeed, everything below M756839 was not LLed at least once until January 15, 1997, two months after the discovery of M1398269). If we assume a roughly linear progression from M2 to M756839 during the first year of GIMPS, we peg the first-LL minimum right around 631,700. Today we have ~14,000 unfactored candidates between M631700 and M1398269. There would have been even more (but still <100,000) such candidates back in late 1996. Therefore, M1398269 would have been Cat 3. The moral of the story? Mersenne prime discoverers probably aren't milestone watchers, nor do they shy away from the higher exponents. |
[QUOTE=Prime95;427741]I think you are overthinking / fine-tuning this too much.
I'd suggest something simple such as either 1) Any machine that has contributed more than X GHz-days in the last N days is upgraded to cat 2 assignments. or 2) Nightly sort cpus by GHz-days produced in the last N days and the top Y CPUs are automatically upgraded to category 2. The two are similar, but the advantage of the second system is it auto-adjusts over time. The rather minor downside to auto-cat-2 assignment upgrades is that a user will have only 150 days to complete an assignment where he may have expected 270 days.[/QUOTE] What about using the existing "Top LL Producers" report to group users into percentiles based on work done (in GHz-days) in the last year, setting a minimum of say 1,000 GHz-days/year of LL testing for consideration? Anyone that is a new user or that has contributed less than 1,000 GHz-days of LL testing over the last year is assigned Cat 4 exponents (with an appropriate expiration date), while the Cat 1-3 exponents are assigned by percentile: the lowest [I]x[/I] percent of the Cat 1-3 exponents go to the top [I]x[/I] percent of contributors (based on the work done in the last year). The mechanism could work as follows: when a user requests an LL assignment, PrimeNet calculates the user's rank, finds an exponent at or near that user's percentile level, and assigns it. If the user's rank is too low, the user is new, or the user is anonymous, a Cat 4 exponent is automatically assigned. Such a scheme might even eliminate the need for expiration of exponents other than the Cat 4s. Right now, the 1,000 GHz-days/year cutoff for LL testers is at Rank # 1,239 who has completed five LL tests in the past year. This means that the worst case scenario would be a Cat 3 exponent requiring ~73 days for completion. The other end of the spectrum is curtisc who would likely turn in the plum Cat 1s within a day or two. One consideration that would have to be made is how the LL rankings might fluctuate over time. 
How often might it occur that today's Rank # 400 is next month's Rank # 2,000? I imagine that this is perhaps not so much a problem in the top 100-200 users; below that point, things could get a little more muddled. The other consideration is that this could be seen as punishing users wishing to remain anonymous. One approach to this could be to issue generic serial IDs (such as the S-series accounts in PrimeNet v4) to any user not choosing a user name upon sign-up, or simply adding a stipulation that users remaining completely anonymous may only receive Cat 4 assignments. (Considering that all of GIMPS' prime discoveries have been made on what would have been Cat 3 or Cat 4 assignments, I am not so sure that this is so onerous of a requirement!) |
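The percentile mapping proposed above can be sketched as a small function. This is purely illustrative: the function name and the rounding are made up, and the 1,000 GHz-days/year floor and Rank # 1,239 figure come from the post.

```python
# Hypothetical sketch of the percentile-matching proposal: a user in
# the top x% of last-year LL producers draws from the lowest x% of
# outstanding Cat 1-3 exponents; anyone below the floor, new, or
# anonymous gets Cat 4.

def assign_bucket(user_ghz_days_year, rank, total_ranked,
                  min_ghz_days=1000):
    """Return the exponent-pool percentile (0-100) this user draws
    from, or None meaning 'assign a Cat 4 exponent'."""
    if rank is None or user_ghz_days_year < min_ghz_days:
        return None                            # new/anonymous/too slow
    return round(100 * rank / total_ranked, 2)  # rank 1 = best

# Rank 12 of 1,239 eligible producers draws from the lowest ~1%
# of the Cat 1-3 pool.
print(assign_bucket(25000, 12, 1239))   # 0.97
print(assign_bucket(400, 900, 1239))    # None (below the floor)
```

Note this is exactly the scheme axn objects to in the next post: it keys off the user's aggregate output, not any individual computer's speed.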
[QUOTE=NBtarheel_33;427913]What about using the existing "Top LL Producers" report to group users into percentiles based on work done (in GHz-days) in the last year, setting a minimum of say 1,000 GHz-days/year of LL testing for consideration?[/QUOTE]
1000 GHz-days/year can be achieved by 1 computer producing 1000 GHz-days/year, or by 1000 computers producing 1 GHz-day/year each. Clearly, the latter cannot be given any Cat 1 assignments. Point being, it is the individual computer's performance (time to complete an exponent) that matters. Aggregate performance of a user is not relevant. |
[QUOTE=axn;427914]1000 GH-day/year can be achieved by 1 computer producing 1000 GH-day/year, or 1000 computers producing 1GH-day/year. Clearly, the latter cannot be given any Cat 1 assignments.
Point being, it is the individual computer's performance (time to complete an exponent) that matters. Aggregate performance of a user is not relevant.[/QUOTE] How would a user with 1000 computers each producing 1 GHz-day/year possibly complete a single LL test within a year, let alone get onto the LL leaderboard? |
[QUOTE=NBtarheel_33;427915]How would a user with 1000 computers each producing 1 GHz-day/year possibly complete a single LL test within a year, let alone get onto the LL leaderboard?[/QUOTE]
Whoosh (as the point misses you). Fine. One 4-core computer completing 4 LL-tests in a year (1LL test/core/year). Do you want milestones to be held up for a year? |
[QUOTE=axn;427916]Whoosh (as the point misses you).
Fine. One 4-core computer completing 4 LL-tests in a year (1LL test/core/year). Do you want milestones to be held up for a year?[/QUOTE] Minimum 1,000 GHz-days/year. 4 LL tests in a year would be a minimum of 250 GHz-days average for each one, i.e. tests up around 80 million. That won't endanger any milestones anytime soon. Not to mention that even if such a user were on the first-LL leaderboard and eligible to get a Cat 3 assignment, it is quite possible now for a Cat 3 to take a year or more to be processed. The present minimum completion time is 270 days, not including assignment churn. |
[QUOTE=NBtarheel_33;427917]Minimum 1,000 GHz-days/year. 4 LL tests in a year would be a minimum of 250 GHz-days average for each one, i.e. tests up around 80 million. That won't endanger any milestones anytime soon.
Not to mention that even if such a user were on the first-LL leaderboard and eligible to get a Cat 3 assignment, it is quite possible now for a Cat 3 to take a year or more to be processed. The present minimum completion time is 270 days, not including assignment churn.[/QUOTE] Sigh... Imagine the same user with 20 machines, which is 20,000 GHz-day/year. Not a single expo will be crunched in less than a year, but the user will be in top 100, eligible for the Cat 1, by your proposal (whatever cutoff you propose, just imagine that many more computers to make them qualify for Cat 1). Now what? EDIT:- Or consider this. The user has a combination of fast and slow computers. User is eligible for cat 1, but the slow computer can't process them fast enough. How will your scheme work, without accounting for individual computer productivity? |
NBtarheel_33, great, thank you.
Yep, I'll hang in the last boarding zone. Women and children can have all the Cat 1 assignments. |
[QUOTE=axn;427921]Sigh...
Imagine the same user with 20 machines, which is 20,000 GHz-day/year. Not a single expo will be crunched in less than a year, but the user will be in top 100, eligible for the Cat 1, by your proposal (whatever cutoff you propose, just imagine that many more computers to make them qualify for Cat 1). Now what? EDIT:- Or consider this. The user has a combination of fast and slow computers. User is eligible for cat 1, but the slow computer can't process them fast enough. How will your scheme work, without accounting for individual computer productivity?[/QUOTE] What if we set a minimum for GHz-days production *and* number of completed LLs? |
[QUOTE=axn;427921]Or consider this. The user has a combination of fast and slow computers. User is eligible for cat 1, but the slow computer can't process them fast enough. How will your scheme work, without accounting for individual computer productivity?[/QUOTE]
For what it's worth, the current setup and anything else going forward already considers per-CPU performance, not just the user account as a whole. That's also one reason why manual assignments aren't eligible for cat 1 or 2, because there's no way to ascertain which machine will be doing the work and how reliable it is. |
[QUOTE=NBtarheel_33;427972]What if we set a minimum for GHz-days production *and* number of completed LLs?[/QUOTE]
Maybe, but consider a hypothetical where some bloke just finished a 100M digit exponent that took a year+ to run. After all that time, he's thinking "whew, maybe I'll do some small cat 1 stuff for a change". His GHz-days will be pretty high after turning in a big assignment like that, but it will only show as 1 result in the past XX days. Yet I'd argue this person has shown a certain commitment to follow through by sticking with a large test for so long. Anyway, just something to keep in mind... which is why I'm not totally married to the idea of "X results in the past Y days", but more about their actual throughput. Someone could churn through maybe 4 smaller double-checks in the past 90 days on a slower system, but if they wanted to do a first-time LL test they could maybe only muster a couple in that same time. Especially if they want to have 4 running at once... with small DC assignments that may have been fine, but with larger-FFT first-time checks, their system will suffer greatly and probably won't finish anything for over a year. It happens. :smile: |
1 Attachment(s)
This is based upon the rate of change of the P90 years on the classic status page:
We are now in new territory. Never before[SUP]*[/SUP] has the estimated date of completion been so soon. The current outlook is for all the first time LL's below 79.3 million to be done by October 9th of 2017. Using the same measuring scheme, in late 2005, the estimated completion date was the end of August 2039. It had been Nov 2050 in early 2003. (*There was a period in 1999-2000 when there was so much of the early factoring work being done that the numbers were skewed. About 200,000 exponents were being factored out each month, from October 1999 through March 2000. By Sept 2000 the estimate was 2023 and climbing.) Further, if some of the recent trends keep up, we may see the completion (except stragglers) by June 2017. There has been a huge uptick in completed DC's since the announcement of M49. If a reasonable number of these CPUs stick around and get promoted to first time LL's, that would speed things along. The dark blue line on the graph represents a rough moving average. The green lines are drawn in along recent peaks and bottoms. The other 3 lines are generated trend lines with projections. |
[QUOTE=Uncwilly;428251]... the classic status page:[/QUOTE]Only one exponent remaining until the 1792K FFT is finished. Assuming the residue matches then that range is finished as soon as vats09 checks it in. [size=1]It is already one day overdue.[/size]
|
[QUOTE=retina;428258]Only one exponent remaining until the 1792K FFT is finished.[/QUOTE]
Well, for previous FFT sizes: The 20.4M range went to zero by 2007-12-31 Then 25.35M was zero by 2013-01-07 Then 30.15M went to zero by 2014-02-16 |
We are presently 4,604 factored exponents away from reaching 3 million factored exponents below 79.3M.
|
[QUOTE=retina;428258]Only [b][u][i][color=red]n[/color][/i][/u][/b]one exponent[b][u][i][color=red]s[/color][/i][/u][/b] remaining until the 1792K FFT is finished.[/QUOTE]And now done.
Time to merge in another line: [url]http://www.mersenne.org/report_classic/[/url] |
[QUOTE=retina;428320]
Time to merge in another line: [url]http://www.mersenne.org/report_classic/[/url][/QUOTE] Done. |
[QUOTE=petrw1;423764]not ....299[/QUOTE]
By the way petrw1, you have an exponent that is likely to expire probably mere hours before you finish: [URL="http://www.mersenne.org/assignments/?exp_lo=35237537"]http://www.mersenne.org/assignments/?exp_lo=35237537[/URL] My reckoning shows you have 2 days to go, but it expires in 1 day. I'm only granular to the day, not the hour, and I don't know what time the recycling runs, but just FYI. EDIT: By the way, this is a perfect case in point of someone getting a cat 1 assignment and taking right up to 60 days to complete (and it may not even finish in time). I don't mean to single you out, I'm sure there was a reason for it in this case... :smile: EDIT #2: Oh, and also this: [URL="http://www.mersenne.org/assignments/?exp_lo=35247787"]http://www.mersenne.org/assignments/?exp_lo=35247787[/URL] -- I'm showing you with a real ETA of 4.9 days and it expires in 3 days. My estimates of *actual* progress may be skewed if these recently kicked into high gear in the last couple of days. And if it makes you feel any better, here's an example of a first-time check that will expire in 3 days but I estimate has 6.5 days left to run: [URL="http://www.mersenne.org/assignments/?exp_lo=64530721"]http://www.mersenne.org/assignments/?exp_lo=64530721[/URL] -- so we have someone who is going to take past 90 days even though they got it as a cat 1 assignment. Right now, between DC and LL, there are 63 cat 1 assignments where I'm predicting the assignment will expire before it's *actually* expected to finish (using my running analysis of check-in progress). Some are moving so slowly, like M35561203 at 0.15% per day, that it'll take 420 days to finish, but it expires in another 49. I don't know what to say... it only moved the bar 0.6% in 4 days... |
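The expire-before-finish check behind those numbers can be sketched quickly. This is not the actual analysis code; the function name is invented, and the first example uses the M35561203 figures from the post (0.15%/day with roughly 63% remaining gives ~420 days against 49 days left on the assignment).

```python
# Hypothetical sketch of the "will it expire first?" check: project
# days-to-finish from the observed progress rate and compare against
# the days remaining before the assignment expires.

def will_expire_first(pct_done, pct_per_day, days_to_expiry):
    """True if projected completion falls after the expiry date."""
    if pct_per_day <= 0:
        return True  # no observable progress: it will certainly expire
    days_to_finish = (100.0 - pct_done) / pct_per_day
    return days_to_finish > days_to_expiry

print(will_expire_first(37.0, 0.15, 49))  # True: ~420 days needed
print(will_expire_first(96.0, 2.0, 3))    # False: ~2 days needed
```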
[QUOTE=Madpoo;428530]By the way petrw1, you have an exponent that is likely to expire probably mere hours before you finish:
[URL="http://www.mersenne.org/assignments/?exp_lo=35237537"]http://www.mersenne.org/assignments/?exp_lo=35237537[/URL] My reckoning shows you have 2 days to go, but it expires in 1 day. I'm only granular to the day, not the hour, and I don't know what time the recycling runs, but just FYI. EDIT #2: Oh, and also this: [URL="http://www.mersenne.org/assignments/?exp_lo=35247787"]http://www.mersenne.org/assignments/?exp_lo=35247787[/URL] -- I'm showing you with a real ETA of 4.9 days and expires in 3 days. ..[/QUOTE] Guilty ... but I blame the Vegemite (or any other excuses I can think of related to us being in Australia for 5 weeks) ... and I also blame the company that this borged computer lives in for doing such frequent updates/patches that require a reboot ... this PC went offline only a few days after we left ... and I also blame (in a vague and obscure way for EDIT #2) the way estimated completion dates are computed when, in this case, 2 cores are doing TF (and doing it fast) and 2 cores are doing DC (and certainly NOT fast). It seems the rolling average calculation for that CPU is saying: - TF completed fast: increase it - DC took much longer: reduce it. Anyway, to make a long story short, the rolling average is WAY FAST for this PC. That DC will take almost 6 more days. I don't have easy access to this PC, so if they expire will the next checkin punt them? Or will they finish as an N/A DC? |
[QUOTE=petrw1;428535]Anyway, to make a long story short, the rolling average is WAY FAST for this PC. That DC will take almost 6 more days.
I don't have easy access to this PC so if they expire will the next checkin punt them? Or will they finish as a N/A DC?[/QUOTE] Well, last time something like this happened, I created an assignment for myself and just sat on it to allow the original user to finish it without the hassle of it being reassigned to someone, blah blah. However I recall someone took umbrage at that... don't know why, I didn't poach it, it was more like an administrative extension to the original user. If I don't do that, someone else will get the assignment when yours expire, but yours will finish just days later and the person who got the new assignment won't figure that out until they turn theirs in as an unnecessary triple-check. I'm inclined in this case to do the "administrative extension" since I know you really are working the assignments and it'll only miss by a matter of days... doing the extension means we avoid someone else wasting their time on them. Any input from the peanut gallery? |
[QUOTE=Madpoo;428536]Any input from the peanut gallery?[/QUOTE]
Do it. |
[QUOTE=chalsall;428540]Do it.[/QUOTE]
Well if you say so, then okay. LOL :smile: Okay, I created "shadow" assignments to myself for those... Now, to be clear, I'm NOT going to work on these, I'm just doing that so they don't get reassigned to anyone else and petrw can have these extra couple of days to finish them. I think last time I did this same thing, I had also done one for an exponent that was abandoned and I pre-reserved it for myself, and I think that's what someone took issue with, like I'm hogging all the good stuff...but this ain't that. :smile: |
[QUOTE=Madpoo;428543]Now, to be clear, I'm NOT going to work on these, I'm just doing that so they don't get reassigned to anyone else and petrw can have these extra couple of days to finish them.[/QUOTE]
At the end of the day, it doesn't really matter that much. But a recycled DC _does_ mean wasted cycles. Perhaps when petrw1 is next in front of that borged machine he will configure it such that it doesn't get Cat 1 assignments.... |
[QUOTE=chalsall;428544]At the end of the day, it doesn't really matter that much. But a recycled DC _does_ mean wasted cycles.
Perhaps when petrw1 is next in front of that borged machine he will configure it such that it doesn't get Cat 1 assignments....[/QUOTE] Or tell his wife we can no longer take long winter vacations. Anyone want to place odds :razz: |
[QUOTE=chalsall;428544]Perhaps when petrw1 is next in front of that borged machine he will configure it such that it doesn't get Cat 1 assignments....[/QUOTE]
The only computer-specific configuration that can be done is to set "days of work to queue" > 10. Otherwise it is an account-wide setting. Maybe he can set it to use two cores to crunch a single expo so that things complete with lot of safety margin. |
Another example of why we should do more checking on the types of machines that get cat 1 work...
[URL="http://www.mersenne.org/assignments/?exp_lo=63350927"]http://www.mersenne.org/assignments/?exp_lo=63350927[/URL] Poor M63350927 has a fun history... First assigned back in January 2014 under the grandfathered rules... it kept plugging along until the exponent eventually became cat 1 at which point it's lack of progress means even with the grace for work completed, it expired. Now it's cat 1 and was assigned to someone else in Nov 2015. Well, after some spotty progress, it stopped checking in just 3 weeks later, stopping at 60% done. So, once again, on Feb 14, 2016, it's assigned *again* as cat 1 to the current owner. Well, here we are after just shy of 4 weeks and it's at 18.8% done. My analysis of progress per day tells me it will finish in 93.7 more days... well heck, that alone is longer than the 90 days it was supposed to finish in. Anyway, it will expire in 65 days and then get assigned to a *3rd* person as category 1 (4th assignment overall). Here's a funny thing... that grandfathered assignment that got it over 2 years ago? It's still reporting in... I mean, it's only 30.7% done after 2 years and 2 months, but hey... it reported in today in fact, and may just finish up in another 4-5 years. Meanwhile, it would get reassigned to someone else who may just finish it before it expires, fingers crossed. By then, the current assignment will be a double-check and when it finally checks in about 30 days after it was expired, maybe it beats the new assignee, maybe not... And then 4+ years later that original assignment finishes as an unneeded triple-check... boy, wasn't that a waste of 6+ years for that system. And that's the sad story of poor M63350927. 
As many as 11 months from becoming cat 1 to when it might finally get checked in: It originally became cat 1 in Sep 2015, slogged through a few more months as the grandfather rule worked its way through, will expire in 2 more months, and could take up to 3 more months after that (if it's not expired again, starting over with 3 more months). |
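The progress-per-day arithmetic used throughout this thread can be sketched in a few lines. This is purely illustrative: the function name is made up, the inputs are rough figures from the post above, and it is not the actual mersenne.org server logic.

```python
def days_to_finish(pct_done, pct_at_last_check, days_between_checks):
    """Estimate remaining days from progress observed between two check-ins.

    Illustrative only -- not the actual mersenne.org server logic.
    """
    rate = (pct_done - pct_at_last_check) / days_between_checks  # % per day
    if rate <= 0:
        return float("inf")  # stalled assignment: no finish in sight
    return (100.0 - pct_done) / rate

# M63350927's third assignment: ~18.8% done after just shy of 4 weeks (~27 days).
# With these rough inputs the estimate lands well past the 90-day cat 1 window.
print(round(days_to_finish(18.8, 0.0, 27), 1))  # → 116.6
```

The exact number depends on the check-in timestamps (Madpoo's own analysis gave 93.7 days), but either way the projection overshoots the 90 days allowed.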
[QUOTE=Madpoo;428530]By the way petrw1, you have an exponent that is likely to expire probably mere hours before you finish:
[URL="http://www.mersenne.org/assignments/?exp_lo=35237537"]http://www.mersenne.org/assignments/?exp_lo=35237537[/URL] [/QUOTE] DONE!!!!!!!!!! [QUOTE] EDIT #2: Oh, and also this: [URL="http://www.mersenne.org/assignments/?exp_lo=35247787"]http://www.mersenne.org/assignments/?exp_lo=35247787[/URL] -- I'm showing you with a real ETA of 4.9 days and expires in 3 days.[/QUOTE] 3 or 4 days to done. 2 days to "officially" expire. |
[QUOTE=petrw1;428662]DONE!!!!!!!!!![/QUOTE]
Ah man, I got poached! :smile: [URL="http://www.mersenne.org/M35237537"]M35237537[/URL] |
[QUOTE=Madpoo;428743]Ah man, I got poached! :smile:
[URL="http://www.mersenne.org/M35237537"]M35237537[/URL][/QUOTE] Was it at 99%, or at "not started yet"? :razz: Not that it would make a big difference for you; with those rigs able to do a DC in a few hours, it's not like a half-year setback... :wink: |
[QUOTE=LaurV;428745]Was it at 99%, or at "not started yet"? :razz:
Not that it would make a big difference for you; with those rigs able to do a DC in a few hours, it's not like a half-year setback... :wink:[/QUOTE] He's just kidding. Look from post #2360 onwards. |
:davieddy:
(don't we have the :laurv: with face palm? hehe) |
The next 4 milestone countdowns are all primes!
Countdown to first time checking all exponents below 64M: 3 (Estimated completion : 2016-05-28)
Countdown to first time checking all exponents below 65M: 5 (Estimated completion : 2016-05-28)
Countdown to first time checking all exponents below 66M: 7 (Estimated completion : 2016-05-28)
Countdown to first time checking all exponents below 67M: 53 (Estimated completion : 2016-05-28) |
[QUOTE=petrw1;428662]DONE!!!!!!!!!!
3 or 4 days to done. 2 days to "officially" expire.[/QUOTE] So this doesn't happen again (though I appreciate the support) ... and since the owner of this PC will be off work shortly for an extended period I have switched it from DC to ECM....after the last/current assignment completes for each core. |
[QUOTE=Madpoo;428576]...
So, once again, on Feb 14, 2016, it's assigned *again* as cat 1 to the current owner. Well, here we are after just shy of 4 weeks and it's at 18.8% done. My analysis of progress per day tells me it will finish in 93.7 more days... well heck, that alone is longer than the 90 days it was supposed to finish in. Anyway, it will expire in 65 days and then get assigned to a *3rd* person as category 1 (4th assignment overall). [/QUOTE] An update on M63350927 ... it has 61 days 'til expiration, but it seems to have picked up the progress a bit and will now take an estimated 75.3 days. Still [B]after[/B] it expires, but if it keeps up the pace, it could actually finish before expiration. I guess we'll see. EDIT: This seems like a good opportunity to roll out the new/improved exponent detail page: [URL="http://www.mersenne.org/M63350927"]M63350927[/URL] Now you can see the previous assignment details...how far they got, whether or not they're still updating, etc. It was actually George's idea to include the % done info on that page so you didn't have to click over to the assignment page to get that. It's handy for sure. |
[QUOTE=Madpoo;429101]EDIT: This seems like a good opportunity to roll out the new/improved exponent detail page:
[/QUOTE] :tu: Very welcomed! |
I would argue that the "% Done" column title should merely read "Progress".
|
The first 4 milestones that await achievement form a palindrome.
Countdown to first time checking all exponents below 64M: 3
Countdown to first time checking all exponents below 65M: 4
Countdown to first time checking all exponents below 66M: 5
Countdown to first time checking all exponents below 67M: 43
[SIZE="3"][B]3 [COLOR="Red"]4 [COLOR="Green"]5[/COLOR] 4[/COLOR]3[/B][/SIZE] |
And done....
[QUOTE=Madpoo;428530]By the way petrw1, you have an exponent that is likely to expire probably mere hours before you finish:
EDIT #2: Oh, and also this: [URL="http://www.mersenne.org/assignments/?exp_lo=35247787"]http://www.mersenne.org/assignments/?exp_lo=35247787[/URL] -- I'm showing you with a real ETA of 4.9 days and expires in 3 days. [/QUOTE] Thanks....CPU has been repurposed to non-Milestone blocking work. |
[QUOTE=Uncwilly;429466]The first 4 milestones that await achievement form a palindrome.
Countdown to first time checking all exponents below 64M: 3 Countdown to first time checking all exponents below 65M: 4 Countdown to first time checking all exponents below 66M: 5 Countdown to first time checking all exponents below 67M: 43 [SIZE="3"][B]3 [COLOR="Red"]4 [COLOR="Green"]5[/COLOR] 4[/COLOR]3[/B][/SIZE][/QUOTE] Also prime, fwiw. |
Another Palindrome
All exponents below 35,190,131 have been tested and double-checked.
All exponents below 63,350,927 have been tested at least once.
Countdown to first time checking all exponents below 64M: [B]1[/B] (Estimated completion : 2016-05-21)
Countdown to first time checking all exponents below 65M: [B]1[/B] (Estimated completion : 2016-05-21)
Countdown to first time checking all exponents below 66M: [B]1[/B] (Estimated completion : 2016-05-21)
Countdown to first time checking all exponents below 67M: [B]11[/B] (Estimated completion : 2016-05-21)
[SIZE="4"] [COLOR="olive"]2[/COLOR] [COLOR="Blue"]3[/COLOR] [COLOR="Red"]4[/COLOR] [COLOR="blue"]3[/COLOR][COLOR="Olive"]2[/COLOR][/SIZE] |
[QUOTE=Uncwilly;428251]This is based upon the rate of change of the P90 years on the classic status page:
We are now in new territory. Never before[SUP]*[/SUP] has the estimated date of completion been so soon. The current outlook is for all the first time LL's below 79.3 million to be done by October 9th of 2017.[/QUOTE]We are now under a year and a half from a projected completion. Current projections center around 2017-09-20. |
I am curious to know why user "tchiwam" was allocated 150 days to complete the LL test when everyone else around there was allocated 90 days. :confused:
[url]http://www.mersenne.org/assignments/?exp_lo=63000000&exp_hi=67000000&execm=1&exp1=1&extf=1&exdchk=1[/url] The exponent: [url]http://www.mersenne.org/M66623723[/url] |
A bug in the SQL view that calculates categories -- introduced when cat 0 was added.
|
[QUOTE=Prime95;431009]A bug in the SQL view that calculates categories -- introduced when cat 0 was added.[/QUOTE]
Oh, and that reminds me, I need to update my function that calculates the days-to-expire so it shows up correctly on my test web pages. It's probably showing expirations by the old rules. |
[QUOTE=Madpoo;431011]Oh, and that reminds me, I need to update my function that calculates the days-to-expire so it shows up correctly on my test web pages. It's probably showing expirations by the old rules.[/QUOTE]
Are exponents assigned before the rule changes expired according to the old rules? |
[QUOTE=cuBerBruce;431029]Are exponents assigned before the rule changes expired according to the old rules?[/QUOTE]
Yes |
[QUOTE=rudy235;429862]All exponents below 35,190,131 have been tested and double-checked.
All exponents below 63,350,927 have been tested at least once.
Countdown to first time checking all exponents below 64M: 2 (Estimated completion : 2016-05-21)
Countdown to first time checking all exponents below 65M: 3 (Estimated completion : 2016-05-21)
Countdown to first time checking all exponents below 66M: 4 (Estimated completion : 2016-05-21)
Countdown to first time checking all exponents below 67M: 32 (Estimated completion : 2016-05-21)[/QUOTE]
All exponents below 35,518,181 have been tested and double-checked.
All exponents below 63,350,927 have been tested at least once.
Countdown to first time checking all exponents below 64M: 1 (Estimated completion : 2016-05-21)
Countdown to first time checking all exponents below 65M: 1 (Estimated completion : 2016-05-21)
Countdown to first time checking all exponents below 66M: 1 (Estimated completion : 2016-05-21)
Countdown to first time checking all exponents below 67M: 17 (Estimated completion : 2016-06-07)
Estimated time until this gets poached???? I am against such activity. I suspect it will occur in less than 2 weeks. |
I think it will be poached before the 20th.
|
It seems that exponent 63,350,927 has been stuck at 54.1% for a long time.
|
[QUOTE=rudy235;431369]Seems that exponent 63,350,927 is stuck at 54.1% for a long time.[/QUOTE]
I've been watching it to see if it will finish before expiring. Up until a week ago, I estimated it was going to be a real close shave and might actually finish a day or so before expiration. If that were the case, I would have done something to extend it and let it finish before expiring, just so it wouldn't be reassigned only to have this original test check in a day later. However, it was stuck, as you noticed, for about a week and then finally checked in again today: 8 days later and only 0.2% progress. I'm now calculating it to finish in 45 days, well past the 31 days it has left. So... that sucks. The user must have had some issue or their computer was shut off for a week. Unless it picks up the pace drastically or gets moved to a faster machine, it will expire, based on its typical progress of roughly 1% daily. In hindsight, this system should never have been assigned this exponent, but that's neither here nor there and has been addressed already. But now what? *Should* it be "poached", knowing it is unlikely to finish in time? On the "NO" side, we'd have to speculate that it could be reassigned to another CPU just like this one, taking near (or over) the max amount of days to finish and maybe expire. Of course if that happened, this current assignment would probably limp past the finish line 2 weeks after it was reassigned. On the "YES" side, the changes in priority assignment rules that George implemented stand a good chance of making sure the new assignment is done swiftly, preferably *before* the current assignment checks in. Normally if I'm even considering poaching something at all, I'm going to look at whether the assignment was abandoned... no check-in/progress in several weeks AND it's close to expiring anyway. This is just one of those cases where the user actually is doing stuff, but way slower than anticipated, so I have mixed feelings. :smile: What does the peanut gallery think? Let it be and let nature take its course? 
Or wait for someone else to "do the dirty" and poach it so that we can publicly curse them but privately thank them? LOL |
[QUOTE=Madpoo;431472]What does the peanut gallery think? Let it be and let nature take its course? Or wait for someone else to "do the dirty" and poach it so that we can publicly curse them but privately thank them? LOL[/QUOTE]First-time checks are no problem even with another user also working on them. But two rules should be followed: 1) No more than two people at once, so coordinate. 2) Don't expire the exponent and have the system tell the user the work is now not needed. Still accept the work when it finally finishes.
And of course DC is a different matter. Don't waste cycles doing a second DC when someone is already actively working on it. |
[QUOTE=Madpoo;431472]What does the peanut gallery think? Let it be and let nature take its course? Or wait for someone else to "do the dirty" and poach it so that we can publicly curse them but privately thank them? LOL[/QUOTE]
Why don't YOU poach it yourself immediately? My thinking is that you have the firepower to finish this in a couple of days or so, and it is currently holding up _three_ milestones (not that I personally care). This has the advantage that no one else will be tempted (thus wasting resources), and you have the admin rights to grant Summy the "First LL Credit" in the exceptionally unlikely case where he actually finishes in time. I'm really glad George implemented the most recently revised rules. Should eliminate almost all cases like this going forward. (As an aside, it is amusing to note that Summy has seven other low assignments, none of which are projected to complete before expiring. He never should have been given Cat 1s in the first place.) |
[QUOTE=chalsall;431477]Why don't YOU poach it yourself immediately?[/QUOTE]
Oh, I'm tempted for sure, but I wanted to see what opinions others had. Plus, Summy still has 31 days before it expires, and who knows, maybe he/she will pull a rabbit out of the hat and hit the hidden "turbo" button. |
[QUOTE=Madpoo;431490]Plus, Summy still has 31 days before it expires, and who knows, maybe he/she will pull a rabbit out of the hat and hit the hidden "turbo" button.[/QUOTE]
Like what? Perhaps not running eight (8) LL tests on a four-core (4) CPU? Just poach it, before someone else does. Then it becomes a legitimate DC, and no throughput is lost. |
Trust the system
I would say [B]NO[/B] to poaching.
George created the Cat 0 for a reason. Let it actually work instead of thinking you know better. How can you tell if the new Cat 0 is going to work if you circumvent it? |
Are we really that impatient?
The assignment will be completed sooner or later. Is the difference of a few days that critical? |
[QUOTE=gjmccrac;431506]How can you tell if the new Cat 0 is going to work if you circumvent it.[/QUOTE]
Who hath drawn the circuits for the lion? |
[QUOTE=ixfd64;431507]Are we really that impatient?[/QUOTE]
That may have been rhetorical, but the answer is "YES"! :smile: Well, some of us anyway. I've had to sit on my hands to keep from running this one. Truth be told though, there's another one that I *did* start a test on earlier today: [URL="http://www.mersenne.org/assignments/?exp_lo=66957773"]M66957773[/URL] It expires in a day, the assignee hasn't updated progress for 10 days anyway, and even then it wouldn't have been done for another 23 days (using my own analysis of the actual progress). In a case like that I don't feel bad about checking in a result just before it expires and gets reassigned, so it isn't assigned to someone else and takes a while. Plus it's not "the last one in a range" so it's probably not as big a deal anyway. Now I just hope my test actually finishes in time... It was kind of a spur of the moment thing and it'll take me another 30 hours... if it expires and reassigned before then, well, I guess I'll just save it and check it in later so the new assignee isn't getting poached. I should have started it a little sooner, I'm thinking, but I think I might have just shy of 2 days to get it done anyway. |
[QUOTE=Madpoo;431524]Now I just hope my test actually finishes in time... It was kind of a spur of the moment thing and it'll take me another 30 hours... if it expires and reassigned before then, well, I guess I'll just save it and check it in later so the new assignee isn't getting poached. I should have started it a little sooner, I'm thinking, but I think I might have just shy of 2 days to get it done anyway.[/QUOTE]
For the love of Mersenne, please create an assignment for yourself before it gets reassigned as cat 0. In the future, I would suggest that you let these become cat 0, otherwise what's the point in having it. |
[QUOTE=Madpoo;431524]Now I just hope my test actually finishes in time... It was kind of a spur of the moment thing and it'll take me another 30 hours... if it expires and reassigned before then, well, I guess I'll just save it and check it in later so the new assignee isn't getting poached. I should have started it a little sooner, I'm thinking, but I think I might have just shy of 2 days to get it done anyway.[/QUOTE]
It will expire at the latest tonight at midnight UTC which is 22 hours after your post saying you needed 30 hours. |
[QUOTE=chalsall;431477]
My thinking is that you have the firepower to finish this in a couple of days or so, and it is currently holding up _three_ milestones (not that I personally care). [/QUOTE] I actually think there is a fourth milestone secretly lurking in the midst. [B] All Mersenne numbers up to 20 million digits tested at least once. All exponents ≤ 66,438,571 tested[/B] |
[QUOTE=ATH;431534]It will expire at the latest tonight at midnight UTC which is 22 hours after your post saying you needed 30 hours.[/QUOTE]
It had longer before expiring, and my result just got turned in. I think (I haven't checked) the expiration I have showing up on the assignment page will count down to zero before it expires. Well, that is true (that's what it was showing when I just turned mine in), but I think the reason is because the expiration task runs just before midnight UTC each day. I have a feeling if it ran after midnight, the date calculation would have tossed it into the expired category. Well, I won't argue with success in this case. :smile: And yeah, I guess I coulda shoulda let it expire and be reassigned to see how the new rules handled it. I didn't think about that until I'd already started my test and got 20% or so in, so by then I was committed. |
[QUOTE=rudy235;431582]I actually think there is a fourth milestone secretly lurking in the midst.
[B] All Mersenne numbers up to 20 million digits tested at least once. All exponents ≤ 66,438,571 tested[/B][/QUOTE] I guess that would indeed be a milestone of sorts. I'm not sure if that's something worth memorializing on the milestone page... what do others think? A year ago, we noted when all 10M digit Mersenne numbers had been double-checked (looks like we didn't note when they'd all been tested at least once). Is that something worth doing for every multiple of 10M digits as another way to tick off some progress? Thoughts? |
[QUOTE=Madpoo;431601]Is that something worth doing for every multiple of 10M digit things as another way to tick off some progress? Thoughts?[/QUOTE]
These are rare enough that it is probably worth it to have it. |
[QUOTE=Madpoo;431601]A year ago, we noted when all 10M digit Mersenne numbers had been double-checked (looks like we didn't note when they'd all been tested at least once).[/QUOTE]
It was on Dec 25th 2010:
2010-12-25 All exponents below M(37156667) tested at least once.
2010-12-25 All exponents below M(32582657) tested at least once. |
If we consider the first Titanic Prime, 2[SUP]4253[/SUP]-1, as a good date to start measuring, we find that getting to the first 10 million digit milestone took from November 3rd 1961 to December 25th 2010, 49+ years. Going from 10 million to 20 million will probably take about 5 years and 5 to 6 months.
|
[QUOTE=ATH;431626]It was on Dec 25th 2010:
2010-12-25 All exponents below M(37156667) tested at least once. 2010-12-25 All exponents below M(32582657) tested at least once.[/QUOTE] I see what you're saying... the first 10M digit exponent (33219281) is between those two Mersenne primes, and since both of those had their "tested at least once" on the same day, that would necessarily include <33219281 as well. Was it really the case where the milestones for 32M, 33M, 34M, 35M and 36M were all held up by some single test that *finally* finished? How horrid. :smile: The separate milestone dates for those 5 things are indeed the same date... geez. |
Yeah, it is not so strange really. The 10M digit prize and the "10M digit LL" option in Prime95 had cleared 33219281+ for many years while the sub-10M digit area was not finished, so this is just the point when the sub-10M digit work finished and caught up to the 10M+ digit wavefront.
I'm not sure exactly when the 10M digit prize and option started, but it was already going when I joined in 2003, and back then the real wavefront was at 18M-20M (I got an 18.8M test in Oct 2003). I have the milestone logs from 2010 and I'm pretty sure [URL="http://mersenne.org/M31494937"]http://mersenne.org/M31494937[/URL] was the culprit:
2010-12-25 5am UTC: All exponents below 31,494,937 have been tested at least once.
2010-12-25 7am UTC: All exponents below 37,591,483 have been tested at least once.
I'm missing the one from 6am UTC unfortunately, but it is very unlikely that another exponent above 31494937 finished in those 2 hours. |
Another Palindromic.
All exponents below 35,583,869 have been tested and double-checked.
All exponents below 63,350,927 have been tested at least once.
Countdown to first time checking all exponents below 64M: [B]1[/B] (Estimated completion : 2016-06-04)
Countdown to first time checking all exponents below 65M: [B]1[/B] (Estimated completion : 2016-06-04)
Countdown to first time checking all exponents below 66M: [COLOR="Red"][B]1[/B][/COLOR] (Estimated completion : 2016-06-04)
Countdown to first time checking all exponents below 67M: [B]11[/B] (Estimated completion : 2016-06-07) |
[QUOTE=rudy235;431734]Countdown to first time checking all exponents below 67M: [B]11[/B] (Estimated completion : 2016-06-07)[/QUOTE]
Someone with an assignment 96.3% done when it was expired finally [url=http://www.mersenne.org/report_exponent/?exp_lo=66706501&exp_hi=&full=1]completed it[/url] to bring this countdown value to 11. BTW, an anonymous user has grabbed a [url=http://www.mersenne.org/report_exponent/?exp_lo=67386427&exp_hi=&full=1]cat 0 exponent[/url] that has been available for over two days. |
[QUOTE=cuBerBruce;431741]
BTW, an anonymous user has grabbed a [url=http://www.mersenne.org/report_exponent/?exp_lo=67386427&exp_hi=&full=1]cat 0 exponent[/url] that has been available for over two days.[/QUOTE] Is that bad? |
[QUOTE=rudy235;431744]Is that bad?[/QUOTE]
Well, given that it's generally desired to have cat 0 exponents be started and completed quickly, it's a bit unfortunate to have it sit for over two and a half days before it even gets assigned. But then, this is not a particularly low cat 0, so I don't think it's much to be concerned about (at least as far as this particular exponent is concerned). The fact that it was an anonymous user might also raise a little concern about whether it will get done promptly, but supposedly that user has shown good throughput in order to get the assignment in the first place. |
[QUOTE=cuBerBruce;431741]
BTW, an anonymous user has grabbed a [url=http://www.mersenne.org/report_exponent/?exp_lo=67386427&exp_hi=&full=1]cat 0 exponent[/url] that has been available for over two days.[/QUOTE] Anonymous users are allowed access to cat 0 assignments as long as their computer is producing results at the proscribed rate. In this case, the computer is producing over 22 GHz-days per day per worker. Looking into this I did find and fix a bug. Cat 0 M67122481 was assigned in error. I hope this doesn't cause a milestone blockage. |
[QUOTE=cuBerBruce;431741]Someone with an assignment 96.3% done when it was expired finally [url=http://www.mersenne.org/report_exponent/?exp_lo=66706501&exp_hi=&full=1]completed it[/url] to bring this countdown value to 11.
BTW, an anonymous user has grabbed a [url=http://www.mersenne.org/report_exponent/?exp_lo=67386427&exp_hi=&full=1]cat 0 exponent[/url] that has been available for over two days.[/QUOTE] It's not really an anonymous user, just someone who didn't list a "public name" in their account profile. I've worked up a test page that actually makes that distinction... you can see here what it does in those cases: [URL="http://www.mersenne.org/report_exponent/default.mock.php?exp_lo=67386427&exp_hi=&full=1"]M67386427 Test Report[/URL] I can't remember if I had other changes on that page that I was working on, but that's the general idea. |
I think "Anonymous" would be more accurate. As for users without accounts, they should be marked as "Unregistered."
|
I think the next candidate coming up for LL expiration will be this:
[URL="http://www.mersenne.org/assignments/?exp_lo=67084891"]http://www.mersenne.org/assignments/?exp_lo=67084891[/URL] If my guess is right, it'll expire around 11:30 PM (UTC) on Wed the 20th (around when the nightly task runs). Since it's in the 67M range where there are still lots of unassigned exponents, there's little danger of it being poached, but on the other hand it's #115 or so in the pecking order, so it should still qualify as cat 0. I'm curious to see what kind of user/machine gets the assignment under the new rules, so it should be a good real world example. The next ones after that are likely to be these, with about a week (+/- a day) 'til expiration:
67124933
67125973
67130443
67125973 in particular will be interesting... the current assignee is still checking in, but I'm showing they'll finish in 25 days, even though it'll expire in 6. Oh the drama... will the new assignee finish it before the expired one? Stay tuned! LOL The other 2 have checked in somewhat recently, but not in the past few days, so I don't know if they wandered off or what, but they could still be working on them. If 67130443 keeps going at the rate they were, they'd finish in 21 days but expire in 8. 67124933 would finish in ~ 24 days but expire in 6. |
[QUOTE=Prime95;431755]Looking into this I did find and fix a bug. Cat 0 M67122481 was assigned in error. I hope this doesn't cause a milestone blockage.[/QUOTE]
Hmm... in the 4 days between assignment and first check-in, it moved the bar a whopping 0.9%. Let's hope 3.5 of those 4 days were spent finishing up another exponent in the queue and we just caught the beginning of it... otherwise, it'll run out the clock and expire before finishing. |
[QUOTE=Madpoo;431965]Hmm... in the 4 days between assignment and first check-in, it moved the bar a whopping 0.9%. Let's hope 3.5 of those 4 days were spent finishing up another exponent in the queue and we just caught the beginning of it... otherwise, it'll run out the clock and expire before finishing.[/QUOTE]Would it/they have not checked-in said other assignment and that would show in the results/logs? :confused:
|
[QUOTE=Madpoo;431964]
Since it's in the 67M range where there are still lots of unassigned exponents, there's little danger of it being poached, but on the other hand it's #115 or so in the pecking order, so it should still qualify as cat 0. I'm curious to see what kind of user/machine gets the assignment under the new rules, so it should be a good real world example.[/QUOTE] You can monitor the 10 cat 0 assignments: [CODE]
expo      assigned date     % complete  GHz-days/day/worker
67086007  2016-04-19 13:43     0        13.5330595788194
67443881  2016-04-17 04:02    34.7      29.4085244821991
67386427  2016-04-16 18:21    30.5      22.4451511247799
67294391  2016-04-15 16:01    63.2      28.4298699832563
67261793  2016-04-15 03:00    78.9      21.1463127387037
67260283  2016-04-14 22:54    25.5      14.19514146147
67231781  2016-04-14 22:01    80.1      28.4298699832563
67150679  2016-04-14 19:44    73.4      19.9312860142
67122607  2016-04-14 12:02    62.9      26.7808856934549
67122481  2016-04-14 04:58     0.9      0.448497477546292
[/CODE] |
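One way to read that table is to combine each row's throughput with the rough ~170 GHz-days cost of a 67M LL test (a figure quoted earlier in this thread) to project the days remaining. A crude sketch, using two rows picked from the table; the cost figure and the projection formula are assumptions, not how the server computes ETAs:

```python
# Rough projection: remaining days ≈ total cost × fraction left / daily rate.
# The 170 GHz-days cost is an approximation taken from earlier in this thread.
TOTAL_COST = 170.0  # GHz-days for one ~67M LL test (approximate)

rows = [  # (exponent, % complete, GHz-days/day/worker), from the table above
    (67443881, 34.7, 29.41),
    (67122481, 0.9, 0.45),
]
for expo, pct, rate in rows:
    remaining = TOTAL_COST * (1 - pct / 100) / rate
    print(f"M{expo}: ~{remaining:.0f} days left")
```

By this crude measure, M67443881 has only a few days left, while the mistakenly assigned M67122481 would need roughly a year — consistent with the worry that it will run out the clock.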
25,000 contributors
If I am interpreting this report correctly, we have now had over 25,000 different contributors to GIMPS
[url]http://www.mersenne.org/account/?details=1[/url] Intersection of "Overall" and "of" from the Lifetime Stats report |
[QUOTE=retina;431974]Would it/they have not checked-in said other assignment and that would show in the results/logs? :confused:[/QUOTE]
Well sure, if I had thought of that at the time. :smile: The user checked in more progress, so now I have a delta to work from. With only two check-ins this isn't perfect, but it's 1.9582% progress per day, so I estimate 48.8 days to completion (it's at 3.5% done now). So... not as bad as it could have been. |
[QUOTE=petrw1;432003]If I am interpreting this report correctly we have now had over 25,000 different contributers to GIMPS
[url]http://www.mersenne.org/account/?details=1[/url] Intersection of "Overall" and "of" from the Lifetime Stats report[/QUOTE] It's more than that... hmmm. I don't have the query in front of me that pulls that data, but it seems like it would be a subset... something to look at later on I suppose. Beyond 25K users though, the rankings and work put in are all pretty much the same, just a low white noise. |
[QUOTE=Prime95;431990]
[CODE] expo assigned date % complete GHz-days/day/worker 67150679 2016-04-14 19:44 73.4 19.9312860142 [/CODE][/QUOTE] Turns out this was assigned to one of my dream machine Skylakes. Completed 6 days after assignment. |