[QUOTE=chalsall;427687]I think this is sounding really good. Real performance metrics as observed by the server are always going to be better than what the client reports.[/QUOTE]
In the back of my mind, I have an inkling of an idea that I could pass along a machine ID to a SQL function that churns and cogitates and spits out a minimum exponent size that this particular machine could reasonably accomplish in XX days time. Then it's just a matter of picking the smallest available assignment above that base value. Either that assignment part or the churn-and-cogitate could or should include a fudge factor since we're not dealing with an exact science, but you get the idea.

In a sense that would micro-categorize and do away with the broad category 1-4 anyway. Well, maybe what we call cat 1 and 2 should still be reserved for the fastest and most reliable systems... those without expirations or bad results, but for the rest of the machines they just get what we think they can finish, and for new machines without a track record, they'd start out at what is now cat 4 but then after turning in some work they'd automatically start getting the smaller assignments. Hmm... well, food for thought there.

The nice thing is, something like this *could* be inserted into the existing assignment code, where it would use this new metric as an additional input in the decision tree, perhaps just replacing that opt-in "get preferred assignments" flag for now. Baby steps.
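A minimal sketch of what that churn-and-cogitate routine might look like, in Python rather than SQL for readability. Everything here is hypothetical: the p^2 cost scaling is only a rough model, calibrated against the single data point quoted later in this thread (exponent 63,349,229 at roughly 149 GHz-days for a first-time LL test), and none of the names are real PrimeNet internals.

```python
# Hypothetical cost model: LL test effort assumed to scale roughly as p^2,
# calibrated to the one data point quoted in this thread.
CAL_EXPONENT = 63_349_229
CAL_COST = 149.0  # estimated GHz-days to LL-test CAL_EXPONENT

def ll_cost(exponent):
    """Rough GHz-days estimate for an LL test, assuming cost ~ p^2."""
    return CAL_COST * (exponent / CAL_EXPONENT) ** 2

def max_exponent(ghz_days_recent, fudge=0.90):
    """Largest exponent a machine could plausibly finish in the same
    window it produced ghz_days_recent in, derated by a fudge factor
    since this is not an exact science."""
    budget = ghz_days_recent * fudge
    return int(CAL_EXPONENT * (budget / CAL_COST) ** 0.5)

def pick_assignment(available, ghz_days_recent):
    """Smallest available exponent this machine can finish in time,
    or None if nothing available fits under its ceiling."""
    ceiling = max_exponent(ghz_days_recent)
    feasible = [p for p in sorted(available) if p <= ceiling]
    return feasible[0] if feasible else None
```

The key design point is that the machine's ceiling is derived from its own demonstrated throughput, so this replaces the coarse category buckets with a per-machine cutoff.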
[QUOTE=Madpoo;427695]In the back of my mind, I have an inkling of an idea that I could pass along a machine ID to a SQL function that churns and cogitates and spits out a [color=red]minimum[/color] exponent size that this particular machine could reasonably accomplish in XX days time.
Then it's just a matter of picking [strike]the smallest[/strike] [u]an[/u] available assignment [color=red]above[/color] that base value. Either that assignment part or the churn-and-cogitate could or should include a fudge factor since we're not dealing with an exact science, but you get the idea.[/QUOTE]Erm, minimum = maximum and above = below, right? Or do I misunderstand something?
[QUOTE=retina;427707]Erm, minimum = maximum and above = below, right? Or do I misunderstand something?[/QUOTE]
Well, in the back of my brain, I had in mind figuring out the smallest exponent a machine could do in 90 days and that's the "floor", then just see what the smallest *available* one above that is.

I did a little thought experiment and saw that, for example, 63349229 would take ~149 GHz-days, so a machine that did 150 GHz-days in the past 90 days would have been able to do that one. What I neglected to consider was that the next *available* exponent above that is in the 67M range and would take ~160 GHz-days, in which case this machine would no longer be the best match. I suppose what I really should have been thinking about was the smallest *available* exponent it could complete in 90 days to start with.

But now that I've thought about this more, if I went by that, we'd have a bunch of machines getting exponents that could potentially take them the full 90 days with little margin for error. Thus my "fudge factor" to mix in there... so if a system cleared 200 GHz-days in the past 90 days, let the fudge factor adjust that up/down by 5-10% or something, just based on how things eventually work out. Okay, so I haven't really thought out the implementation *that* much... :smile:
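The thought experiment above can be replayed as a toy feasibility check. The availability list, cost estimates, and fudge value below are just the numbers discussed in this post, not real server data:

```python
# The ~63.3M exponent (~149 GHz-days) is already taken in this scenario,
# so the only available candidate is in the 67M range (~160 GHz-days).
available = {67_100_000: 160.0}   # exponent -> estimated GHz-days to test
budget = 150.0                    # GHz-days this machine did in 90 days
fudge = 0.95                      # hypothetical safety margin

feasible = [p for p, cost in sorted(available.items())
            if cost <= budget * fudge]
# feasible comes back empty: the 67M exponent would likely overrun the
# 90-day window, so the server should hand this machine something smaller.
```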
[QUOTE=Madpoo;427736]Well, in the back of my brain, I had in mind figuring out the smallest exponent a machine could do in 90 days and that's the "floor", then just see what the smallest *available* one above that is.[/QUOTE]Well the smallest exponent a machine could do would be 2. I still get the feeling you meant to say something like: the [i]largest[/i] exponent a machine could do within XX days and pick one [i]below[/i] that.
|
[QUOTE=Madpoo;427736]
I did a little thought experiment and saw that, for example, 63349229 would take ~149 GHz-days, so a machine that did 150 GHz-days in the past 90 days would have been able to do that one.[/QUOTE]I think you are overthinking / fine-tuning this too much. I'd suggest something simple, such as either:

1) Any machine that has contributed more than X GHz-days in the last N days is upgraded to cat 2 assignments, or

2) Nightly, sort cpus by GHz-days produced in the last N days and automatically upgrade the top Y CPUs to category 2.

The two are similar, but the advantage of the second system is it auto-adjusts over time. The rather minor downside to auto-cat-2 assignment upgrades is that a user will have only 150 days to complete an assignment where he may have expected 270 days.
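Option 2 could be sketched as below; the data shapes and the cutoff Y are placeholders, since the real nightly job would read machine records from the server database:

```python
def nightly_upgrade(cpus, top_y=1000):
    """cpus: iterable of (cpu_id, ghz_days_last_n_days) pairs.
    Returns the set of cpu_ids auto-upgraded to cat 2 tonight."""
    ranked = sorted(cpus, key=lambda c: c[1], reverse=True)
    return {cpu_id for cpu_id, _ in ranked[:top_y]}
```

Because the cutoff is simply whatever the Y-th busiest machine produced, the effective threshold rises and falls with overall project throughput, which is the self-adjusting property mentioned above.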
[QUOTE=Prime95;427741]The rather minor downside to auto-cat-2 assignment upgrades is that a user will have only 150 days to complete an assignment where he may have expected 270 days.[/QUOTE]
What if those who are "auto-upgraded" get the 270 day window "promised" by Primenet's current assignment rules for those who haven't clicked the obscure "opt-in" button? If Aaron gets the heuristics correct, almost all candidates which are assigned to "Awesome" machines which were auto-upgraded would complete well before the 270 day deadline. Let's be honest here: ~30 Cat 1 completions a day suggests strongly that something isn't optimal with the current opt-in system....
[QUOTE=Prime95;427741]The rather minor downside to auto-cat-2 assignment upgrades is that a user will have only 150 days to complete an assignment where he may have expected 270 days.[/QUOTE]
We just have to set the ratio (X GHz-days in N days) higher than the current Cat 2 exponents' GHz-days / 150 days.
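Plugging in illustrative numbers: the ~160 GHz-day cost is taken from the 67M example earlier in the thread, and the 90-day window and 20% safety margin are assumptions, not proposed server settings.

```python
def min_threshold(cat2_cost, deadline_days=150, window_days=90, margin=1.2):
    """Minimum X GHz-days a machine must have produced over window_days
    to comfortably finish a Cat 2 assignment within deadline_days."""
    required_rate = cat2_cost / deadline_days   # GHz-days per day
    return required_rate * window_days * margin

# For a ~160 GHz-day Cat 2 exponent this works out to about 115 GHz-days
# over the trailing 90 days.
```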
How many primes have been Cat 1 assignments?
If none, then I do not want any. LOL. Yes, I know, past performance does not guarantee future results.
[QUOTE=TObject;427787]How many primes had been Cat 1 assignments?[/QUOTE]
Well, there has only been one Mersenne prime (M74207281) discovered since the category system was created, and it was Cat 4. So I would say the answer is none. At the time the category system was created, I believe M57885161 was in the Cat 2 range, but it was discovered prime about a year earlier, so it might have been Cat 3 or even Cat 4 if we were to try to extrapolate what its category would have been if the category system was put in place earlier. I'll leave it to someone else to try and figure out if any of the other ones might have been in the top 3000/4000/5000 (whichever you want to choose as the Cat 1 limit) at the time those were assigned.
cuBerBruce, awesome, thank you for that analysis. I want category 4 assignments only, then. LOL
I think that is what I get when I reserve anonymously.
[QUOTE=cuBerBruce;427824]Well, there has only been one Mersenne prime (M74207281) discovered since the category system was created, and it was Cat 4. So I would say the answer is none.
At the time the category system was created, I believe M57885161 was in the Cat 2 range, but it was discovered prime about a year earlier, so it might have been Cat 3 or even Cat 4 if we were to try to extrapolate what its category would have been if the category system was put in place earlier. I'll leave it to someone else to try and figure out if any of the other ones might have been in the top 3000/4000/5000 (whichever you want to choose as the Cat 1 limit) at the time those were assigned.[/QUOTE]One way of extrapolating this is to examine where the first-LL minimum would have been in relation to the prime discoveries.

When M57885161 was discovered, the first-LL minimum was between 44 and 45 million. M57885161 would have been a Cat 4 assignment at that point. When M42643801 was discovered, the first-LL minimum was between 26 and 27 million. Cat 4 again. The "twins" of August and September 2008 - M37156667 and M43112609 - were discovered when the first-LL minimum was between 21 and 22 million. Cat 4 again. Every other prime from there back to M20996011 in November 2003 also looks as though it would have been far enough above the first-LL minimum to have been a Cat 4.

M13466917 came when the first-LL minimum was between 8 and 9 million. Back in time this far, it is difficult to guess the actual number of first-time tests that would have been needed vs. factors found. Based on what we know today, there are ~108,000 unfactored candidates between ~8.5 million and 13,466,917. This puts M13466917 near the Cat 3/Cat 4 borderline. M6972593 was discovered when the first-LL minimum was between 3 and 4 million. This would probably have been a Cat 3 assignment. M3021377 was discovered when the first-LL minimum was between 1 and 2 million. Cat 3.

M2976221 was also discovered when the first-LL minimum was between 1 and 2 million, but we can probably conclude (its discovery being five months earlier than that of M3021377) that M2976221 came when the first-LL minimum was closer to 1 million than in the case of M3021377. I still doubt that this would have been within 10,000 exponents of the first-LL minimum, however, so I would also brand M2976221 a Cat 3.

M1398269 was discovered when the first-LL minimum was still in six figures (indeed, everything below M756839 was not LLed at least once until January 15, 1997, two months after the discovery of M1398269). If we assume a roughly linear progression from M2 to M756839 during the first year of GIMPS, we peg the first-LL minimum right around 631,700. Today we have ~14,000 unfactored candidates between M631700 and M1398269. There would have been even more (but still <100,000) such candidates back in late 1996. Therefore, M1398269 would have been Cat 3.

The moral of the story? Mersenne prime discoverers probably aren't milestone watchers, nor do they shy away from the higher exponents.