#2333
Serpentine Vermin Jar
Jul 2014
3,313₁₀ Posts
Quote:
Then it's just a matter of picking the smallest available assignment above that base value. Either that assignment part or the churn-and-cogitate could or should include a fudge factor, since we're not dealing with an exact science, but you get the idea. In a sense that would micro-categorize and do away with the broad categories 1-4 anyway.

Well, maybe what we call Cat 1 and 2 should still be reserved for the fastest and most reliable systems... those without expirations or bad results. But the rest of the machines just get what we think they can finish, and new machines without a track record would start out at what is now Cat 4; then, after turning in some work, they'd automatically start getting the smaller assignments.

Hmm... well, food for thought there. The nice thing is, something like this *could* be inserted into the existing assignment code, where it would use this new metric as an additional input in the decision tree, perhaps just replacing that opt-in "get preferred assignments" flag for now. Baby steps.
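The churn-and-cogitate metric with its fudge factor could be sketched roughly like this. This is a minimal illustration only; the function name, the record layout, and the 90-day window default are assumptions for the sketch, not Primenet's actual code:

```python
def ghz_day_budget(results, horizon_days=90, fudge=0.90):
    """Estimate what a machine can finish over the next `horizon_days`:
    sum the GHz-days of results it returned over the same window in the
    past, scaled down by a safety fudge factor (this isn't an exact
    science). `results` is a list of (days_ago, ghz_days) records."""
    recent = sum(g for days_ago, g in results if days_ago <= horizon_days)
    return recent * fudge

# A machine that turned in 150 GHz-days over the last 90 days, with a
# 10% safety margin, would be trusted with about 135 GHz-days of work.
print(ghz_day_budget([(10, 50.0), (80, 100.0), (120, 30.0)]))  # → 135.0
```

The returned budget would then feed the decision tree as one more input, alongside whatever the assignment code already considers.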
#2334
|
Undefined
"The unspeakable one"
Jun 2006
My evil lair
1854₁₆ Posts
Quote:
Last fiddled with by retina on 2016-02-28 at 23:35
#2335
|
Serpentine Vermin Jar
Jul 2014
3,313 Posts
Quote:
I did a little thought experiment and saw that, for example, 63349229 would take ~149 GHz-days, so a machine that did 150 GHz-days in the past 90 days would have been able to do that one. What I neglected to consider was that the next *available* exponent above that is in the 67M range and would take ~160 GHz-days, in which case this machine would no longer be the best match.

I suppose what I really should have been thinking about was the smallest *available* exponent it could complete in 90 days to start with. But now that I've thought about this more, if I went by that, we'd have a bunch of machines getting exponents that could potentially take them the full 90 days with little margin for error. Thus my "fudge factor" to mix in there... so if a system cleared 200 GHz-days in the past 90 days, let the fudge factor adjust that up/down by 5-10% or something, just based on how things eventually work out.

Okay, so I haven't really thought out the implementation *that* much...
#2336
|
Undefined
"The unspeakable one"
Jun 2006
My evil lair
1854₁₆ Posts
Well, the smallest exponent a machine could do would be 2. I still get the feeling you meant to say something like: the largest exponent a machine could do within XX days, and pick one below that.
Last fiddled with by retina on 2016-02-29 at 03:41
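That reading (compute the largest exponent a machine could finish in time, then hand out something at or below that cap) might look like the sketch below. The names are illustrative, and `est_cost` stands in for whatever cost-estimate function the server would actually use:

```python
def best_assignment(available, budget_ghz_days, est_cost):
    """Pick the largest available exponent whose estimated cost still
    fits the machine's GHz-day budget; None if nothing fits.
    `available` is an iterable of candidate exponents and `est_cost`
    maps an exponent to its estimated GHz-days."""
    fits = [p for p in available if est_cost(p) <= budget_ghz_days]
    return max(fits) if fits else None

# With a toy quadratic cost model, a 150 GHz-day budget picks the
# largest candidate that stays under budget.
cost = lambda p: 149.0 * (p / 63_349_229) ** 2
print(best_assignment([63_349_229, 67_000_000, 71_000_000], 150.0, cost))
```

This avoids the "smallest exponent would be 2" degenerate case, since the search runs downward from the machine's capability rather than upward from nothing.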
#2337
|
P90 years forever!
Aug 2002
Yeehaw, FL
19·397 Posts
Quote:
I'd suggest something simple, such as either:
1) Any machine that has contributed more than X GHz-days in the last N days is upgraded to Cat 2 assignments, or
2) Nightly, sort CPUs by GHz-days produced in the last N days; the top Y CPUs are automatically upgraded to Category 2.
The two are similar, but the advantage of the second system is that it auto-adjusts over time. The rather minor downside to auto-Cat-2 assignment upgrades is that a user will have only 150 days to complete an assignment where he may have expected 270 days.
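Both rules are easy to prototype. The sketch below uses made-up names and thresholds; the real server logic is not shown in this thread:

```python
def cat2_by_threshold(ghz_days_by_cpu, x):
    """Rule 1: any CPU over X GHz-days in the window gets Cat 2."""
    return {cpu for cpu, g in ghz_days_by_cpu.items() if g > x}

def cat2_by_rank(ghz_days_by_cpu, top_y):
    """Rule 2: nightly, sort CPUs by recent GHz-days and upgrade the
    top Y. This version self-adjusts as overall throughput changes."""
    ranked = sorted(ghz_days_by_cpu, key=ghz_days_by_cpu.get, reverse=True)
    return set(ranked[:top_y])

recent = {"cpu-a": 500.0, "cpu-b": 120.0, "cpu-c": 300.0}
print(sorted(cat2_by_threshold(recent, 200.0)))  # ['cpu-a', 'cpu-c']
print(sorted(cat2_by_rank(recent, 1)))           # ['cpu-a']
```

Note how the fixed threshold promotes a varying number of machines, while the rank rule always promotes exactly Y, which is what makes it self-adjusting.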
#2338
|
If I May
"Chris Halsall"
Sep 2002
Barbados
2×67×73 Posts
Quote:
If Aaron gets the heuristics correct, almost all candidates assigned to "Awesome" machines which were auto-upgraded would complete well before the 270-day deadline. Let's be honest here: ~30 Cat 1 completions a day strongly suggests that something isn't optimal with the current opt-in system....
#2339
|
Einyen
Dec 2003
Denmark
110001010110₂ Posts
We just have to set the required ratio, X GHz-days / N days, higher than the current Cat 2 exponents' GHz-days / 150 days.
Last fiddled with by ATH on 2016-02-29 at 08:38
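That condition rearranges into a concrete threshold: require X GHz-days in the last N days with X/N greater than (cost of a current Cat 2 exponent)/150. A hedged sketch, using illustrative numbers (the ~150 GHz-days Cat 2 cost is an assumption based on the figures earlier in the thread):

```python
def min_x_threshold(cat2_cost_ghz_days, n_days, deadline_days=150):
    """Smallest X such that X/N exceeds cat2_cost/deadline, i.e. the
    machine's demonstrated rate beats the rate needed to finish a
    Cat 2 first-time test before its 150-day deadline."""
    return cat2_cost_ghz_days * n_days / deadline_days

# If a Cat 2 first-time test costs ~150 GHz-days and we look back
# 90 days, the machine must have produced more than 90 GHz-days.
print(min_x_threshold(150.0, 90))  # → 90.0
```

Any machine strictly above that X has demonstrated a throughput rate sufficient to meet the Cat 2 deadline, before applying any extra safety margin.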
#2340
|
Feb 2012
405₁₀ Posts
How many primes have been Cat 1 assignments?
If none, then I do not want any. LOL. Yes, I know: past performance does not guarantee future results.
#2341
|
Aug 2012
Mass., USA
2×3×53 Posts
Well, there has only been one Mersenne prime (M74207281) discovered since the category system was created, and it was Cat 4. So I would say the answer is none.
At the time the category system was created, I believe M57885161 was in the Cat 2 range, but it was discovered prime about a year earlier, so it might have been Cat 3 or even Cat 4 if we were to try to extrapolate what its category would have been had the category system been put in place earlier. I'll leave it to someone else to figure out whether any of the other ones might have been in the top 3000/4000/5000 (whichever you want to choose as the Cat 1 limit) at the time they were assigned.
#2342
|
Feb 2012
3⁴·5 Posts
cuBerBruce, awesome, thank you for that analysis. I want category 4 assignments only, then. LOL
I think that is what I get when I reserve anonymously.
#2343
|
"Nathan"
Jul 2008
Maryland, USA
2133₈ Posts
Quote:
When M57885161 was discovered, the first-LL minimum was between 44 and 45 million. M57885161 would have been a Cat 4 assignment at that point.

When M42643801 was discovered, the first-LL minimum was between 26 and 27 million. Cat 4 again.

The "twins" of August and September 2008 - M37156667 and M43112609 - were discovered when the first-LL minimum was between 21 and 22 million. Cat 4 again. Every other prime from there back to M20996011 in November 2003 also looks as though it would have been far enough above the first-LL minimum to have been a Cat 4.

M13466917 came when the first-LL minimum was between 8 and 9 million. Back in time this far, it is difficult to guess the actual number of first-time tests that would have been needed vs. factors found. Based on what we know today, there are ~108,000 unfactored candidates between ~8.5 million and 13,466,917. This puts M13466917 near the Cat 3/Cat 4 borderline.

M6972593 was discovered when the first-LL minimum was between 3 and 4 million. This would probably have been a Cat 3 assignment.

M3021377 was discovered when the first-LL minimum was between 1 and 2 million. Cat 3.

M2976221 was also discovered when the first-LL minimum was between 1 and 2 million, but we can probably conclude (its discovery being five months earlier than that of M3021377) that M2976221 came when the first-LL minimum was closer to 1 million than in the case of M3021377. I still doubt that it would have been within 10,000 exponents of the first-LL minimum, however, so I would also brand M2976221 a Cat 3.

M1398269 was discovered when the first-LL minimum was still in six figures (indeed, everything below M756839 was not LLed at least once until January 15, 1997, two months after the discovery of M1398269). If we assume a roughly linear progression from M2 to M756839 during the first year of GIMPS, we peg the first-LL minimum right around 631,700. Today we have ~14,000 unfactored candidates between M631700 and M1398269. There would have been even more (but still <100,000) such candidates back in late 1996. Therefore, M1398269 would have been Cat 3.

The moral of the story? Mersenne prime discoverers probably aren't milestone watchers, nor do they shy away from the higher exponents.