#67
P90 years forever!
Aug 2002
Yeehaw, FL
20127₈ Posts
It is a short-term rolling average. The computer must still pass the reliability/confidence/minimum-speed tests to get preferred assignments.
If the wording on the assignment rules page can be made clearer, please make suggestions.
#68
Just call me Henry
"David"
Sep 2007
Liverpool (GMT/BST)
3·23·89 Posts
Quote:
It seems to me that the 1 core on LL would need to be finishing 8 LLs per 90 days. Either that or it needs rewording; it should be 2 IMO.
#69
P90 years forever!
Aug 2002
Yeehaw, FL
17·487 Posts
#70
P90 years forever!
Aug 2002
Yeehaw, FL
8279₁₀ Posts
#71
1976 Toyota Corona years forever!
"Wayne"
Nov 2006
Saskatchewan, Canada
14CD₁₆ Posts
Just noticed this comment on the DC rules page:

"Your account is set to get the smallest exponents. All your computers must meet the requirements for returning results in a timely manner as outlined below."

This concerns me. Among my dozen or so PCs, most will pass but some certainly will not, and I suspect many other users are in the same situation. Taken literally, this suggests I am completely ineligible, even though I have about 20 fast, full-time cores assigned to what I had hoped would be preferred DC and LL work. I expected these rules to be per-PC rather than per-account.
#72
P90 years forever!
Aug 2002
Yeehaw, FL
17·487 Posts
I'll change the wording.
#73
P90 years forever!
Aug 2002
Yeehaw, FL
17·487 Posts
The server applied the new rules for recycling DC assignments in the top 1500.
Assignments over a year old and < 60% complete were recycled. Assignments over a year and a half old were recycled. The actual SQL query was:
Code:
((dt_when_assigned < '2014-03-01' AND -- Grandfathered assignment
exponent < @exp1 AND -- exponent is in the most critical category
dt_when_assigned < DATEADD (DAY, -365, GETDATE()) AND -- and assignment is over a year old
percent_done < 60 + (DATEDIFF (DAY, dt_when_assigned, GETDATE()) - 365) / 3) OR -- plus a grace period if close to finished
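The pasted SQL is truncated after its first OR branch, so here is a rough Python sketch of the recycling rule as described in the prose. The second branch is reconstructed from "assignments over a year-and-a-half old were recycled", and `EXP1` and the 548-day cutoff are assumptions, not values from the server:

```python
from datetime import date

EXP1 = 34_000_000                 # hypothetical @exp1 category boundary (assumption)
RULE_CHANGE = date(2014, 3, 1)    # assignments before this date are grandfathered

def should_recycle(assigned: date, exponent: int,
                   percent_done: float, today: date) -> bool:
    age_days = (today - assigned).days
    grandfathered = assigned < RULE_CHANGE
    # Branch 1 (from the SQL): grandfathered, most critical category,
    # over a year old, and under 60% done plus a 1%-per-3-days grace period.
    over_a_year = (grandfathered and exponent < EXP1 and age_days > 365
                   and percent_done < 60 + (age_days - 365) / 3)
    # Branch 2 (reconstructed from the prose): over a year and a half old.
    over_18_months = grandfathered and age_days > 548
    return over_a_year or over_18_months
```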
Last fiddled with by Prime95 on 2014-02-07 at 05:56
#74
If I May
"Chris Halsall"
Sep 2002
Barbados
2·11²·47 Posts
Quote:
Modeling this a bit more, I'd like to suggest that the ranges for categories 1 and 2 could usefully be increased, assuming our goals are to compress the "wave", reduce "poaching", and ensure that work is completed appropriately, rather than having a slow machine waste its time and energy only to have its assignment recycled by the system after it has already invested cycles.

As an example, I personally can do about 15 DCs a day on my machines alone (and I'm by no means the fastest DC'er). Given the 1,500 cut-off for Category 1, which must be completed within 60 days, I can do about 900 of these myself (if assigned), and thus those assigned low Category 2 work will quickly find it promoted to Category 1 and recycled after the 100-day limit. Meanwhile, my and other fast computers will be assigned Cat 2 or even Cat 3 work. Now, maybe this isn't a bad thing, and I'd have no problem taking on such work.

My primary point is that once the legacy assignments (more than a year old, assigned before 2014-03-01) are recycled and processed, we're likely to again find ourselves with slower machines holding up milestones (for up to 240 days) and possibly having their work wasted.

Personally, I'm happy to take on whatever assignments make the most sense. But since this exercise was meant to help prevent hold-ups (and discourage poaching), perhaps the cut-offs should be increased, say to 5,000 for Cat 1 and 10,000 for Cat 2 (just throwing numbers out there; basically a bit less than our daily production rate times the number of days they are valid). Thoughts?
#75
Romulan Interpreter
"name field"
Jun 2011
Thailand
2833₁₆ Posts
Although what you propose makes total sense (in my opinion), my take would be more of "let's run for a while with what we have" and see how things progress. The limits can be adjusted later, and generally, not everybody has 96 high-profile CPU cores on hand, as you do (talking only about the six R720 servers, and not counting the "normal" computers and workstations you control).
#76
Dec 2002
881 Posts
As far as I am concerned, the objective of the rules is to make sure that low-end exponents are handled by fast, reliable machines in due time. So the main thing we have to agree on is what constitutes a "low-end" exponent. I agree that the number of available fast, reliable machines influences that.
However, we are not trying to feed the fast, reliable machines with low-end exponents just to keep them busy. We should use these machines when needed, not when available.
#77
If I May
"Chris Halsall"
Sep 2002
Barbados
2·11²·47 Posts
Quote:
But whatever is communally decided I'm fine with.
Similar Threads

| Thread | Thread Starter | Forum | Replies | Last Post |
|---|---|---|---|---|
| PrimeNet Assignment Rules | S485122 | PrimeNet | 11 | 2021-05-20 14:54 |
| Modifications to DC assignment rules | Prime95 | PrimeNet | 74 | 2017-01-18 18:36 |
| Understanding assignment rules | Fred | PrimeNet | 3 | 2016-05-19 13:40 |
| Proposed LL assignment and recycle rules | Prime95 | Data | 156 | 2015-09-19 12:39 |
| Proposed TF, P-1, ECM assignment and recycle rules | Prime95 | Data | 9 | 2014-02-27 23:52 |