#1090
P90 years forever!
Aug 2002
Yeehaw, FL
19·397 Posts
|
|
|
|
|
|
#1091
|
If I May
"Chris Halsall"
Sep 2002
Barbados
10011000110110₂ Posts
Quote:
Could you please, sir, speak a little more deeply about the current debate?
|
|
|
|
|
|
#1092
|
P90 years forever!
Aug 2002
Yeehaw, FL
19×397 Posts
Quote:
Some rambling thoughts of my own:

As best we can, we need to honor our prior commitment to not recycle, for a year, exponents that are being actively reported.

I don't think we can place much value on the %complete metric. I often queue up months of work. These exponents report no progress until they finally reach the top of the queue, and then they finish quickly. We can, however, use the %complete metric once it goes non-zero: if an exponent starts progressing at 1% a week for an extended period of time, it is likely to take 100 weeks. The downside is that the server does not keep a history of this data.

We need to come up with different strategies for:
- DC and LL work at or near the trailing edge
- LL work in the 100M area
- other DC and LL work
- TF and P-1 work
- ECM work

IIRC, this is where my proposal 5 years ago failed.

Manual assignments do need to default to a longer time to expire. When I download exponents for a GPU, I get several months' worth (and extend the expiration dates). I don't want to be bothered with doing this manually every 1, 2, or even 3 months.

How to proceed? Do you want me to start separate threads for each strategy with an initial proposal?
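The rate-based estimate described in the quote above (steady 1% a week implies roughly 100 weeks) amounts to a simple linear extrapolation that ignores the leading run of 0% reports from queued work. A minimal sketch of that idea, with hypothetical function and data names (this is not part of any GIMPS server code):

```python
def estimate_weeks_remaining(history):
    """Estimate weeks to completion from (week, percent_complete) samples.

    Skips the leading 0% reports (work still sitting in a queue) and
    extrapolates linearly once progress goes non-zero, as suggested above.
    """
    # Keep only samples where the exponent has actually started progressing.
    active = [(week, pct) for week, pct in history if pct > 0]
    if len(active) < 2:
        return None  # not enough non-zero data points to estimate a rate

    (w0, p0), (w1, p1) = active[0], active[-1]
    rate = (p1 - p0) / (w1 - w0)  # percent per week
    if rate <= 0:
        return None
    return (100 - p1) / rate

# Queued for five weeks, then progressing at 1% per week:
print(estimate_weeks_remaining([(0, 0.0), (4, 0.0), (5, 1.0), (6, 2.0)]))  # 98.0
```

As the quote notes, this only works if someone keeps the per-week history, since the server itself does not.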
|
|
|
|
|
|
#1093
|
P90 years forever!
Aug 2002
Yeehaw, FL
16567₈ Posts
I last left the computer in January 2014, so we don't know yet. I did several tests of having it reboot after a simulated power failure. It passed all the tests, so I have hopes it will be able to run for 4+ months straight.
|
|
|
|
|
|
#1094
|
If I May
"Chris Halsall"
Sep 2002
Barbados
23066₈ Posts
|
|
|
|
|
|
#1095
|
"Serge"
Mar 2008
Phi(4,2^7658614+1)/2
9,497 Posts
Quote:
I have retired them now. I later used a free instance of the so-ugly-it's-beautiful 32-bit Linux "tiny"s, which I used as a time machine to travel back to 2007 and build modified NewPGen binaries (one needs an ancient gcc-3 and the accompanying static libs).
|
|
|
|
|
|
#1096
|
If I May
"Chris Halsall"
Sep 2002
Barbados
2·67·73 Posts
Quote:
Slow (or often offline) computers are more than welcome. But they should be assigned work which is appropriate for their ability. Ideally the curves would cross perfectly. It's a difficult problem. But, then, we here often deal with difficult problems....
|
|
|
|
|
|
#1097
|
"Serge"
Mar 2008
Phi(4,2^7658614+1)/2
10010100011001₂ Posts
Agreed. Clearly defined rules and open discussion are fine things to have. (As opposed to simply poaching "because the bureaucracy will be too slow to change the rules". Here at GIMPS, the list of bureaucrats is very short ;-) and they actually demonstrate reasonable nimbleness.)

You wouldn't want to turn away any live users, no matter how slow they are. What seems to be the problem are zombie/unmanned-droid types of things. You need a Turing-like test to recognize their behavior from the patterns in the existing database of accesses (which is hard, because it is not very detailed) - and at the same time not hurt live users, no matter how closely they might resemble droids.

When you set the new rules, you leave a grandfathered period for the old rules, too. You are right in theory that "slow computers ... should be assigned work which is appropriate for their ability," but machines enter the pool all the time, and you don't know right away whether they are slow or not. Fast forward to "now": they have the assignments and they are holding "the milestones". If we yank some assignments away, we will arguably get some immediate acceleration, and then (depending on the fairness of the execution) piss off more (or fewer) live users, who will then leave. Try to integrate all of this. Conservative changes make sense.
|
|
|
|
|
#1098
|
"Mr. Meeseeks"
Jan 2012
California, USA
2168₁₀ Posts
|
|
|
|
|
|
#1099
|
"Kyle"
Feb 2005
Somewhere near M52..
3·5·61 Posts
It appears that I have been absent for a great deal of interesting conversation.
|
|
|
|
|
|
#1100
|
"Richard B. Woods"
Aug 2002
Wisconsin USA
2²·3·641 Posts
Quote:
Not one single person has EVER been able to explain to me what "waste" occurs in these cases!

In fact, there is no "waste" involved. What there _is_ is that some people want to justify taking poaching-type (or assignment-cancelling) action by projecting their own internal feelings of impatience onto the GIMPS project. By doing this, they can pretend that GIMPS is somehow "impatient" with the progress of milestones, but that's only their own self-deception, not reality. Such people need to learn self-control, not propose rules that unjustifiably demean the contributions of "slow" systems.

If anyone disagrees, then please publicly explain just how there is any "waste" when milestones are not achieved as fast as impatient people want them to be completed. Do NOT confuse project "waste" with internal feelings of impatience born of poor self-control.
|
|
|
|