#12
Dec 2002
2·11·37 Posts
Quote:
Code:
create a table named active_clients with an entry for each day, from the
first day of server assignments until today, holding an integer for the
number of machines counted as active on that day.

select the dates of assignment and the dates of completion of each
completed LL test, DoubleCheck or Factor Found done by every unique
user+machine.

for each day in table active_clients:
    if day > date_of_assignment and day <= date_of_completion
    in any of the selected assignments, then increase the number
    of active clients on that day by one.

YotN, and curious, Henk Stokhorst
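Henk's pseudocode above can be sketched in Python. The assignment dates here are made-up illustrative data, not real server records; the real implementation would query the PrimeNet database:

```python
from datetime import date, timedelta

# Hypothetical completed assignments, one tuple per unique user+machine:
# (date_of_assignment, date_of_completion). Illustrative data only.
assignments = [
    (date(2002, 11, 1), date(2002, 11, 20)),
    (date(2002, 11, 5), date(2002, 12, 2)),
    (date(2002, 11, 25), date(2002, 12, 10)),
]

first_day = min(a for a, _ in assignments)  # first day of server assignments
today = date(2002, 12, 15)                  # "until today"

# active_clients: one integer per day, as in Henk's table.
active_clients = {}
day = first_day
while day <= today:
    count = 0
    for assigned, completed in assignments:
        # A machine counts as active strictly after its assignment date,
        # up to and including the day it returned the result.
        if assigned < day <= completed:
            count += 1
    active_clients[day] = count
    day += timedelta(days=1)

print(active_clients[date(2002, 11, 10)])  # → 2 (two tests in flight)
```

Note the strict inequality on the assignment side: a machine is not counted on the day it receives the assignment, only from the next day through the day of completion, exactly as the pseudocode specifies.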
#13
Jan 2003
Altitude>12,500 MSL
1458 Posts
I think I see now: Henk is counting a machine as active if it returns a result that day, assuming machines eventually settle into a steady-state rate of result production.

I have done similar analyses and learned that something like 10% of the machines do 90% of the work (that's not a precise recollection), supporting Henk's general observation. A similar estimate can be derived directly from the aggregate CPU rate in GFLOPS: say 8000 GFLOPS, assuming 1 FLOP/Hz and an average 24x7 PC clock of 0.5 GHz, gives 16,000 PCs. However, from a project-participation standpoint, it's more important to show the computers that are active in mid-test even if they may not be very productive. Not everyone overclocks, runs their PC 24x7, or has a 2 GHz CPU, and their participation in GIMPS is just as important from a teamwork perspective as anyone's. Consequently, so long as a machine is updating the server with some kind of progress we must count it as an active machine, and differentiate rates of productivity by other means.
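The back-of-the-envelope PC count works out like this (all three figures are the post's stated assumptions, not measured values):

```python
# Estimate of active PCs from aggregate throughput, using the
# assumptions stated in the post above.
aggregate_gflops = 8000.0   # assumed aggregate project throughput, GFLOPS
flops_per_hz = 1.0          # assumed 1 floating-point op per clock cycle
avg_clock_ghz = 0.5         # assumed average 24x7 PC clock, GHz

# Each PC contributes avg_clock_ghz * flops_per_hz GFLOPS.
estimated_pcs = aggregate_gflops / (avg_clock_ghz * flops_per_hz)
print(int(estimated_pcs))  # → 16000
```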
#14
Dec 2002
1100101110₂ Posts
Quote:
Basically I want to know how many clients are started up out of whatever interest of a user, and then apparently get abandoned for whatever reason. If we could influence that ratio... YotN, Henk.
#15
Jan 2003
Altitude>12,500 MSL
101 Posts
Quote:
Owing to the gravity of its impact on the assignment population, a v4 sync has a few manual steps that act as checkpoints, ensuring the data is properly staged before the bulk merge procedure runs. Getting these steps back into gear just took a bit longer: they had to be dusted off and tweaked for a new data format from George. Perhaps a quarterly v4 sync makes the most sense?
#16
Sep 2003
5×11×47 Posts
The main thing would be just to prevent the server from ever handing out an assignment of an already-factored or already-verified-composite exponent.
This is pretty rare, but really should never happen...
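The guard described above could be sketched as a status check at assignment time. The status labels, exponent values, and function name here are all hypothetical, not the real PrimeNet schema:

```python
# Hypothetical exponent-status table; in the real server this would be a
# database lookup. Status labels and exponents are illustrative only.
exponent_status = {
    100003: "factored",            # a factor is already known
    100019: "verified_composite",  # LL result already double-checked
    100043: "untested",
}

def can_assign(p, status_db):
    """Refuse to hand out exponents whose character is already settled.

    An exponent with a known factor, or whose compositeness has been
    verified by a matching double-check, must never be reassigned.
    """
    status = status_db.get(p, "untested")
    return status not in ("factored", "verified_composite")

print(can_assign(100003, exponent_status))  # → False (already factored)
print(can_assign(100043, exponent_status))  # → True  (still needs work)
```

Running the check on every assignment request, rather than only at sync time, is what makes the "should never happen" guarantee hold even when the v4 sync is infrequent.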