Is there any option on the server side to send a few WUs to clients so as to have some buffer, or would the only option be to increase the sieve range once again, to 4k or even 8k?
|
Is there any way that the [url]http://factoring.cloudygo.com[/url] could record 'relations in last 24 hours' as well as 'relations since the project started' - it would make it rather easier to see what happens when configurations change.
|
[QUOTE=pinhodecarlos;519822]Is there any option on the server side to send a few WUs to clients so as to have some buffer, or would the only option be to increase the sieve range once again, to 4k or even 8k?[/QUOTE]
None that I know of for buffering. What does "only option" refer to? As for workunit size, we'll be changing it to 8k or 10k when we move to I=15, since yield drops from 6.x on I=16 at ~200M to 2.2 on I=15 at ~300M. So far we have sieved Q from 8M to 68M for 533M raw relations, an average yield of 8.9. The yield on current WUs is in the mid-8's. I'll run remdups shortly to see what our duplicate rate looks like. EDIT: For Q from 8-60M: 349.5M unique, 118.5M duplicate, 0 bad relations, 468M total. Duplicate rate 25%, better than the C206 that Greg ran on 16e/33 for us (that one was 792M uniq, 373M dup). I'll run it again when Q reaches 70M. |
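As a quick sanity check of the figures in the post above, the duplicate rate and average yield follow directly from the quoted numbers (all relation counts in millions; this is just the post's arithmetic restated, not output from remdups itself):

```python
# Duplicate rate for Q = 8-60M, from the remdups numbers quoted above.
unique_M, dup_M = 349.5, 118.5        # millions of relations
total_M = unique_M + dup_M            # 468.0M total
dup_rate = dup_M / total_M            # ~0.253, i.e. the "25%" in the post

# Average yield: 533M raw relations over the special-q range 8M..68M.
raw_M, q_span_M = 533, 68 - 8
avg_yield = raw_M / q_span_M          # ~8.9 relations per special-q

print(f"dup rate {dup_rate:.1%}, avg yield {avg_yield:.1f}")
```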
[QUOTE=fivemack;519837]Is there any way that the [url]http://factoring.cloudygo.com[/url] could record 'relations in last 24 hours' as well as 'relations since the project started' - it would make it rather easier to see what happens when configurations change.[/QUOTE]
I'd like this too. Perhaps replace the timestamp of the last workunit with this stat? We could see the project-difficulty trend as our numbers gently slide, see who is new and rising fast, and, as fivemack said, easily quantify changes in client configuration. |
[QUOTE=VBCurtis;519854]I'd like this too. Perhaps replace the timestamp of the last workunit with this stat? We could see the project-difficulty trend as our numbers gently slide, see who is new and rising fast, and, as fivemack said, easily quantify changes in client configuration.[/QUOTE]
I'll try to add it to the top line and graph page tonight. |
[QUOTE=SethTro;519857]I'll try to add it to the top line and graph page tonight.[/QUOTE]
Done: [url]http://factoring.cloudygo.com/progress/2330L.c207/daily_r[/url] (under the Charts tab), plus a number on the first page: [url]http://factoring.cloudygo.com/[/url] |
[QUOTE=SethTro;519863]Done
[url]http://factoring.cloudygo.com/progress/2330L.c207/daily_r[/url] (which is under Charts tab) also a number on the first page. [url]http://factoring.cloudygo.com/[/url][/QUOTE] The daily_r curve is useful, because differentiating the total-work-done curve by eye isn't really possible; thank you. I think it would be even more useful to have the last-24-hours figure at a per-client level. |
What about associating clients with teams?
|
Some thoughts on hyper-threading and -t lots
Median runtimes in various configurations, on the same hardware (fortunately I have three identical computers):
[code]
One job   -t32                                     1090s (= 2180s for two)
Two jobs  -t8                                      2132s/2
Two jobs  -t16  taskset 0-15; 16-31                1742s/2
Two jobs  -t16  taskset 0-7,16-23; 8-15,24-31      1915s/2
[/code] So, on these dual-socket eight-core machines, the right answer is to run two jobs: one across both sockets, and the other on the other hyperthread across both sockets. I think I'd expected two jobs to be better than one, but I'm a bit surprised that having both jobs use both sockets is significantly better than splitting by socket. |
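Comparing these configurations as effective wall-seconds per job makes the ranking explicit (the labels below are shorthand for the taskset masks in the table; the numbers are the medians quoted above):

```python
# Effective wall-time per job for each configuration from the table above.
# "One job -t32" runs jobs sequentially, so two jobs cost 2x1090s.
configs = {
    "one job, -t32 (two run sequentially)": 2180 / 2,   # 1090.0 s/job
    "two jobs, -t8":                        2132 / 2,   # 1066.0 s/job
    "two jobs, -t16, split 0-15 / 16-31":   1742 / 2,   #  871.0 s/job
    "two jobs, -t16, split by socket":      1915 / 2,   #  957.5 s/job
}

best = min(configs, key=configs.get)
print(f"best: {best} at {configs[best]:.1f}s per job")
```

The winning split (0-15 vs 16-31) gives each job one hardware thread on every physical core of both sockets, which is why the author describes it as each job running "across both sockets".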
We've reached Q=70M, so I re-ran remdups:
[code]
Q       Unique   Dup      Total
8-60M   349.5M   118.5M   468.0M
8-70M   402.4M   147.7M   550.1M
[/code] Q=60-70M added 82M relations, but only 53M unique. |
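The marginal figures for the Q=60-70M slice fall out of differencing the two cumulative remdups rows (all values in millions; the ~36% marginal duplicate rate is derived here, not stated in the post):

```python
# Marginal relations for Q = 60-70M, from the cumulative remdups rows above.
uniq_60, tot_60 = 349.5, 468.0   # Q = 8-60M (millions of relations)
uniq_70, tot_70 = 402.4, 550.1   # Q = 8-70M

new_total  = tot_70 - tot_60                    # ~82.1M relations added
new_unique = uniq_70 - uniq_60                  # ~52.9M of them new
marginal_dup_rate = 1 - new_unique / new_total  # ~0.356

print(f"{new_total:.1f}M added, {new_unique:.1f}M unique, "
      f"{marginal_dup_rate:.1%} duplicate")
```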
[QUOTE=SethTro;519664]Vebis has ~10 clients (vebis.1, vebis.2, ...) so I joined all of those to vebis.<X>
I'm going to join all your clients into lukerichards.<comp> for the main tab and then add a new tab for all clients[/QUOTE] Is there any way to add instance-1, localhost and lrichards-pre2core to this collection as well, so I can see my complete combined stats without mental arithmetic? Thanks. |