#243
"Luke Richards"
Jan 2018
Birmingham, UK
288₁₀ Posts
Quote:
instance-1 was running 4 threads on an 8-CPU machine, so while throughput per thread increased, there were 4 unused cores for the entire time it was running. So essentially, the apparent workrate of this instance is twice what it actually was in terms of rate per core.

Having done a few tests of various thread configurations over the weekend, I have switched back to 8-threaded work. Perhaps I would get more out of 2 clients each running 4 threads, but right now I can't be doing with adjusting my setup to account for this.
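For anyone repeating these comparisons, a minimal sketch of the per-core normalisation being described; the relation rates are made-up numbers, purely for illustration:

```python
# Compare CADO client configurations by rate per *core*, not per thread:
# a config that leaves cores idle can look faster per thread while the
# machine as a whole does less work. All numbers below are hypothetical.

configs = {
    # name: (threads_used, total_cores, relations_per_hour)
    "4 threads / 8 cores": (4, 8, 60_000),
    "8 threads / 8 cores": (8, 8, 100_000),
}

for name, (threads, cores, rels) in configs.items():
    per_thread = rels / threads
    per_core = rels / cores   # idle cores still count against the machine
    print(f"{name}: {per_thread:,.0f} rels/h/thread, "
          f"{per_core:,.0f} rels/h/core")
```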
#244
"Curtis"
Feb 2005
Riverside, CA
2·3·7·113 Posts
Quote:
Thanks for the HT speed correction; I estimated a 20% improvement in previous personal tests on smaller CADO jobs, and had also misremembered your measurements.
#245 |
"Carlos Pinho"
Oct 2011
Milton Keynes, UK
4887₁₀ Posts
That 15% was related to the project's progress, not HT... lol. 20% is accurate for the increase in output with HT on.
Last fiddled with by pinhodecarlos on 2019-06-17 at 17:41
#246 |
"Curtis"
Feb 2005
Riverside, CA
2×3×7×113 Posts
The server seems to have ceased sending workunits; a couple of my clients are stalled, while others are still crunching. The terminal housing the CADO server lost contact, but trying to restart the server gives me an error that it's already running. Hopefully I'll have this fixed in a few minutes; it's not obvious to me how to kill the server process.

EDIT: success! I killed python processes randomly until CADO let me restart. It's back up now.

Last fiddled with by VBCurtis on 2019-06-18 at 15:39
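For the record, a less random way to hunt down the stray server process next time might look like the sketch below. It assumes the server runs under a python command line containing "cado-nfs" (adjust the match string to the actual setup) and that the psutil package is installed:

```python
# Find and terminate leftover cado-nfs server processes by inspecting
# command lines, instead of killing python processes at random.
# Assumes psutil is installed; "cado-nfs" as the match string is a guess.
import psutil

for proc in psutil.process_iter(["pid", "cmdline"]):
    cmdline = " ".join(proc.info["cmdline"] or [])
    if "cado-nfs" in cmdline:
        print(f"terminating pid {proc.info['pid']}: {cmdline}")
        proc.terminate()  # polite SIGTERM; escalate to proc.kill() if ignored
```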
#247 |
"Luke Richards"
Jan 2018
Birmingham, UK
2⁵·3² Posts
How about a tab with the same table as "Client stats" but where clients are grouped by user?
Because of my testing of various configurations I've got over half a dozen different clients on the main table, but I'd be interested to see my overall contribution without having to add it up manually.
#248
"Seth"
Apr 2019
415₈ Posts
Quote:
I'm going to join all your clients into lukerichards.<comp> for the main tab, and then add a new tab for all clients.
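If the joining is done purely by name prefix, the aggregation could be as simple as the sketch below; the user.machine naming convention and the example numbers are assumptions based on the lukerichards.<comp> scheme mentioned above:

```python
# Group per-client stats into per-user totals, assuming client names
# follow a "user.machine" convention (e.g. "lukerichards.laptop").
from collections import defaultdict

client_stats = {
    # hypothetical example data: client name -> workunits completed
    "lukerichards.laptop": 120,
    "lukerichards.desktop": 340,
    "vbcurtis.server1": 900,
}

user_totals = defaultdict(int)
for client, wus in client_stats.items():
    user = client.split(".", 1)[0]  # text before the first dot is the user
    user_totals[user] += wus

for user, total in sorted(user_totals.items()):
    print(f"{user}: {total} workunits")
```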
#249 |
"Seth"
Apr 2019
100001101₂ Posts
I added an individual client tab.
@VBCurtis, if you have logs from a past run, would you mind if I also visualize them? I'd like to support displaying multiple factoring efforts.
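A minimal sketch of the kind of log parsing this might need, assuming each line starts with an ISO-style timestamp and that a cumulative relation count can be pulled out with a regex; the actual CADO log format may well need a different pattern:

```python
# Extract (timestamp, relations) pairs from a CADO-style log so a past
# factoring effort can be plotted alongside the live one.
# The regex is an assumption; adapt it to the real log format.
import re
from datetime import datetime

LINE_RE = re.compile(
    r"^(\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}).*?(\d+) relations"
)

def parse_log(path):
    points = []
    with open(path) as f:
        for line in f:
            m = LINE_RE.search(line)
            if m:
                ts = datetime.strptime(m.group(1), "%Y-%m-%d %H:%M:%S")
                points.append((ts, int(m.group(2))))
    return points
```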
#250 |
(loop (#_fork))
Feb 2006
Cambridge, England
13×491 Posts
The server seems to have been unhappy since 21/6 1543Z, and is still unhappy at 1613Z.
Would there be any possibility of hosting the server in the AWS cloud, or is twelve cents a gigabyte impractical? 2.7 billion relations at 78 bytes per gzipped relation is about fifty dollars, assuming you have to pay once to get them in and again to get them out.
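Spelling out the back-of-the-envelope above (the flat $0.12/GB each way is fivemack's assumption; real AWS pricing varies by direction and region, and ingress is often free):

```python
# Rough AWS transfer cost for the relation set, using fivemack's figures.
relations = 2.7e9      # expected relations
bytes_per_rel = 78     # bytes per gzipped relation (estimate)
price_per_gb = 0.12    # USD per GB transferred (assumed flat rate)

total_gb = relations * bytes_per_rel / 1e9
cost_one_way = total_gb * price_per_gb
print(f"{total_gb:.0f} GB, ${cost_one_way:.0f} one way, "
      f"${2 * cost_one_way:.0f} in and out")
```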
#251 |
Sep 2008
Kansas
CF6₁₆ Posts
Another thought, in addition to the above: can we get another server going for the second half of the project, to run simultaneously? We could pull in small-memory machines, in addition to having a backup connection should one server go AWOL.
#252 |
"Curtis"
Feb 2005
Riverside, CA
2·3·7·113 Posts
The server is back up. Each time CADO has crashed, it has been the software, not the hardware; I have not rebooted the system, and my other SSH connections to the same machine are still live.
This time, as last, the server instance was still running; restarting produced an error, and I had to manually kill python instances to get CADO to reboot. Apologies for the delay in giving it a kick; I was in Vegas for 3 days, checking twice a day remotely via Seth's webpage, but hadn't checked since this morning.

As for moving it to AWS, I have zero experience with cloud use. If that is the desired pathway, someone else will have to manage that part of the job. Similarly, I am not confident about merging the relations from two CADO runs; otherwise I would have started the I=16 and I=15 runs at nearly the same time.

On the bright side, yield so far is much higher than my test-sieves indicated, so we will be able to transition to I=15 earlier than I expected (say, at 180M or 200M relations rather than 220-250M).
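For what it's worth, merging relations from two runs is conceptually just concatenation plus duplicate removal, keyed on the (a,b) pair. A minimal sketch, assuming CADO's usual "a,b:..." relation lines; at real scale, CADO's own duplicate-removal stages (dup1/dup2) are the proper tool:

```python
# Merge relation files from two runs, keeping one copy of each (a,b)
# pair. Relation lines are assumed to start "a,b:" as in CADO output.
import sys

def merge_relations(paths, out_path):
    seen = set()
    with open(out_path, "w") as out:
        for path in paths:
            with open(path) as f:
                for line in f:
                    if line.startswith("#"):
                        continue                 # skip comment/header lines
                    ab = line.split(":", 1)[0]   # the "a,b" key
                    if ab not in seen:
                        seen.add(ab)
                        out.write(line)

if __name__ == "__main__":
    # usage: python merge_rels.py run1.rels run2.rels merged.rels
    merge_relations(sys.argv[1:-1], sys.argv[-1])
```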
#253
(loop (#_fork))
Feb 2006
Cambridge, England
13·491 Posts
Quote:
Similar Threads:
Thread | Thread Starter | Forum | Replies | Last Post
Coordination thread for redoing P-1 factoring | ixfd64 | Lone Mersenne Hunters | 81 | 2021-04-17 20:47
big job planning | henryzz | Cunningham Tables | 16 | 2010-08-07 05:08
Sieving reservations and coordination | gd_barnes | No Prime Left Behind | 2 | 2008-02-16 03:28
Sieved files/sieving coordination | gd_barnes | Conjectures 'R Us | 32 | 2008-01-22 03:09
Special Project Planning | wblipp | ElevenSmooth | 2 | 2004-02-19 05:25