#276
"Carlos Pinho"
Oct 2011
Milton Keynes, UK
11356₈ Posts

#277
"Seth"
Apr 2019
11101011₂ Posts
Maybe a 'Primes' badge for turning in a WU with an X-prime number of relations.

#278
"Seth"
Apr 2019
5×47 Posts
I managed to get timestamps by adding

Code:
|& awk '{ print strftime("%Y-%m-%d-%H:%M:%S || ", systime()), $0 }'

Would I just run each configuration for a couple of hours and see how long between WUs?

2nd question: Is there a way to gracefully end a task? Other programs I've used allow you to press Ctrl+X to "quit after the current WU is finished". I turn my server off occasionally, and I don't like killing 8 tasks (or ~4 amortized WUs) each time I do that.

3rd question: If a WU completed but the server upload failed, is there an easy way to upload those results? I have about 10 WUs in this state, which isn't a lot but it annoys me.
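For context, a minimal sketch of how that pipe might be attached to the client; the script name is the stock CADO-NFS client, but the server URL is a placeholder:

Code:
# |& (bash 4+) pipes both stdout and stderr through awk, which
# prefixes every line of client output with a timestamp.
./cado-nfs-client.py --server=https://example.org:8012 |& \
    awk '{ print strftime("%Y-%m-%d-%H:%M:%S || ", systime()), $0 }'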

#279
"Curtis"
Feb 2005
Riverside, CA
2²·1,151 Posts
I know of no way to submit WUs outside of the client Python script, but you might have a look at the Python code itself to see how submissions are handled and try to submit one manually. Be warned that WUs are reissued after a few hours, so a WU that's more than, say, 8 hours stale has already been reissued and run by someone else. It may not be worth your time to learn how to submit a stale one.
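If you do want to experiment, here is a hedged sketch of what a manual upload might look like, assuming the server accepts the same HTTP POST the client sends. The upload path, form-field names, URL, and filenames are guesses to be checked against the client source, not a documented interface:

Code:
# Hypothetical manual resubmission of a finished WU's result file.
# Verify the path and field names against cado-nfs-client.py before
# trusting this; --insecure tolerates a self-signed certificate.
curl --insecure \
     --form "WUid=example_sieving_104000000" \
     --form "clientid=myhost.0" \
     --form "results=@example.104000000-104010000.gz" \
     "https://example.org:8012/cgi-bin/upload.py"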
I've been playing with the number of threads and taskset myself, using your "relations in last 24 hr" as my metric. It seems easier than a timestamp, especially since each WU finds a variable number of relations, so you'd have to divide relations by time every time anyway.

I'm getting about 10% more production with 9 threads on a 6-core i7-5820K than I did with 6 threads; this blows a hole in my idea that exceeding 8 threads on one client has diminishing returns. I haven't tried 10-12 threads yet; I resumed the other tasks that also run on this machine instead.

Edit: note that relations get marginally more difficult to find as Q increases, so your production rate will drift lower over the course of weeks. If you do such tests, do them on consecutive days.

remdups update: Q from 8-100M: 545M unique, 234M duplicate, 779M total. Still zero bad relations. The duplicate rate is now 30% overall; not great, but yield (relations divided by Q-range) is still better than expected. We may have to bump the relations target up a bit if the duplicate rate continues to worsen (it will).

Last fiddled with by VBCurtis on 2019-07-02 at 06:14
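One hedged way to compute that "relations in last 24 hr" metric: count non-comment lines in result files written in the past day. The download directory and *.gz naming are assumptions about a default client setup; in las output, relations are the lines that don't start with '#':

Code:
# Count relations found in the last 24 hours (-mmin -1440).
find ~/cado/client/download -name '*.gz' -mmin -1440 -print0 |
    xargs -0 zcat 2>/dev/null | grep -vc '^#'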

#280
Mar 2018
3×43 Posts

#281
"Luke Richards"
Jan 2018
Birmingham, UK
2⁵×3² Posts
Code:
z600 lucky 26693 weeks 6 CPU-years 1 2457 37659341 (4.2% total) 388.5 1.122 829670 2019-07-03 21:48:07,117
lukerichards-<COMP> unlucky 4411 weeks 5 2719 37466196 (4.2% total) 252.1 1.72 1026538 2019-07-03 21:45:26,759

Note the race for 7th place is hotting up.

#282
(loop (#_fork))
Feb 2006
Cambridge, England
7·911 Posts

#283
Dec 2017
Goldsboro, NC
2A₁₆ Posts
I'm not familiar with the taskset command and its arguments. I've only set CPU affinity on Windows systems, but assuming you want to avoid hyperthreading, you need to set the affinity to use only physical CPUs (which I also thought were the even-numbered CPUs). On a 2-socket system with 10 cores/20 threads per CPU (20 physical cores, 40 logical CPUs to Linux), wouldn't it be best to run 8 threads per task, each pinned to physical cores, i.e. two 8-thread tasks, one on each CPU socket? Wouldn't you use this?

Code:
taskset -c 0,2,4,6,8,10,12,14 <task 1>       # 8 physical cores for task 1 on CPU 0
taskset -c 20,22,24,26,28,30,32,34 <task 2>  # 8 physical cores for task 2 on CPU 1

That way memory use is isolated to the DIMM banks wired to that CPU and doesn't have to go across the bridge?

#284
"Curtis"
Feb 2005
Riverside, CA
2²·1,151 Posts
Every Linux install I've used (admittedly, 90% Ubuntu) has cores numbered the way fivemack explained.

Also, CADO responds well to HT use: using 20 threads for CADO (over a couple of clients) is 20-25% faster than 10 threads on a 10-core machine. My dual 10-core is running four 5-threaded clients on one socket, with the other socket solving a large matrix.
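A hedged sketch of what that per-socket split might look like; the client script name is the stock CADO-NFS client, the server URL is a placeholder, and whether each instance needs its own client id or working directory should be checked against the client's --help:

Code:
# Four 5-threaded clients bound to socket 0's cores and local memory
# (numactl keeps allocations on the local DIMM banks, avoiding the
# cross-socket hop raised above); socket 1 is left free for the matrix.
for _ in 1 2 3 4; do
    numactl --cpunodebind=0 --membind=0 \
        ./cado-nfs-client.py --server=https://example.org:8012 &
done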

#285
Jul 2019
32 Posts
Code:
/sys/devices/system/cpu/cpu0/topology $ grep "^" *
core_id:0
core_siblings:0003ff,f0003fff
core_siblings_list:0-13,28-41
physical_package_id:0
thread_siblings:000000,10000001
thread_siblings_list:0,28

So, my dual E5-2690 v4, with 56 threads, reports cpu0 as a thread sibling of cpu28, whereas:

Code:
/sys/devices/system/cpu/cpu27/topology $ grep "^" *
core_id:14
core_siblings:fffc00,0fffc000
core_siblings_list:14-27,42-55
physical_package_id:1
thread_siblings:8000000,08000000
thread_siblings_list:27,55
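To dump the whole map at once, a small sysfs walk works on any modern Linux (these paths are standard; lscpu --extended prints a similar table):

Code:
# Print each logical CPU's hyperthread sibling(s) and physical package;
# useful before choosing a taskset mask on an unfamiliar machine.
for t in /sys/devices/system/cpu/cpu[0-9]*/topology; do
    cpu=$(basename "$(dirname "$t")")
    printf '%s: siblings %s, package %s\n' "$cpu" \
        "$(cat "$t/thread_siblings_list")" \
        "$(cat "$t/physical_package_id")"
done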

#286
"Curtis"
Feb 2005
Riverside, CA
2²·1,151 Posts
Q from 8-120M: 629M unique. I had a few (maybe ten) workunits in the 104M range that gzip puked on with "unexpected end of file". Those are excluded from the count, pending further investigation / repair of the files. If there's a simple repair command, please suggest it.
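For what it's worth, gzip has no built-in repair, but a merely truncated file can usually be salvaged up to the break; a hedged sketch with placeholder filenames:

Code:
# zcat decompresses until it hits the corruption, exits non-zero on the
# truncated tail, and leaves everything read so far in the output file.
zcat broken.gz > salvaged.txt || echo "stopped at corrupt tail"
wc -l salvaged.txt   # relation lines actually recovered
# For damage mid-stream (not just truncation), the third-party
# gzrecover tool from the gzip recovery toolkit (gzrt) may help.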