mersenneforum.org  

mersenneforum.org > Factoring Projects > Cunningham Tables

Old 2019-06-29, 09:26   #276
pinhodecarlos
 
 
"Carlos Pinho"
Oct 2011
Milton Keynes, UK

11172₈ Posts

Quote:
Originally Posted by swellman
I too cried Havoc!, and let slip the dogs of war. Or something. Despite its name, my machine DESKTOP-C5KKONV is also a laptop. Just chewing up WUs and spitting out relations!

It’s fun to contribute, even if it’s only a very small part. The cloudygo site keeps the effort fresh - thanks SethTro. I hope Vebis and buster return soon.
I’m running for the 1 CPU years badge.
Old 2019-07-02, 04:56   #277
SethTro
 
 
"Seth"
Apr 2019

265₈ Posts

Quote:
Originally Posted by swellman
I too cried Havoc!, and let slip the dogs of war. Or something. Despite its name, my machine DESKTOP-C5KKONV is also a laptop. Just chewing up WUs and spitting out relations!

It’s fun to contribute, even if it’s only a very small part. The cloudygo site keeps the effort fresh - thanks SethTro. I hope Vebis and buster return soon.
I'm glad you like it! Let me know how it can be improved; I'd love to add some more things.

Maybe a 'Primes' badge for turning in a WU with an X-prime number of relations
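For what it's worth, such a check could be sketched in shell with coreutils' factor, which prints a number's factorization on one line; the badge rule itself is hypothetical:

```shell
# Hypothetical badge check: a relation count qualifies iff `factor`
# prints exactly one factor after the colon (i.e. the number is prime).
is_prime() { [ "$(factor "$1" | awk '{ print NF - 1 }')" -eq 1 ]; }

is_prime 17 && echo "17 relations: Primes badge"
is_prime 18 || echo "18 relations: no badge"
```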
Old 2019-07-02, 05:25   #278
SethTro
 
 
"Seth"
Apr 2019

181 Posts

Quote:
Originally Posted by fivemack
Median runtimes in various configurations, on the same hardware (fortunately I have three identical computers)

Code:
One job -t32                         1090s = 2180s for two
Two jobs -t8                                 2132s/2
Two jobs -t16 taskset 0-15; 16-31            1742s/2
Two jobs -t16 taskset 0-7,16-23; 8-15,24-31  1915s/2
So, on these dual-socket eight-core machines, the right answer is to run two jobs, one across both sockets and the other on the other hyperthread across both sockets; I think I'd expected two jobs to be better than one but am a bit surprised that having both jobs use both sockets is significantly better.
I'm curious what the best way would be to test this with only one machine.
I managed to get timestamps by adding
Code:
|& awk '{ print strftime("%Y-%m-%d-%H:%M:%S  ||  ", systime()), $0 }'
to the end of my commands
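If your awk lacks strftime (mawk, for instance, does not have it), a pure-shell equivalent of the same timestamping, with date standing in for strftime, is:

```shell
# Prefix every line of a pipeline with a timestamp, like the awk one-liner
# above; uses only POSIX sh and date.
stamp() {
  while IFS= read -r line; do
    printf '%s  ||  %s\n' "$(date '+%Y-%m-%d-%H:%M:%S')" "$line"
  done
}

echo "found 1234 relations" | stamp
```

As with |& in the original, you'd pipe with 2>&1 | stamp to capture stderr as well.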

Would I just run each configuration for a couple of hours and see how long between WUs?

---

2nd question: Is there a way to gracefully end a task? Other programs I've used allow you to press Ctrl+X to "quit after the current WU is finished". I turn my server off occasionally and I don't like killing 8 tasks (or ~4 amortized WUs) each time I do that.

3rd question: If a WU completed but the server upload failed, is there an easy way to upload those results? I have about 10 WUs in this state, which isn't a lot but annoys me
Old 2019-07-02, 06:12   #279
VBCurtis
 
 
"Curtis"
Feb 2005
Riverside, CA

2²×1,091 Posts

I know of no way to submit WUs outside of the Python client, but you might have a look at the client code itself to see how submissions are handled and try to submit one manually. Be warned that WUs are reissued after a few hours, so a WU that's more than, say, 8 hours stale has already been reissued and run by someone else. So it may not be worth your time to learn how to submit a stale one.

I've been playing with # of threads and taskset myself, using your "relations in last 24 hr" as my metric. Seems easier than a timestamp, especially since each WU finds a variable number of relations so you'd have to divide relations by time every time anyway. I'm getting more production with 9 threads on a 6-core i7-5820k than I did with 6 threads, by about 10%; this blows a hole in my idea that exceeding 8 threads on one client has diminishing returns. Haven't tried 10-12 threads yet; I resumed the other tasks that also run on this machine instead.
Edit: note that relations get marginally more difficult to find as Q increases, so your production rate will drift lower over the course of weeks. If you do such tests, do them on consecutive days.

remdups update: Q from 8-100M 545M unique, 234M duplicate, 779M total. Still zero bad relations. Duplicate rate now 30% overall, not great but yield (relations divided by Q-range) is still better than expected. We may have to bump the relations target up a bit if the duplicate rate continues to worsen (it will).
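The 30% figure follows directly from those counts; as a one-liner:

```shell
# Duplicate rate = duplicates / total relations, from the remdups counts above
awk 'BEGIN { printf "%.1f%% duplicate\n", 100 * 234 / 779 }'
```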

Last fiddled with by VBCurtis on 2019-07-02 at 06:14
Old 2019-07-02, 07:45   #280
DukeBG
 
Mar 2018

81₁₆ Posts

Quote:
Originally Posted by fivemack
Median runtimes in various configurations, on the same hardware (fortunately I have three identical computers)

Code:
One job -t32                         1090s = 2180s for two
Two jobs -t8                                 2132s/2
Two jobs -t16 taskset 0-15; 16-31            1742s/2
Two jobs -t16 taskset 0-7,16-23; 8-15,24-31  1915s/2
So, on these dual-socket eight-core machines, the right answer is to run two jobs, one across both sockets and the other on the other hyperthread across both sockets; I think I'd expected two jobs to be better than one but am a bit surprised that having both jobs use both sockets is significantly better.
Are you sure you're correctly specifying which CPU cores are physical and which are HT? I'm more used to the HT cores being odd-numbered and the physical ones even-numbered.
Old 2019-07-04, 05:19   #281
lukerichards
 
 
"Luke Richards"
Jan 2018
Birmingham, UK

2⁵×3² Posts

Code:
z600 lucky 26693 weeks 6 CPU-years 1	2457	37659341 (4.2% total)	388.5	1.122	829670	2019-07-03 21:48:07,117
lukerichards-<COMP> unlucky 4411 weeks 5	2719	37466196 (4.2% total)	252.1	1.72	1026538	2019-07-03 21:45:26,759

Note the race for 7th place is hotting up.
Old 2019-07-04, 21:45   #282
fivemack
(loop (#_fork))
 
 
Feb 2006
Cambridge, England

18B0₁₆ Posts

Quote:
Originally Posted by DukeBG
Are you sure you're correctly specifying which CPU cores are physical and which are HT? I'm more used to the HT cores being odd-numbered and the physical ones even-numbered.
Yes I am; this is a Linux machine (two sockets, 10 cores per socket).
Old 2019-07-04, 22:48   #283
scole
 
Dec 2017
Goldsboro, NC

2·3·7 Posts

I'm not familiar with the taskset command and its arguments; I've only set CPU affinity on Windows systems. Assuming you want to avoid hyperthreading, you need to set the affinity to use only the physical CPUs (which I also thought were the even-numbered ones). On a 2-socket system with 10 cores/20 threads per CPU (20 physical cores, 40 logical CPUs to Linux), wouldn't it be best to run 8 threads per task, each on physical cores, i.e. two 8-thread tasks, one per socket? Wouldn't you use this?

taskset 0,2,4,6,8,10,12,14 (8 physical cores for task 1 on CPU 0)
taskset 20,22,24,26,28,30,32,34 (8 physical cores for task 2 on CPU 1)

That way memory use is isolated to the DIMM banks wired to each CPU and doesn't have to go across the bridge?
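For reference, taskset takes the CPU list via -c, followed by the command to launch; a minimal sketch (the client script name below is a placeholder, not the real client):

```shell
# -c takes a comma/range list of logical CPUs; the command to run follows.
# Pin a trivial command to CPU 0 just to show the syntax:
taskset -c 0 true && echo "pinned OK"

# The two 8-thread clients sketched above would then look like
# (client path hypothetical):
#   taskset -c 0,2,4,6,8,10,12,14     ./wuclient.sh &
#   taskset -c 20,22,24,26,28,30,32,34 ./wuclient.sh &
```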
Old 2019-07-05, 03:34   #284
VBCurtis
 
 
"Curtis"
Feb 2005
Riverside, CA

110C₁₆ Posts

Every Linux install I've used (admittedly, 90% Ubuntu) has cores numbered the way fivemack explained.

Also, CADO responds well to HT use; using 20 threads for CADO (on a couple of clients) is 20-25% faster than 10 threads on a 10-core machine. My dual 10-core is using 4 5-threaded clients, with the other socket solving a large matrix.
Old 2019-07-06, 13:55   #285
Mumps
 
Jul 2019

3² Posts

Quote:
Originally Posted by scole
I'm not familiar with the taskset command and its arguments; I've only set CPU affinity on Windows systems. Assuming you want to avoid hyperthreading, you need to set the affinity to use only the physical CPUs (which I also thought were the even-numbered ones). On a 2-socket system with 10 cores/20 threads per CPU (20 physical cores, 40 logical CPUs to Linux), wouldn't it be best to run 8 threads per task, each on physical cores, i.e. two 8-thread tasks, one per socket? Wouldn't you use this?

taskset 0,2,4,6,8,10,12,14 (8 physical cores for task 1 on CPU 0)
taskset 20,22,24,26,28,30,32,34 (8 physical cores for task 2 on CPU 1)

That way memory use is isolated to the DIMM banks wired to each CPU and doesn't have to go across the bridge?
On Ubuntu/Mint, you can use /sys/devices/system/cpu to verify your system topology. Each thread has a folder in there, and within that is a folder named topology.


Code:
/sys/devices/system/cpu/cpu0/topology $ grep "^" *
core_id:0
core_siblings:0003ff,f0003fff
core_siblings_list:0-13,28-41
physical_package_id:0
thread_siblings:000000,10000001
thread_siblings_list:0,28
So my dual E5-2690 v4, with 56 threads, reports cpu0 as a thread sibling of cpu28, whereas
Code:
/sys/devices/system/cpu/cpu27/topology $ grep "^" *
core_id:14
core_siblings:fffc00,0fffc000
core_siblings_list:14-27,42-55
physical_package_id:1
thread_siblings:8000000,08000000
thread_siblings_list:27,55
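A quick way to turn those topology files into a taskset-ready list of one logical CPU per physical core (Linux only; a sketch assuming the /sys paths above):

```shell
# Keep only the first sibling of each core's thread_siblings_list
# ("0,28" and "0-1" both reduce to "0"), dedupe numerically, join with commas.
for f in /sys/devices/system/cpu/cpu[0-9]*/topology/thread_siblings_list; do
  cut -d, -f1 "$f" | cut -d- -f1
done | sort -un | paste -sd, -
```

The output can be passed straight to taskset -c.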
Old 2019-07-06, 22:43   #286
VBCurtis
 
VBCurtis's Avatar
 
"Curtis"
Feb 2005
Riverside, CA

2²·1,091 Posts

Quote:
Originally Posted by VBCurtis
remdups update: Q from 8-100M 545M unique, 234M duplicate, 779M total. Still zero bad relations.
Q from 8-120M: 629M unique. I had a few (maybe ten) workunits in the 104M range that gzip puked on with "unexpected end of file". Those are excluded from the count, pending further investigation / repair of the files. If there's a simple repair command, please suggest it.
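There's no true repair for a truncated gzip stream, but gzip -dc still emits everything it can decode before hitting the error, so the recoverable prefix of each file can be salvaged. A self-contained demo, assuming truncation is the failure mode:

```shell
# Make a .gz, truncate it to simulate the damage, then salvage the prefix.
seq 1 200000 | gzip > demo.gz
head -c "$(( $(wc -c < demo.gz) / 2 ))" demo.gz > broken.gz

gzip -t broken.gz 2>/dev/null || echo "broken.gz: damaged, as expected"
gzip -dc broken.gz > salvaged.txt 2>/dev/null || true
head -n 1 salvaged.txt    # the decodable prefix of the relations survives
```

The last line of the salvaged file may be cut mid-relation, so it should be dropped before feeding the file to remdups.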