2017-01-05, 18:40   #5
EdH ("Ed Hall", Dec 2009, Adirondack Mtns)

Thanks, fivemack! This does make a difference. After exporting the policy value on the host and the first slave, the msieve processes have increased their CPU usage: the host machine is up to just under 200% and the first slave is just over 200%. During this run, your taskset query returns:
Code:
pid 8003's current affinity mask: f
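
(For anyone following along: the export in question would be something along these lines, run in the shell on each machine before launching mpirun. I'm assuming Open MPI here, and the exact variable name is from memory, so treat this as a sketch.)
Code:
# Assumed Open MPI binding-policy override -- variable name from memory:
export OMPI_MCA_hwloc_base_binding_policy=none
# Re-check a running msieve process's CPU affinity by pid:
taskset -p 8003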
I'll clear the policy and see what I get with the machine in the earlier state...

OK, taskset now returns:
Code:
pid 8850's current affinity mask: 1
and top is back to showing <=100% for both msieve processes.
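
Clearing it was just a matter of unsetting that variable in the same shells and restarting the run, something like:
Code:
# Remove the override so mpirun falls back to its default binding:
unset OMPI_MCA_hwloc_base_binding_policy
# Then re-check the affinity of the new msieve process:
taskset -p 8850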

In answer to your other request, my mpi_hosts files follow this theme:

mpi_hosts111:
Code:
localhost slots=1
math59@192.168.0.58 slots=1
math59@192.168.0.60 slots=1

mpi_hosts221:
Code:
localhost slots=2
math59@192.168.0.58 slots=2
math59@192.168.0.60 slots=1
They appear to track directly with my grid values.
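
For context, the runs are launched with something like the following (typing from memory, so the msieve grid argument in particular may need adjusting):
Code:
# 3 slots total -> 3 MPI processes, one per slot:
mpirun -np 3 --hostfile mpi_hosts111 ./msieve -v -nc2 3,1
# 5 slots total -> 5 MPI processes across the three machines:
mpirun -np 5 --hostfile mpi_hosts221 ./msieve -v -nc2 5,1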

The host and the first slave are quad-core and the second slave is dual-core. Also, the host and first slave are maxed out at 4 GB of RAM, while the second slave has only 3 GB.

I will probably swap the second slave for a quad-core machine with more RAM, but the current second slave has the same architecture as the other two, which I thought might be an advantage at this point.