mersenneforum.org (https://www.mersenneforum.org/index.php)
-   Hardware (https://www.mersenneforum.org/forumdisplay.php?f=9)
-   -   Which of these CPUs is most productive? (https://www.mersenneforum.org/showthread.php?t=14745)

Uncwilly 2011-02-05 17:24

[QUOTE=Christenson;251425]If the goal is to maximize the long-term throughput, is it better to have more threads doing disparate tasks or more helper cores pushing the tasks through fewer threads at once?[/QUOTE]Best throughput is generally achieved by having each physical core doing its own test.

Brain 2011-02-05 21:12

Cannot deactivate SMT / HT
 
[QUOTE=Uncwilly;251426]Best throughput is generally achieved by having each physical core doing its own test.[/QUOTE]
My notebook's BIOS doesn't offer a way to disable hyperthreading, so I'm forced to run 1 test per logical core. It's an Intel Core i7-2630QM (4 physical cores, 8 logical).
I assume best throughput is still achieved using no helper threads? I haven't been able to compare the two configurations yet.
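Since HT can't be switched off here, the practical fallback is pinning one worker to one logical CPU of each physical core. A minimal sketch of picking those CPUs, assuming the common pairing where logical CPUs n and n+1 share a core (an assumption for illustration; on Linux the real sibling lists can be read from /sys/devices/system/cpu/cpu*/topology/thread_siblings_list):

```python
# Sketch: choose one logical CPU per physical core for affinity pinning.
# The sibling lists below are an assumed mapping for a 4-core/8-thread
# CPU; real topologies vary, so read them from the OS in practice.

def one_cpu_per_core(sibling_lists):
    """Return the lowest-numbered logical CPU of each physical core."""
    cores = sorted({tuple(sorted(s)) for s in sibling_lists})
    return [core[0] for core in cores]

# Assumed HT pairing: logical CPUs 0-1, 2-3, 4-5, 6-7 share cores.
siblings = [[0, 1], [0, 1], [2, 3], [2, 3],
            [4, 5], [4, 5], [6, 7], [6, 7]]
print(one_cpu_per_core(siblings))  # -> [0, 2, 4, 6]
```

The resulting list is what you would feed to whatever affinity mechanism your client supports; the point is simply that the second sibling of each pair is left idle so the FFT workers never compete for one core's execution units.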

Mini-Geek 2011-02-05 21:42

[QUOTE=Brain;251444]My notebook's BIOS doesn't offer a way to disable hyperthreading, so I'm forced to run 1 test per logical core. It's an Intel Core i7-2630QM (4 physical cores, 8 logical).
I assume best throughput is still achieved using no helper threads? I haven't been able to compare the two configurations yet.[/QUOTE]

For non-FFT work (e.g. TF, sieving, NFS sieving), one worker/instance per logical core is usually best.
For FFT work (e.g. LL, P-1), one worker per physical core is usually best. You could run either 4 workers with two threads each (keep the default affinities, which makes each worker use both threads of its own physical core), or 4 workers with one thread each and the affinities set to 0, 3, 5, 7. The latter can be done with the AffinityScramble=0357 option in prime.txt (see undoc.txt for more info). AFAIK, which of the two is faster varies from machine to machine, so it's worth benchmarking both on yours.
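The second layout could be sketched in prime.txt roughly as follows; the digit string is taken from the post above, and the exact placement and semantics of this undocumented option should be checked against your version's undoc.txt:

```
; Sketch: 4 single-threaded workers, one pinned per physical core.
; AffinityScramble is an undocumented option (see undoc.txt); each
; digit gives the logical CPU assigned to the corresponding worker.
AffinityScramble=0357
```

The first layout (4 workers, two threads each) needs no affinity entry at all, since the default affinities already keep each worker on both logical CPUs of its own physical core.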


