I sent him a copy of Greg's messages and told him again to create an account here. I also told him to contact Lionel to ask for access to the files, in case he wants to do some post-processing. I think he will, because all his machines are "test lab" equipment and he is stressing them. I told him that msieve will push the CPUs even harder; I even said it was harder on them than LLR.
|
During Lanczos, every thread has some idle time, so the core temperatures will be quite a bit lower than during LLR. Some other subsystems (memory and memory controller) will be tortured harder. If there's a separate sensor on the CPU NB, it will show some heat. As torture goes, these tests are complementary.
|
[QUOTE=Batalov;308016]During Lanczos, every thread has some idle time, so the core temperatures will be quite a bit lower than during LLR. Some other subsystems (memory and memory controller) will be tortured harder. If there's a separate sensor on the CPU NB, it will show some heat. As torture goes, these tests are complementary.[/QUOTE]
The temps are lower, but an overclocked CPU fails faster than with LLR. Do those machines have the ability to be overclocked, or do they run only at stock frequency? Those are blades, etc... |
I would like to see a company that overclocks servers! :rakes:
|
[QUOTE=Batalov;308018]I would like to see a company that overclocks servers! :rakes:[/QUOTE]
I know, it was an ignorant question. |
Note that MPI msieve needs careful tuning when run on a large SMP machine: it's easy for the OS to balance the load incorrectly across the cores, and easy for it to shuffle MPI processes around after they have allocated their memory. frmky has reported that you even have to disable cron jobs; fivemack has [url="http://fivemack.livejournal.com/226160.html"]a post[/url] on what he had to do.
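As an illustration of that tuning, a launch line might look something like the sketch below. This is only a hypothetical example: the flag names assume Open MPI, and the `-nc2 4,4` grid argument is from memory, so check msieve's MPI build notes and your MPI implementation's documentation before relying on any of it.

```shell
# Sketch only (assumed Open MPI flags; verify against msieve's docs).

# Stop cron so stray jobs cannot push MPI ranks off their cores.
sudo systemctl stop cron

# Bind each rank to a core so the OS cannot migrate it away from the
# memory it has allocated; run a 4x4 process grid for the matrix step.
mpirun -np 16 --bind-to core ./msieve -nc2 4,4 -v
```

The point of the binding is exactly the problem described above: once a rank has touched its memory on one NUMA node, moving it to a core on the other node makes every matrix access remote.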
|
[QUOTE=jrk;307786]Reserving 44371_43_minus1[/QUOTE]
Done. Factors are in the OPN factors thread. |
I've just started 389_95_minus1 to test how MPI works on a dual-CPU Xeon E5620 server. For a 3.6M^2 matrix I got:
ETA 19 hrs - MPI version (4x4)
ETA 23 hrs - 16 threads
ETA 28 hrs - 8 threads
ETA 31 hrs - 4 threads
When the MPI version works, the server seems *very busy*. |
[QUOTE=unconnected;308119]I've just started 389_95_minus1 to test how MPI works on a dual-CPU Xeon E5620 server. For a 3.6M^2 matrix I got:
ETA 19 hrs - MPI version (4x4)
ETA 23 hrs - 16 threads
ETA 28 hrs - 8 threads
ETA 31 hrs - 4 threads
When the MPI version works, the server seems *very busy*.[/QUOTE] What about running two MPI jobs with the same number of threads each? The E5620 is a 4C/8T processor; when you say threads, do you mean threads or cores? You have a dual-socket system, so 8C/16T... |
When I say threads I mean threads, not cores.
BTW, postprocessing is completed: [CODE]prp75 factor: 268853625856147421086944544035343580437649850239242201121922967793590837891
prp105 factor: 586175223206963889564997276172745574126823744066460572905036034158781676507405862687249466261669679801371
[/CODE] |
[QUOTE=unconnected;308119]
ETA 28 hrs - 8 threads
ETA 31 hrs - 4 threads
[/QUOTE] With your results we can say that, in terms of energy efficiency, it's better to dedicate 4 threads to each number: doubling the number of threads does not halve the ETA. I conclude it's better to run factorizations in parallel on a dual-CPU Xeon E5620 server. |
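The throughput argument can be checked with a bit of arithmetic on the ETAs reported above (a sketch; the hour figures are taken from unconnected's post):

```shell
# Reported ETAs in hours: 4 threads -> 31, 8 threads -> 28.
# Two concurrent 4-thread jobs occupy 8 threads and finish 2 matrices
# in 31 h wall time, i.e. 15.5 h per matrix, versus 28 h per matrix
# for 8-thread jobs run one at a time.
awk 'BEGIN {
  eta4 = 31; eta8 = 28
  printf "parallel: %.1f h/matrix, serial: %d h/matrix\n", eta4 / 2, eta8
}'
```

So even though the 8-thread run is faster on a single matrix, two 4-thread runs side by side give nearly twice the throughput per watt-hour of server time.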