SSE2?
Hi!
Does the siever use SSE2? If not, I would like to stay with 'my' P IVs running prime95 and perhaps use one or two Athlons here... Are there any performance comparisons (P III, Athlon, P IV, chipsets) online? I'd like the machines to help those projects that are best suited to their architecture... Thanks for providing such an interesting project here...

Tau
Right now the siever does not use SSE2 instructions.
There is no benchmarking information available online. With the NFS algorithm, sieving lines take different amounts of time within the same project, and timings also differ between projects. What we have seen so far is that the best performance per MHz comes from PowerPC processors; Athlons are very good, P4s are a little slower than an Athlon of the same clock speed, and P3s still do quite well. Larger L2 caches help as well. The speed of the client does not scale linearly with MHz either, so a CPU with twice the MHz won't necessarily be twice as fast.

Jeff.
So, how does my PPC G4 (800 MHz, 256 KB L2, 1 MB L3 cache) stack up, doing ~2400 lines a day on a partly used machine? Compared to, say, a Northwood or a Barton?
On the current project my Athlon MP 2600+ does about 8000 lines per day (single CPU). My P3 800 does about 4000 lines per day. I don't have any P4 figures right now.
Jeff.
On the current project:
P4 2.4 GHz, Dell machine: 7801 lines per day (if I do not use it for anything else).
About 4000 lines/day for my Athlon 1200 with PC133 RAM. I hope I will be able to test a new P IV 2.6 GHz HT with dual-channel DDR400 (i875 chipset) with NFSNET next week.
Is there any information so far on whether hyperthreading works with the software?
[quote="TauCeti"]Is there any information so far on whether hyperthreading works with the software?[/quote]
The code is not specially written to make the best use of hyperthreading. My guess is that HT will help slightly if a single client is running in the background while the machine is doing other things at the same time. I suspect that running two clients will produce relations at a slightly slower rate than running only one. The reasoning is that the client is heavily memory bound and has been written to make particularly effective use of the L1 and L2 caches, so the threads from two instances of the client will be fighting over who gets the caches. It may be an interesting experiment to try, but I don't have an HT processor available to carry it out.

A dual-processor 1 GHz PIII seems to do around 3600 lines per day per CPU on the 10_227M_1 project. Our cluster at MSR Cambridge is built out of 16 such dual-procs and, when it is not running the linear algebra, I set 10 nodes (i.e. 20 CPUs) running as NFSNET clients.

Paul
Thanks for the input. I was not thinking of running two clients, but of the impact HT has on one running client.
I have checked with an HT-enabled system today. HT does not seem to make any significant difference with one client. Main-memory bandwidth _does_ seem to make a big difference: I have tested on a 3 GHz i845 machine with FSB-630 (memory bandwidth about 3400 MB/s) and on a 2.85 GHz i875 machine with FSB-880 and 2-3-3-6 timings (memory bandwidth about 5200 MB/s). Memory bandwidth was measured with SiSoft Sandra. The 3 GHz i845 crunches about 10200 lines/day and the 2.85 GHz i875 smokes about 11400 lines/day in the current project.

One other amusing number: the i875 machine draws 70 W idle and 124 W under load with NFSNET. Prime95 draws 137 W (edit: 145 W was with 3.1 GHz and FSB800) ;)

Cheers, Tau
[quote="TauCeti"]Main-memory bandwidth _does_ seem to make a big difference: I have tested on a 3 GHz i845 machine with FSB-630 (memory bandwidth about 3400 MB/s) and on a 2.85 GHz i875 machine with FSB-880 and 2-3-3-6 timings (memory bandwidth about 5200 MB/s). Memory bandwidth was measured with SiSoft Sandra.[/quote]
I expected there to be a difference, but I'm surprised it's that much. I don't really have enough different systems to test the variation myself, so thanks for posting these figures.

Paul