Do you know why 2.1 is worse than 2.0?
Edit: So a 460's slower than a 465/470/480 by nature of its compute capability.... [url]http://developer.nvidia.com/cuda-gpus[/url]
[QUOTE=Dubslow;281241]Do you know why 2.1 is worse than 2.0?
Edit: So a 460's slower than a 465/470/480 by nature of its compute capability.... [url]http://developer.nvidia.com/cuda-gpus[/url][/QUOTE] CC 2.1 relies on ILP (instruction-level parallelism); one could say NVIDIA saved an instruction scheduler there. mfaktc has a lot of dependent instructions (carry flag), so ILP doesn't help here. Of course I could write a kernel without the use of the carry flag, but my guess is that it would be much slower on all architectures. CC 2.x is much better than CC 1.x for mfaktc because 2.x can do int32 multiplication natively while CC 1.x can't. Oliver
Hmm. I understand the third line, but the first two are beyond me...
:) And that's why I'm not the developer.
Timing run results #2
To James Heinrich:
This is from my partner's machine. It's an i7 920, 2811MHz, 3GB RAM, XP 32bit. I did turn off the Turbo for this run. [CODE]Asus 9600 GT, fanless GPU @ 650MHz Usage 99% Factor M54097591, No factor found, 70-71 Time/class 18.08s Total Time 4h 51m 42s Affinity not set 3,3,3 on the Streams and GridSize AllowSleep=0[/CODE] Obviously, the CPU is twiddling its non-existent thumbs waiting on this card. SievePrimes stuck at 200,000, average wait 10,300. I just threw this in to give a low-end marker. This box is going to be Win7-64 with 9GB RAM before too much longer.
[QUOTE=TheJudger;281224]Well, this might be not so easy...[LIST][*]single instance of mfaktc will measure [B]CPU[/B] performance, not GPU performance for the highend GPUs[/LIST][/QUOTE]
What Oliver said. I have 4x GTX580s in my setup:[LIST][*]2 of them are installed in an i7-2600k@4.5GHz - last 10-day average 564.6GHz-days/day combined, SievePrimes=5000, GPU usage 96-98%, need more CPU :( (all 4 cores used)[*]1 installed in an i7-920@2.8GHz - last 10-day average 315.9GHz-days/day, SievePrimes=12000, GPU load 99% (all 4 cores used)[*]1 installed in an AMD FX8120@3.8GHz - last 4-day average 201.6GHz-days/day, SievePrimes=5000, GPU load 72% (Sorry, I can't recommend this CPU at all for any reason)[/LIST] -- Craig
[QUOTE=Dubslow;281241][url]http://developer.nvidia.com/cuda-gpus[/url][/QUOTE]Thanks, that was helpful (... although contained more than one conflicting datum; I'm not sure it's 100% accurate).
[QUOTE=TheJudger;281224]Well, this might be not so easy...[LIST][*]compute capability 1.0 (G80 chip): wont work[*]compute capability 1.1-1.3: same speed[*]compute capability 2.0: currently best GFLOPS/mfaktc performance[*]compute capability 2.1: ~20-35% slower than 2.0 for same GFLOPS[/LIST][/QUOTE]You're right. It wasn't easy, and I concur with your performance conclusions. There are many factors that affect it (overclocked GPU speed, SievePrimes setting, the CPU powering it, etc.) so my numbers are naturally quite rough, but compiling all the results gives a general pattern. I'm using these approximated multipliers for GFLOPS to GHz-days/day: v1.1-1.3 = 14.0, v2.0 = 5.0, v2.1 = 7.5. I'm pretty confident about the v1.1 results (3 very close results), less so about v2.0 and 2.1, but it's at least in the ballpark. My chart is now scaled according to the compute version: [url]http://mersenne-aries.sili.net/mfaktc.php[/url]
Anybody have experience with the Linux [URL="http://www.nvidia.com/object/linux-display-ia32-290.10-driver.html"]290.10[/URL] driver from nVidia?
[QUOTE=James Heinrich;281409]I'm using these approximated multipliers for GFLOPS to GHz-days/day[/QUOTE]And by that, of course, I mean the complete opposite. :blush:
e.g. 8800GT = v1.1 @ 504 GFLOPS. 504 / 14 = 36 GHz-days/day expected.
[QUOTE=RichD;281512]Anybody have experience with the Linux [URL="http://www.nvidia.com/object/linux-display-ia32-290.10-driver.html"]290.10[/URL] driver from nVidia?[/QUOTE]
I would love to tell you, but for whatever reason, using the nVidia install file crashes my GUI (Ubuntu 11.04). (This applies to previous drivers as well.)
[QUOTE=Dubslow;281590]I would love to tell you, but for whatever reason, using the nVidia install file crashes my GUI (Ubuntu 11.04). (This also applies for previous drivers as well.)[/QUOTE]
Probably a conflict between the nouveau driver and the driver from NVIDIA. It's not enough to disable the nouveau driver before installing any NVIDIA driver; you also have to prevent the nouveau kernel module from loading at boot by blacklisting it. This PPA might be useful for you: [URL="https://launchpad.net/%7Eubuntu-x-swat/+archive/x-updates"]https://launchpad.net/~ubuntu-x-swat/+archive/x-updates[/URL]
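For reference, the usual way to blacklist nouveau on Ubuntu is a modprobe config fragment plus an initramfs rebuild. This is a sketch of the conventional steps (run as root; the config file name is a convention, not required), followed by a reboot before running the NVIDIA installer:

```shell
# Tell modprobe never to load nouveau, and disable its kernel
# modesetting so it cannot grab the console at boot.
cat > /etc/modprobe.d/blacklist-nouveau.conf <<'EOF'
blacklist nouveau
options nouveau modeset=0
EOF

# Rebuild the initramfs so nouveau is not loaded during early boot,
# then reboot before running the NVIDIA .run installer.
update-initramfs -u
```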
Installing the nvidia-current package fixes the GUI. I remember at install time I elected to install proprietary drivers. I'll take a look though and see what I can do.
Powered by vBulletin® Version 3.8.11
Copyright ©2000 - 2021, Jelsoft Enterprises Ltd.