Old 2011-01-25, 23:57   #1
Flatlander
I quite division it
 
"Chris"
Feb 2005
England

31·67 Posts
Help choosing motherboard please.

Hi
I'm putting together a shopping list to build an i7-2600K box, but I want to make it reasonably future-proof as far as GPU computing is concerned. If LLR on a GPU is going to run at multiple-CPU-core speed, then I want the option of installing a quality GPU at a reasonable price to take advantage of that.
In the past I have only bought cheap GPUs, even using onboard graphics where possible, so I am a total newbie!

Questions:
1) If I understand the llrCUDA thread correctly, the speed is currently comparable to a single CPU core. Is it now just a matter of tweaking, or is there a possibility that the speed could improve several-fold in the near future?

Similarly:
2) I've narrowed it down to three Gigabyte boards that sound good. From an llrCUDA point of view (not sieving), should I just get the cheapest of these three, or would it be wise to pay more?

Gigabyte P67 boards here

Code:
GA-P67A-UD3 £112.73
1 x PCI Express x16 slot, running at x16 (PCIEX16)
* For optimum performance, if only one PCI Express graphics card is to be installed, be sure to install it in the PCIEX16 slot.
1 x PCI Express x16 slot, running at x4 (PCIEX4)
* When the PCIEX1_2 or PCIEX1_3 slot is populated with an expansion card, the PCIEX4 slot will operate at up to x1 mode.
3 x PCI Express x1 slots
(All PCI Express slots conform to PCI Express 2.0 standard.)
...

GA-P67A-UD3P £138.26
1 x PCI Express x16 slot, running at x16 (PCIEX16)
* For optimum performance, if only one PCI Express graphics card is to be installed, be sure to install it in the PCIEX16 slot.
1 x PCI Express x16 slot, running at x4 (PCIEX4)
* The PCIe x1 slots share bandwidth with the PCIEX4 slot. When one of the PCIe x1 slots is populated, the PCIEX4 slot will operate at up to x1 mode.
3 x PCI Express x1 slots
(All PCI Express slots conform to PCI Express 2.0 standard.)
...


GA-P67A-UD4 £153.47
1 x PCI Express x16 slot, running at x16 (PCIEX16)
* For optimum performance, if only one PCI Express graphics card is to be installed, be sure to install it in the PCIEX16 slot.
1 x PCI Express x16 slot, running at x8 (PCIEX8)
* The PCIEX8 slot shares bandwidth with the PCIEX16 slot. When the PCIEX8 slot is populated, the PCIEX16 slot will operate at up to x8 mode.
3 x PCI Express x1 slots
(All PCI Express slots conform to PCI Express 2.0 standard.)
...
Old 2011-01-26, 00:09   #2
mdettweiler
A Sunny Moo
 
Aug 2007
USA (GMT-5)

3×2,083 Posts

1. llrCUDA speed will probably improve multi-fold in the near future. It's still in the early development phase so there are a lot of "big" optimizations yet to be applied (if CUDALucas for LL tests has been any indication).

2. Believe it or not, strictly on the basis of the PCIe slots, the first (and cheapest) board looks best. Its x16 slot has its bandwidth all to itself (definitely important for a GPU) and the x4 slot also has full bandwidth as long as you don't use the second or third x1 slots. The second board is similar, except that all three x1 slots share bandwidth with the x4 slot (not quite as good). The third board should be OK as long as you don't plan on using the x8 slot at all. All in all, I'd go with the first one unless you plan to add more than one x1 expansion card (sound card, TV tuner, extra USB ports, etc.), in which case you'll want the third.
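For a rough sense of what those slot widths mean in bandwidth terms, here's a quick back-of-envelope sketch (assuming the usual figure of roughly 500 MB/s of usable bandwidth per PCIe 2.0 lane in each direction):

Code:
/* Back-of-envelope PCIe 2.0 bandwidth per slot width.
 * Assumes ~500 MB/s of payload per lane in each direction
 * (5 GT/s signalling with 8b/10b encoding). */
#include <stdio.h>

int main(void)
{
    const double mb_per_lane = 500.0;      /* approx. usable MB/s per PCIe 2.0 lane */
    const int widths[] = { 16, 8, 4, 1 };  /* the slot modes in the specs above */

    for (int i = 0; i < 4; i++)
        printf("x%-2d slot: ~%.1f GB/s each way\n",
               widths[i], widths[i] * mb_per_lane / 1000.0);
    return 0;
}
That works out to roughly 8 GB/s for x16, 4 GB/s for x8, 2 GB/s for x4 and 0.5 GB/s for x1, so a slot that drops to x1 mode is down to a small fraction of what the full x16 slot can move.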
Old 2011-01-26, 00:42   #3
Flatlander
I quite division it
 
"Chris"
Feb 2005
England

4035₈ Posts

Cheap is good!
So I would be running two cards, one for CUDA and one for the display? (The on-die GPU being disabled due to overclocking.)
Old 2011-01-26, 00:46   #4
mdettweiler
A Sunny Moo
 
Aug 2007
USA (GMT-5)

3×2,083 Posts

Quote:
Originally Posted by Flatlander
Cheap is good!
So I would be running two cards, one for CUDA and one for the display? (The on-die GPU being disabled due to overclocking.)
You could do it that way, or you could just run the monitor off the CUDA card. I've heard that some CUDA apps will grind the graphics to a halt, but in my experience working with Gary's GTX 460, there doesn't seem to be a huge impact. (This may be because the applications being used, namely ppsieve and CUDALucas, were not as fully optimized as they could be, leaving enough "room" for the graphics to operate smoothly. I expect llrCUDA will behave similarly, though ppsieve/tpsieve would depend on how much they've been optimized since then.)

Note also that some BIOSes will turn off any GPUs that don't have a monitor plugged into them. (That's what Gary ran into--he was originally going to run the monitor off the integrated graphics.) This should be configurable somewhere in the BIOS, but it can be as confusing as all heck if you're not aware of it.

If you have a spare graphics card (doesn't have to be fancy) sitting around, you may as well use that to run the monitor and be sure that the graphics won't freeze no matter what crunching app you run. If you plan to do this, you should stay away from mobo #3--its x8 PCIe slot (the one you'd want to stick the monitor's graphics card into) shares its bandwidth with the x16 slot, so running cards in both would impact the performance of the crunching GPU.
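If you do go the two-card route, the one thing worth checking is that the crunching app actually lands on the dedicated card rather than the one driving the monitor. Most CUDA crunching apps let you pick the device by index; here's a minimal sketch (just the standard CUDA runtime API, nothing specific to llrCUDA) that lists what the runtime sees so you know which index is which:

Code:
/* List the CUDA devices the runtime can see, so you can tell which
 * index belongs to the crunching card and which to the display card. */
#include <stdio.h>
#include <cuda_runtime.h>

int main(void)
{
    int count = 0;
    cudaGetDeviceCount(&count);

    for (int i = 0; i < count; i++) {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, i);
        printf("device %d: %s, %d multiprocessors, %.0f MB\n",
               i, prop.name, prop.multiProcessorCount,
               prop.totalGlobalMem / (1024.0 * 1024.0));
    }
    return 0;
}
Compile it with nvcc, run it once, and then point the crunching app at whichever index is the dedicated card using whatever device-selection option it offers (under the hood that's just a cudaSetDevice() call).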

Last fiddled with by mdettweiler on 2011-01-26 at 00:50
Old 2011-01-26, 08:15   #5
nucleon
 
Mar 2003
Melbourne

5×103 Posts

Generally, on the display-freezing front, the modern CUDA cards (GTX 4xx, GTX 5xx) don't seem to have the freezing issues. I have a GTX 460 running the TF CUDA app and I'm having no display freezing issues. The only issue is that if the display is doing something other than showing a static image (say, playing a video or some other 3D effect), the performance of the CUDA app can drop by up to 20%.

I can even play top-end 3D games (StarCraft II) while the app happily works away in the background using spare GPU cycles; the game plays fine.

If money is an issue, you can always get something like a GTX 460/560 and, if the performance isn't what you expect, add a cheap video card in the second slot at a later date.

BTW, I should add that I'm using a Core i7-930 on Win7 64-bit.

-- Craig

Last fiddled with by nucleon on 2011-01-26 at 08:16 Reason: added btw bit