#1 | Jun 2010 | Pennsylvania
Hello,

Christenson over in the Operation Billion Digits forum has intrigued me with the idea of getting a GPU to perform various GIMPS/OBD functions. But in here I see threads with 939, 466, and 266 posts in them. I've spent an hour browsing through them, and the info is all over the place. Not surprising, really, since it seems to be a fast-developing area, but it's still intimidating if you're just starting to think about the concept.

Umm, is anybody (pretty please?) working on distilling the knowledge that's been accumulated on this subject into some kind of manual? Ideally it would provide guidance on the "what" and the "how" of doing this:

1. Which graphics cards can be used (maybe as a range, if there are too many to list individually).
2. What kinds of PCs/CPUs can handle which relevant GPUs. (Especially for older systems.)
3. Which programs are available to use, and on which OSes.
4. What types of GIMPS/OBD work can currently be performed.
5. When shopping, what to look for in both your PC and the prospective GPU so that you don't end up buying the wrong thing. (I'm thinking of technical factors that aren't self-evident to someone who's not an electrical engineer, such as amperage, "rails" and the like.) Maybe even guidelines for determining the most powerful graphics card someone can buy without having to perform major surgery such as changing the PSU.
6. How to load/install/use the software.
7. And so on. This isn't meant to be an exhaustive list!

I found this page http://mersennewiki.org/index.php/Mfaktc which is a start, but only that.

Motive: I read around here that GPUs can output several times the yield of even the fastest CPUs available today. A manual like this, with all the most current information put together systematically, would considerably lower the time investment needed to get into GPU computing, and thus help advance GIMPS/OBD (and the other computing projects here; sorry, I don't know that much about you) by that much.

Or maybe the whole field is as yet too new and experimental, wild and woolly, to be susceptible to such systematization? That would be perfectly understandable.

Rodrigo
#2 | Oct 2007 | Manchester, UK
Quote:
Almost all PCIe graphics cards are 16x slot cards, which means you will need either a physical 16x slot to put the card in, or an open-ended shorter slot so that the card can hang out the back of it. Most motherboards that use PCIe have at least one 16x slot, though. There are several versions of PCIe, but any PCIe graphics card should work in any PCIe slot (as long as the card physically fits into the slot).

GPGPU computing doesn't require a specific CPU. However, some programs use both the CPU and the GPU to perform tasks, while others use only the GPU and barely touch the CPU. Depending on the application used, your mileage may vary.

Quote:
If you are planning on getting a graphics card that requires either of these connectors (and many graphics cards require both), make sure your PSU has them, or at least get some converter cables. Power supplies will sometimes say they have a 6+2 pin connector instead of an 8 pin connector; this means two of the pins are detachable, so it can function as either a 6 pin or an 8 pin connector, but not both at the same time.

Do not confuse the PCIe 8 pin connector with the 8 pin CPU +12V connector; the latter is sometimes referred to as a 4+4 pin connector, for similar reasons. They are NOT interchangeable, as they are keyed differently.

The 6 pin connector can supply up to 75 W of power, and the 8 pin connector up to 150 W. The ATX specification limits the power a single card can draw to 300 W, but the recent top-end dual-GPU ATI cards (Radeon HD 6990) blow way past this limit and violate the ATX specification. More cards may continue this trend in future. Other high-end graphics cards can really push the 300 W limit under load, and a high-end CPU can hit 150 W with a little overclocking. That is before accounting for any other parts in the system, so a high-end system with a single graphics card can easily drink 500 W of juice under full load.

If you plan to upgrade, you absolutely must make sure your PSU is capable of supplying the power needed, and ideally more, so there's some overhead. Generally speaking, PSUs are most efficient when supplying around half their peak load, so if you build a system from scratch and expect it to draw 400-500 W, getting a 1 kW PSU is not unreasonable. If you want to build a system with two high-end GPUs, a 1 kW PSU should be the minimum. Some configurations allow up to four GPUs in a single system, and in that situation you will need to think long and hard about how to power it all.

Remember, though: not all graphics cards draw 300 W, so do a little research on power draw before deciding what PSU to buy.
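The sizing rule above (aim for the expected full-load draw to sit near half the PSU's rated capacity) boils down to a quick back-of-the-envelope calculation. Here is a minimal sketch; the individual component wattages are illustrative assumptions taken from the figures quoted in the post, not measurements of any particular system:

```python
# Rough PSU sizing sketch based on the rules of thumb above.
# Component wattages are illustrative assumptions; check your actual
# card's and CPU's specifications for real figures.

def recommended_psu_watts(component_watts, target_load_fraction=0.5):
    """Size a PSU so the expected full-load draw lands near the
    fraction of rated capacity where efficiency tends to peak
    (roughly half, per the rule of thumb in the post)."""
    total_draw = sum(component_watts.values())
    return total_draw / target_load_fraction

system = {
    "gpu": 300,   # high-end single GPU near the ATX 300 W limit
    "cpu": 150,   # high-end CPU with a little overclocking
    "rest": 50,   # board, drives, fans (assumed)
}

print(sum(system.values()))           # 500  (W, full-load draw)
print(recommended_psu_watts(system))  # 1000.0  -> a ~1 kW PSU
```

This reproduces the post's conclusion: a single high-end GPU system drawing about 500 W lines up with a 1 kW supply.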
#3 | Dec 2010 | Monticello
I'm with Rodrigo: we need a sticky on GPU computing with pointers to the latest codes (not just mfaktc, but also CUDAlucas), and a quick hardware guide with the basic pointers. I'd vote for something that works like the "Server Problems" thread, where the mods trim it every time the server gets itself fixed. Thirty-plus pages on mfaktc, with the latest code 5-6 pages back, is a significant deterrent. Rodrigo doesn't need the discussion of implementation internals...
NVIDIA addresses the power limits by using multiple power connectors, which is probably a good idea anyway. ATI has always been a lot less friendly to programmers than NVIDIA: NVIDIA supports CUDA, and now OpenCL as well, while ATI only supports OpenCL. Finally, 500 W (ten light bulbs' worth) running all the time is definitely going to show up on your electric bill.

The current mfaktc does use the CPU quite a bit for sieving; you will need a whole core for this if you have a top-of-the-line GPU. Someone on one of the threads has dissed this approach and said he's writing a GPU-only version, but we haven't seen it.

Last fiddled with by Christenson on 2011-05-26 at 00:24
#4 | May 2003 | Down not across
Quote:
Paul
#5 | Oct 2007 | Manchester, UK
Quote:
I believe the first applications of GPGPU were with FireStream for the Folding@Home project way back in 2006, but I could well be mistaken here.
#6 | Dec 2010 | Monticello
Yup, but FireStream is significantly less popular than CUDA, which has wider support. For example, it's not clear whether I can run FireStream on *any* ATI/AMD Radeon GPU, and ATI drivers for *ix are slow to arrive compared to Windows.
#7 | Jun 2010 | Pennsylvania
Quote:
Thanks for the entire rundown; it was VERY informative (like the snip above) and cleared up a number of issues. I can use this as a guide as I look into what cards my PCs could take.

Rodrigo
#8 | Jun 2010 | Pennsylvania
Quote:
Thanks for the additional details. I'd read somewhere around here that one should look to get an NVIDIA card for this sort of thing. I'll keep an eye on this thread and check out the other ones... starting with the short threads.

Rodrigo
#9 | Dec 2009 | Peine, Germany
This is my own tiny one-sheet personal reference: the thread's essence.
#10 | Jun 2010 | Pennsylvania
Quote:
One quick question (I haven't gotten deep enough into the threads yet): under mfaktc, where it refers to "Compute Capability Needed" being 1.0, 1.1, or 1.2 -- what is that?

Rodrigo
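For context (this is background, not from the thread itself): "compute capability" is NVIDIA's version number for a CUDA GPU's hardware feature set, and a requirement like "1.1" means the program uses instructions that older chips lack. A tiny lookup sketch, with feature notes summarised from my understanding of NVIDIA's early CUDA documentation, so treat them as approximate:

```python
# "Compute capability" is NVIDIA's hardware feature version for CUDA GPUs.
# The notes below summarise the early (G80/GT200-era) versions; consult
# NVIDIA's CUDA Programming Guide for the authoritative table.

COMPUTE_CAPABILITY = {
    "1.0": "base CUDA feature set (e.g. GeForce 8800 GTX)",
    "1.1": "adds atomic operations on global memory",
    "1.2": "adds shared-memory atomics and warp vote instructions",
    "1.3": "adds double-precision floating point",
}

def supports(card_cc, required_cc):
    """A card can run code needing capability X if its own capability
    is >= X: capabilities are ordered (major, minor) version pairs."""
    as_pair = lambda cc: tuple(int(part) for part in cc.split("."))
    return as_pair(card_cc) >= as_pair(required_cc)

print(supports("1.3", "1.1"))  # True: a CC 1.3 card runs CC 1.1 code
print(supports("1.0", "1.2"))  # False: a CC 1.0 card cannot
```

So a card whose compute capability meets or exceeds the number mfaktc lists should be able to run that build.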
#11 | Dec 2009 | Peine, Germany
Quote:
Similar Threads
| Thread | Thread Starter | Forum | Replies | Last Post |
| Putting more than one computer in a pine box | fivemack | Hardware | 9 | 2012-08-22 19:47 |
| Putting prime 95 on a large number of machines | moo | Software | 10 | 2004-12-15 13:25 |