#738
"Mike"
Aug 2002
2×23×179 Posts
We were able to order the GPU cards from Newegg before the Friday shipping cutoff, so we should receive them Monday.

Just in case, we also ordered two 650W power supplies, but we placed that order later in the day, so it will not ship until Monday and we will not see it until Tuesday or Wednesday. (We live very close to one of Newegg's warehouses, which is in Memphis.) That said, some measurements of our systems indicate we might be okay. We guess the real question will be how much load there is, because (in our experience) the harder you run a PSU the shorter it lives. We feel comfortable with up to ¾ load. (Obviously, each rail and tap is evaluated independently.)
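The ¾-load rule of thumb is easy to sanity-check with a little arithmetic. A throwaway sketch, using the 500W and 650W ratings mentioned in this thread and the 75% comfort ceiling stated above (total output only; a real check would evaluate each rail independently, as noted):

```python
# Quick sanity check of the "no more than 3/4 load" rule of thumb.
# Ratings are total PSU output; per-rail limits are ignored here.
def max_comfortable_draw(psu_rating_w, ceiling=0.75):
    """Highest sustained draw (watts) we'd accept from a PSU of this rating."""
    return psu_rating_w * ceiling

for rating in (500, 650):
    print(f"{rating}W PSU: comfortable up to {max_comfortable_draw(rating):.0f}W sustained")
```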
#739
"Mike"
Aug 2002
2×23×179 Posts
We received both GTX 570 cards yesterday and both 650W power supplies today. We have not installed the larger power supplies yet. Running one instance of the self test uses around 215W and 25-26% processor resources with an i5 2500. We did modify the ini file to raise NumStreams from 3 to 10. We have no clue what that means, but with 3 NumStreams the processor was nearly idle.

We spent all night and today trying to get the development system running under Linux. We followed every HOWTO and tried every distribution recommended by Nvidia. We were unsuccessful, but we will soldier on. To our embarrassment, we ended up breaking out a 64-bit copy of Windows Vista. The install was painless. We thought we had sworn off Windows forever.

Compiling mfaktc in Linux went perfectly. We think the problem we are experiencing is that we cannot "talk" to the GPU. We have /dev populated, all of the environment variables set, and all of the libraries in the right places, but we got very weird errors. The GPU shows up with 'lspci' and the Nvidia module shows up with 'lsmod'. We were trying to do all of this with a non-graphical install using the onboard (Intel) graphics that we think (?) are built into the processor. We tried one graphical install of Ubuntu 10.04, but when we shut down GDM the monitor complained about the feed and went into a coma.

The cards themselves are very impressive. They weigh a ton and are so big it took us over an hour to install each of them. There is literally less than a millimeter of clearance on the far end, and our cases are not small ones. The angle we used to install them looks impossible. So far they are much quieter than our case fans. They came packaged in a very classy black display box, almost like a jewelry box. They take up three card slots and dwarf our mATX motherboard.

Anyways, it is good to know they work, but it is distressing that we are having so many issues with the Linux install. In an ideal world we would like to install with Debian, but most likely we will try an older version (11.1) of OpenSUSE since that is what Oliver is using. We are fairly familiar with SUSE because we used to use the for-pay SUSE Enterprise Desktop deal.

Attached is a copy of a self test. Perhaps it is useful. The Windows install is a clean install with no modifications other than the Nvidia stuff. Thanks!
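For reference, the setting changed above lives in mfaktc.ini and looks something like the excerpt below. The option name is from the mfaktc ini file; the explanation of what it does is our reading of the docs, so treat it as an assumption:

```ini
# mfaktc.ini (excerpt). NumStreams sets how many CUDA streams mfaktc uses
# to queue candidate blocks for the GPU. More streams keep the GPU busier
# but need more CPU time for sieving, which would explain the jump from a
# nearly idle CPU at 3 to ~25% of an i5 2500 at 10.
NumStreams=10
```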
#740
"James Heinrich"
May 2004
ex-Northern Ontario
6535₈ Posts
#741
Dec 2010
Monticello
5·359 Posts
Xyzzy,
I don't like this trend... time to get on Nvidia's support website and ask some questions.
#742
"Mike"
Aug 2002
8234₁₀ Posts
#743
Bamboozled!
"𒉺𒌌𒇷𒆷𒀭"
May 2003
Down not across
2²·5·7²·11 Posts
FWIW, my system runs Fedora and CUDA without issues, but I did have to remove the nouveau driver first.

Paul
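For anyone hitting the same wall, the usual nouveau removal looks roughly like the template below. This is a sketch only: the file name is our own choice, dracut is the Fedora-side initramfs tool, and both the paths and the tooling vary by distro and era, so adapt accordingly:

```shell
# Prevent the nouveau kernel module from loading (Fedora-style layout;
# exact paths and tools are distro-dependent -- treat this as a template).
echo "blacklist nouveau" | sudo tee /etc/modprobe.d/blacklist-nouveau.conf

# Rebuild the initramfs so nouveau is not pulled in at boot, then reboot
# before running the Nvidia installer.
sudo dracut --force
```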
#744
Tribal Bullet
Oct 2004
3·1,181 Posts
Considering the huge array of Linux systems their driver has to run on, Nvidia's installer does a rather remarkable job. It actually compiles their driver stub on the fly against your current kernel headers, then installs the result and tries to configure your X server (which I didn't think anyone had any hope of doing in an automated way). That said, I tried installing the thing on a late-model SUSE (11.4?) system, and although Linux itself did see and run the card, I couldn't get CUDA programs to run. This was after removing a ton of packages (including nouveau, which really didn't want to leave) and reinstalling a ton of others, over several days.

There are cathedral-and-bazaar-related lessons in here, I'm sure :)
#745
Dec 2010
Monticello
5×359 Posts
And I hate the way things leave experts clueless, with needless complexity -- but that wasn't the thought here.

Last fiddled with by Christenson on 2011-04-20 at 16:38
#746
"Dave"
Sep 2005
UK
23·347 Posts
#747
"Mike"
Aug 2002
8234₁₀ Posts
Fish1 taught us how to multiquote!
We are burning through exponents at a crazy pace. PrimeNet keeps giving us work like "foo,68,69", with an occasional "bar,69,70". How do we get work that takes more time?

Something we did not count on is that these GPU boxes have rendered our four other quad boxes totally obsolete for TF work. We are not even sure they are worth the electricity to run, except maybe in the winter. Our poor math skills lead us to conclude that, overall, our two GPU boxes are three times faster than our four quad boxes. Or something like that.

So far the boxes are running perfectly. CPU temperatures hover around 68-71°C and the GPU is stable at 65°C. (The ambient temperature is 78°F.) GPU usage is steady at 98% and CPU usage is pegged at 100%. We suspect the system would not be able to feed a GTX 580. We are not even sure if it can fully utilize a GTX 570. Running three instances might be faster, but we have not tested it much yet. The one time we tried it the GPU usage meter did not peg out. Individual instances were faster, but we do not know if the fourth instance offsets that.

The new 650W power supplies are working great. We are certain that the old-but-new 500W power supplies were adequate, but we hate returning stuff.

We have a 140mm case fan on the top of the case, a 120mm case fan on the back of the case, a (lame) CPU fan, two surprisingly large fans on the GPU card and, finally, a 120mm fan on the power supply. (The GPU vents out back through three card slots.) The boxes are very loud but they exhaust only lukewarm air. We kind of like the white noise. We think we have the boxes set up like a thermal chimney: the PSU is at the bottom and all the heat should just rise, assisted by the fans. (Although the GPU card is like a wall because it is so big.) The total electrical consumption of both boxes running 100% is ~640W.
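For reference, those assignment strings are worktodo entries. The field layout below is our understanding of the mfaktc worktodo format, so treat it as an assumption; "foo" and the exponent are placeholders, not a real assignment. Raising the target bit level is the straightforward way to make a single assignment run longer:

```text
# Factor=<assignment key>,<exponent>,<bit_min>,<bit_max>
# "foo,68,69" above is key "foo" with the range 2^68..2^69. Each extra
# bit roughly doubles the work, so e.g. a 68-72 range lasts much longer.
Factor=foo,<exponent>,68,72
```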
#748
A Sunny Moo
Aug 2007
USA (GMT-5)
3×2,083 Posts
*The 72 is chosen assuming your assignment is from the standard non-LMH TF range, i.e. somewhere in the 80M vicinity. For exponents of that size, Prime95 will aim to factor them to 71 bits plus P-1 before considering them ready for LLing. By taking it all the way up to 72 with your GPU, you ensure that only the P-1 and LL remain, and you throw in an extra bit of TF for good measure. (Many around here have suggested that it would be prudent to do an extra bit of TF with GPUs because they are so much faster than CPUs.)
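The "extra bit" reasoning can be made concrete with a little arithmetic. This sketch rests on the standard assumption that the number of trial-factoring candidates between 2^(b−1) and 2^b doubles with each bit level b (sieving aside); the 68-72 range mirrors the assignments discussed above:

```python
# Each trial-factoring bit level covers candidates between 2^(b-1) and 2^b,
# so the work roughly doubles from one level to the next: the 71->72 pass
# alone costs about as much as the 68->71 passes combined.
def relative_tf_cost(bit_level, base_level=69):
    """Cost of the (bit_level-1)->bit_level TF pass relative to the base pass."""
    return 2 ** (bit_level - base_level)

passes_68_to_71 = sum(relative_tf_cost(b) for b in range(69, 72))  # 1 + 2 + 4
print("passes 68->71 combined:", passes_68_to_71)
print("pass 71->72 alone:", relative_tf_cost(72))
```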