We were able to order the GPU cards from Newegg before the Friday shipping cutoff, so we should receive them Monday.
Just in case, we ordered 2 × [URL="http://www.antec.com/Believe_it/product.php?id=MjQ0NQ"]650W[/URL] power supplies as well, but we did so later in the day and they will not be shipped until Monday, so we will not see them until Tuesday or Wednesday. (We live very close to one of Newegg's warehouses, which is in Memphis.) That said, some [URL="http://www.mersenneforum.org/showpost.php?p=258785&postcount=20"]measurements[/URL] of our systems indicate we might be okay. We guess the real question will be how much load there is, because (in our experience) the harder you run a PSU the shorter it lives. We feel comfortable with up to ¾ load. (Obviously, each rail and tap is evaluated independently.) |
We received both GTX 570 cards yesterday and both 650W power supplies today.
We have not installed the larger power supplies yet. Running one instance of the self test uses around 215W and 25-26% processor resources with an i5 2500. We did modify the ini file to raise NumStreams from 3 to 10, though we have no clue what that setting means. With 3 NumStreams the processor was nearly idle.
We spent all night and today trying to get the development system running under Linux. We followed every HOWTO and tried every distribution recommended by Nvidia. We were unsuccessful, but we will soldier on. To our embarrassment, we ended up breaking out a 64-bit copy of Windows Vista. The install was painless. We thought we had sworn off Windows forever.
Compiling mfaktc in Linux went perfectly. We think the problem we are experiencing is that we cannot "talk" to the GPU. We have /dev populated, all of the environment variables set, and all of the libraries in the right places, but we got very weird errors. The GPU shows up with 'lspci' and the Nvidia module shows up with 'lsmod'. We were trying to do all of this with a non-graphical install using the onboard (Intel) graphics that we think (?) are built into the processor. We tried one graphical install of Ubuntu 10.04, but when we shut down GDM the monitor complained about the feed and went into a coma.
The cards themselves are very impressive. They weigh a ton and are so big it took us over an hour to install each one. There is literally less than a millimeter of clearance on the far end, and our cases are not small ones. The angle we used to install them looks impossible. So far they are much quieter than our case fans. They came packaged in a very classy black display box, almost like a jewelry box. They take up 3 card slots and dwarf our mATX motherboard. Anyways, it is good to know they work, but it is distressing that we are having so many issues with the Linux install.
In an ideal world we would like to install Debian, but most likely we will try an older version (11.1) of openSUSE, since that is what Oliver is using. We are fairly familiar with SUSE because we used to use the for-pay SUSE Linux Enterprise Desktop. Attached is a copy of a self test; perhaps it is useful. The Windows install is a clean install with no modifications other than the Nvidia stuff. Thanks! |
[QUOTE=Xyzzy;259039]Running one instance of the self test uses around 215W and 25-26% processor resources with an i5 2500[/QUOTE]For production use you'll need to run at least 3 instances of mfaktc to make full use of your nice new fast GPU.
|
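[Editor's sketch] One common way to run multiple mfaktc instances: the program keeps its state (mfaktc.ini, worktodo file, checkpoint files) in the directory it runs from, so each instance usually gets its own copy of the working directory. The paths below are made up for illustration:

```shell
# Create one working directory per instance; mfaktc reads its ini/worktodo
# and writes checkpoints relative to the directory it is started in.
BASE="${TMPDIR:-/tmp}/mfaktc-demo"
for i in 1 2 3; do
    mkdir -p "$BASE/instance$i"
    # cp mfaktc mfaktc.ini "$BASE/instance$i/"   # copy binary + ini into each
done
# Then launch each instance from its own directory, e.g. in separate shells:
#   (cd "$BASE/instance1" && ./mfaktc)
```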
Xyzzy,
I don't like this trend...time to get on Nvidia's support website and ask some questions.... |
[QUOTE]I don't like this trend...[/QUOTE]The trend of us not having a clue what we are doing, or the trend of us posting too much in this thread? Or both?
:cmd: [quote]time to get on Nvidia's support website and ask some questions...[/quote]There is so much to wade through there. But we are trying! :max: |
[QUOTE=Xyzzy;259039]We spent all night and today trying to get the development system running under Linux. [...] We think the problem we are experiencing is that we cannot "talk" to the GPU. We have /dev populated and all of the environment variables set and all of the libraries in the right places and stuff but we got very weird errors. The GPU shows up with 'lspci' and the Nvidia module shows up with 'lsmod'.[/QUOTE]Did you remove the nouveau driver before installing the nvidia-supplied one? FWIW, my system runs Fedora and CUDA without issues, but I did have to remove the nouveau driver first. Paul |
Considering the huge array of Linux systems that their driver has to run on, Nvidia's installer does a rather remarkable job. It actually compiles their driver stub on the fly using your current kernel headers, then installs the result and tries to configure your X server (which I didn't think you had any hope of doing in an automated way). That said, I tried installing the thing on a late-model SuSE (11.4?) system, and although Linux itself did see and run the card, I couldn't get CUDA programs to run. This after removing a ton of packages (including nouveau, which really didn't want to leave) and reinstalling a ton of others, over several days.
There are cathedral-and-bazaar-related lessons in here I'm sure :) |
[QUOTE=Xyzzy;259063]The trend of us not having a clue what we are doing, or the trend of us posting too much in this thread? Or both?
:cmd: There is so much to wade through there. But we are trying! :max:[/QUOTE] xyzzy: keep posting...the trend I was referring to was the complicated, confuzzling install...that, if I get similar hardware, I will also have to cope with. I think I'm taking the plunge this Friday...and purchasing a new system, with GTX570 , Sandy Bridge i7, and oodles of memory. And I hate the way things leave experts clueless, with needless complexity -- but that wasn't the thought here. |
[QUOTE=Xyzzy;259039]Running one instance of the self test uses around 215W and 25-26% processor resources with an i5 2500. We did modify the ini file to set it from 3 to 10 NumStreams. We have no clue what that means. With 3 NumStreams the processor was nearly idle.[/QUOTE]
The self-test doesn't use the CPU much because it "cheats" by only looking in the class where it knows the factor is to be found. You will probably need to reduce NumStreams for production work. [QUOTE=Xyzzy;259039]We were trying to do all of this with a non-graphical install using the onboard (Intel) graphics that we think (?) are built into the processor. We tried one graphical install of Ubuntu 10.04 but when we shut down GDM the monitor complained about the feed and went into a coma.[/QUOTE] I have had no problems installing CUDA and compiling and running mfaktc on Ubuntu 10.04 or 10.10. I just tried shutting down GDM and I could still compile and run mfaktc. In fact, when I was producing the 32-bit Windows versions of mfaktc I was compiling it on a PC that didn't even have an NVIDIA graphics card installed. Thus your non-graphical setup ought to work OK. |
Fish1 taught us how to multiquote!
[QUOTE=amphoria;259115]The self-test doesn't use the CPU much because it "cheats" by only looking in the class where it knows the factor is to be found. You will probably need to reduce NumStreams for production work.[/QUOTE]After some testing with real factoring and watching the GPU load with GPU-Z, we have settled for the default "3". [QUOTE=xilman;259070]Did you remove the nouveau driver before installing the nvidia-supplied one?[/QUOTE]We were able to get rid of it by blacklisting it. That solves that issue! We still are a long way from a workable system, though. But, we will persevere! [QUOTE=James Heinrich;259053]For production use you'll need to run at least 3 instances of mfaktc to make full use of your nice new fast GPU.[/QUOTE]We are running four instances right now. The GUI is slightly laggy but we do not use it. (Hopefully we will soon have the Linux dealio up and we can forget about keyboards, mice and monitors!) We are burning through exponents at a crazy pace. PrimeNet keeps giving us work like "foo,68,69" with an occasional "bar,69,70". How do we get work that takes more time? Something we did not count on is that these GPU boxes have rendered our four other quads boxes totally obsolete for TF work. We are not even sure they are worth the electricity to run, except maybe in the winter. Our poor math skills lead us to conclude that (overall) our two GPU boxes are three times faster than our four quad boxes. Or something like that. So far the boxes are running perfectly. CPU temperatures hover around 68-71°C and the GPU is stable at 65°C. (The ambient temperature is 78°F.) GPU usage is steady at 98% and CPU usage is pegged at 100%. We suspect the system would not be able to feed a GTX 580. We are not even sure if it can fully utilize a GTX 570. Running three instances might be faster but we have not tested it much yet. The one time we tried it the GPU usage meter did not peg out. 
Individual instances were faster but we do not know if the fourth instance offsets that. The new 650W power supplies are working great. We are certain that the old-but-new 500W power supplies were adequate but we hate returning stuff. We have a 140mm case fan on the top of the case, a 120mm case fan on the back of the case, a (lame) CPU fan, two surprisingly large fans on the GPU card and, finally, a 120mm fan on the power supply. (The GPU vents out back through three card slots.) The boxes are very loud but they exhaust only lukewarm air. We kind of like the white noise. We think we have the boxes set up like a thermal chimney. The PSU is at the bottom and all the heat should just rise, assisted by the fans. (Although the GPU card is like a wall because it is so big.) The total electrical consumption of both boxes running 100% is ~640W. :mally: |
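[Editor's sketch] The nouveau blacklist step mentioned above can be done roughly like this on a Debian/Ubuntu-style system (the file name is arbitrary, and on SUSE the initramfs rebuild command differs, e.g. mkinitrd):

```shell
# Tell the kernel never to auto-load nouveau, and disable its modesetting.
cat <<'EOF' | sudo tee /etc/modprobe.d/blacklist-nouveau.conf
blacklist nouveau
options nouveau modeset=0
EOF
# Rebuild the initramfs so the blacklist applies at early boot, then reboot.
sudo update-initramfs -u
# After the reboot, verify which driver is bound to the card:
lsmod | grep nouveau   # should print nothing
lsmod | grep nvidia    # the proprietary module should be listed
```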
[QUOTE=Xyzzy;259157]We are burning through exponents at a crazy pace. PrimeNet keeps giving us work like "foo,68,69" with an occasional "bar,69,70". How do we get work that takes more time?[/QUOTE]
Try changing "foo,68,69" to something like "foo,68,72"*. (It doesn't care if you modify the stop point as long as it's doing at least one bit level; PrimeNet will simply be pleasantly surprised to find that you have done more work than it asked for when you submit the results.) [size=1]*The 72 is chosen assuming your assignment is from the standard non-LMH TF range, i.e. somewhere in the 80M vicinity. For exponents of that size, Prime95 will aim to factor them to 71 bits+P-1 before considering them ready for LLing. By taking it all the way up to 72 with your GPU, you ensure that there's just the P-1 and LL left to be done, and you throw in an extra bit of TF for good measure. (Many have suggested around here that it would be prudent to do an extra bit of TF with GPUs because they are so much faster than CPUs.)[/size] :max: |
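[Editor's sketch] The worktodo edit described above can be scripted. This assumes old-style "Factor=exponent,start_bits,stop_bits" lines; the exponent below is made up for illustration:

```shell
# Write a sample TF assignment (exponent is a made-up example).
printf 'Factor=79300123,68,69\n' > worktodo.txt
# Bump every ...,68,69 assignment so it stops at 72 bits instead of 69.
sed -i 's/^\(Factor=[0-9][0-9]*,68\),69$/\1,72/' worktodo.txt
cat worktodo.txt   # -> Factor=79300123,68,72
```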