mersenneforum.org  

2014-07-31, 17:36   #1
Rodrigo (Jun 2010, Pennsylvania)

Run a GPU 24/7 ?

Quick question:

Do you run MFAKTx on your GPU all the time, or do you let it take a break every so often? I'm wondering about the "health effects" on the card of running it constantly at full tilt.

Anything there to be concerned about, or not really? What do you do?

Thanks in advance.

Rodrigo
2014-07-31, 18:12   #2
Mark Rose (Jan 2013)

You can eventually wear out the fans after some time (years).

I run GPUs 24/7, except when I'm using the computer in question.

Giving them a break is not necessary. Keeping a steady temperature is probably better for them.
2014-07-31, 18:12   #3
cardogab7341 (Mar 2013, Dallas, TX)

I have had 2 GTX-460s running 24/7 for the last 1.5 years. Temps are around 70C for one and 80C for the other. So far, all is good. If the temps start to rise a few degrees I'll shut the system down and blow all of the dust out of the heat pipes in the cards.
2014-07-31, 22:17   #4
TheMawn (May 2013, East. Always East.)

GPUs tend to be much hardier pieces of equipment than their CPU counterparts. Not to take away from CPUs though, because both are certainly well built these days.

I wouldn't worry about 24/7 operation, but a good idea would be to replace the fans after a couple of years, or if performance seems to be degrading for whatever reason. Keep them dust-free, etc. Make sure the temperatures are reasonable. They can take 100C, though I would certainly stay away from that for any length of time. My watercooled GPU is in the mid-30s and my air-cooled one is in the mid-60s. I'm not too concerned about either.
2014-07-31, 23:13   #5
VictordeHolland (Aug 2011, the Netherlands)

I've had an ASUS HD7950 develop a malfunctioning fan after 1.5 years of 24/7 running, so I sent it in for RMA. ASUS was kind enough to swap it for a new card :D. Fans are going to be your biggest concern, next to cooling and your electricity bill :P.

Temperature and Fans

I try to keep my GPUs <75C, but that might be conservative. Temperatures on my HD7950 and 280X fluctuate between 67C (at night) and 73C (on warm days). I've got a 140mm fan blowing cold air onto the GPU and 2x 120mm fans pushing warm air out of the top of one case. The other case has an open side panel. The fans on the GPUs are running at 1700RPM, which for me is the sweet spot between temps and noise. My GPUs/CPUs are running 24/7 most of the time, except when I'm racing (F1 2013) or playing Assassin's Creed. A constant temperature is better than hot/cold cycles, which can cause tiny cracks in the solder.
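
A minimal way to keep an eye on this over time is a small polling script. The sketch below assumes an NVIDIA card with nvidia-smi on the PATH (the AMD cards above would need a different tool), and the query fields and one-minute interval are just example choices:

Code:
# Rough temperature/fan/power logger (assumes nvidia-smi is installed).
import datetime
import subprocess
import time

FIELDS = "temperature.gpu,fan.speed,power.draw"

while True:
    out = subprocess.run(
        ["nvidia-smi", "--query-gpu=" + FIELDS, "--format=csv,noheader"],
        capture_output=True, text=True, check=True,
    ).stdout.strip()
    print(datetime.datetime.now().isoformat(timespec="seconds"), out)
    time.sleep(60)  # one sample per minute is plenty for spotting dust buildup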

Cooling VRMs
Cooling the VRMs (Voltage Regulator Modules) is also important to some extent, since on some cards they can get much hotter than the GPU core (90C is not exceptional). They are usually rated for a maximum somewhere between 100C and 130C, but I would advise keeping them at <90C just to be on the safe side. With most cards you can check the VRM temps with GPU-Z (Sensors tab).

For instance, the MSI GTX780 TwinFrozr 3 has a high VRM temp issue:
http://hardforum.com/showthread.php?t=1807147

Power phases
I would pick a card with a decent number of power phases. The reference GTX570 PCB only had a 6-phase VRM design (4 GPU + 2 memory), which is terrible for a card pulling 200W! Custom PCBs usually have more phases and higher quality phase designs. For instance, an ASUS GTX570 DC2 has 8 power phases (6+2). Two extra power phases may not look like much of a difference, but 200W provided by 4 phases works out to roughly 50VA per phase (rough calculation), compared to about 33VA per phase with 6 phases. Compare it to a car cruising on the highway versus one running at full throttle all the time.
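
To put numbers on that (a back-of-the-envelope sketch only: the ~1.1V core voltage is an assumed figure, and a real VRM neither shares the load perfectly evenly nor feeds the whole 200W through the GPU phases):

Code:
# Rough per-phase load for a 200W card at an assumed ~1.1V core voltage.
BOARD_POWER_W = 200
CORE_VOLTAGE_V = 1.1  # assumption for illustration only

for phases in (4, 6, 8):
    watts = BOARD_POWER_W / phases
    amps = watts / CORE_VOLTAGE_V
    print(f"{phases} phases: ~{watts:.0f} W/phase, ~{amps:.0f} A/phase")

# 4 phases: ~50 W/phase, ~45 A/phase
# 6 phases: ~33 W/phase, ~30 A/phase
# 8 phases: ~25 W/phase, ~23 A/phase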

Some references:
http://www.overclock.net/t/929152/ha...-buy-some-570s

http://forums.evga.com/ASUS-GTX-570-...e-m960486.aspx (scroll down to post #6)

Quality PSU
I've had a crappy 800W PSU (don't remember the brand) which failed after 1.5 years of providing juice to a 2500K @ 4.0GHz and a single GTX480. Together they probably didn't even draw 400W, but that constant power draw was too much for that B-quality PSU. Now I only use 80+ Gold rated Seasonic and Cooler Master PSUs.

Overclocking
I would advise against overclocking GPUs: it usually results in more power draw and higher temps, which stresses the GPU core and VRMs, and it requires higher fan speeds to compensate, which means more fan wear. Better to buy more cards and stick them in multiple cases ;).
2014-08-01, 03:17   #6
retina (Jun 2006, My evil lair)

Quote:
Originally Posted by VictordeHolland
Overclocking
I would advise against overclocking GPUs: it usually results in more power draw and higher temps, ...
... and more computation/memory errors.
2014-08-01, 03:53   #7
LaurV (Jun 2011, Thailand)

I have two of my 580's (the oldest of the "battery") running 24/7 since November 2011. Yes, there is no mistake, I do mean 24/7.

I water-cooled them (somewhere in mid-2012; the discussion is somewhere here on the forum).

The only "bad" things that can happen are that the fans' "bearings" can get damaged over time, or the plastic blades can get bent due to temperature. But... BUT! Almost all cards (especially the expensive ones) have fans which are brushless and bearingless (magnetic suspension), and if you clear out the dust clogs from time to time, there will be no problem with the fans, even if you stay on air cooling.

In fact, water cooling is more "sensitive" to 24/7 running; the first thing that fails is the pump (which is only guaranteed for something like 5000 hours by the manufacturer). So if you switch to liquid, buy a good pump, and always have a spare. A well-designed water cooling loop will be able to run your computer without the pump for a while, for "normal work" (not P95, nor CUDA/games, but Word/Excel/Outlook/office stuff will still work; the water keeps circulating at a lower pace due to thermal convection).

Apart from the noise, the fact that you spend more money on electricity, the extra heat you generate, etc., there is no downside to running 24/7.

On the contrary, there are lots of positive sides. One is thermal expansion. Repeatedly starting and stopping your computer, especially in a cold room/climate, causes the components to heat up and cool down in cycles, causing mechanical expansion/contraction. It is like repeatedly bending a wire until it breaks when you don't have pliers. The little silicon balls and rods are exposed to thermal strain/stress every time your computer heats up and cools down, causing damage. Not to mention the time you have to wait for them to boot up. In our offices, for example, the computers are always ON too. Of course, they don't run P95, but sit in some "standby" state. But you get the point...
2014-08-01, 04:52   #8
TheMawn (May 2013, East. Always East.)

Regarding the thermal expansion and contraction cycles:

The GPU never stops, whereas the CPU does while running Prime95. As far as I know, there isn't a way to write the save files in a staggered manner, so each worker stops and waits for the slow, clunky hard drive to write out a file of several megabytes per worker, which is enough time for the CPU to cool if it is under a strong enough cooler.

For example, right now my GPU is at 38C (warm day), MAX 40 / MIN 32, and this is over the course of well over a week, during which I do occasionally stop the GPU to actually do things with it.

My second GPU is at 65, MAX 69 / MIN 61 (EDIT: I want to stress that this is only +/- 4C over the course of an entire week).

On the other hand, my CPU is at 66, MAX 70 / MIN 34.


A good stability test is to stress the living sh*t out of the CPU just like Prime95 does. A good durability test would be to stop and start such stress tests multiple times per minute to get the temperatures to fluctuate from 30C to 100C repeatedly.
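
If anyone actually wants to run such a cycling test (at their own risk), a rough sketch would be to start and kill a CPU stressor in a loop. stress-ng is assumed here purely as an example; mprime or anything else you can start and terminate would do, and the on/off times are arbitrary:

Code:
# Crude thermal-cycling test: load the CPU, let it cool, repeat.
import subprocess
import time

CYCLES = 20
ON_SECONDS = 30   # long enough to get near the full-load temperature
OFF_SECONDS = 30  # long enough to drop back toward idle

for i in range(CYCLES):
    proc = subprocess.Popen(["stress-ng", "--cpu", "0"])  # --cpu 0 = all cores
    time.sleep(ON_SECONDS)
    proc.terminate()
    proc.wait()
    time.sleep(OFF_SECONDS)
    print(f"cycle {i + 1}/{CYCLES} done")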

Last fiddled with by TheMawn on 2014-08-01 at 04:53
2014-08-01, 17:34   #9
Rodrigo (Jun 2010, Pennsylvania)

Wow, this yielded way more knowledge than I'd ever expected! Thank you all very much -- hopefully I won't be the only one who benefits from reading this thread.

Experience has led me, too, to keep the cases open on two of my PCs (one containing a GT630 and the other a GT430). At some point I noticed that the throughput (as measured by MFAKTC's running display) had gone way down, so I opened the cases and dusted off the GPUs with compressed air.

But each time, after closing the cases back up and restarting MFAKTC, the GHz-days/day soon dropped back dramatically, so I figured the problem was that there wasn't enough airflow inside, and decided to leave the cases open. That helped, but performance didn't return to its former levels until I also placed a small table fan to blow onto the GPU in the open case.
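
One way to tell whether a slowdown like that is the card throttling itself for heat (a sketch only, assuming an NVIDIA card and a driver recent enough to report throttle reasons; older cards may only give clocks and temperature):

Code:
# One-shot check: is the card at full clock, and is the driver reporting throttling?
import subprocess

fields = "clocks.sm,clocks.max.sm,temperature.gpu,clocks_throttle_reasons.active"
out = subprocess.run(
    ["nvidia-smi", "--query-gpu=" + fields, "--format=csv,noheader"],
    capture_output=True, text=True, check=True,
).stdout.strip()
print(out)  # a current clock well below the max clock plus a nonzero throttle mask = throttling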

I'm bookmarking this thread.

Rodrigo
2014-08-02, 12:36   #10
Chuck (May 2011, Orange Park, FL)

Quote:
Originally Posted by Rodrigo
But each time, after closing the cases back up and restarting MFAKTC, the GHz-days/day soon dropped back dramatically, so I figured the problem was that there wasn't enough airflow inside, and decided to leave the cases open. That helped, but performance didn't return to its former levels until I also placed a small table fan to blow onto the GPU in the open case.
My case has been open and a table fan running for the past year.
2014-08-03, 11:03   #11
Robert_JD (Sep 2010, So Cal)

Quote:
Originally Posted by VictordeHolland
Quality PSU
I've had a crappy 800W PSU (don't remember the brand) which failed after 1.5 years of providing juice to a 2500K @ 4.0GHz and a single GTX480. Together they probably didn't even draw 400W, but that constant power draw was too much for that B-quality PSU. Now I only use 80+ Gold rated Seasonic and Cooler Master PSUs.
In my experience, this point cannot be overstated. It is absolutely PARAMOUNT to have an 80+ Gold rated PSU. I have been using both a Seasonic and a Cooler Master for over a year now, rated at 1250 and 1200 watts respectively. Any PSU that lacks a Gold rating and/or is rated at less than 800 watts is potentially courting disaster, in my opinion.

Recently, I had to RMA an Asus Titan that lasted just under a year. Abruptly replacing the PSU with an el cheapo 750 watt Corsair ended up burning out the entire card, and the PSU along with it. Luckily, the mobo/memory wasn't damaged.

Ever since I received the replacement card, I've been using another Seasonic 80+ Gold rated PSU and nothing else. Lesson LEARNED the hard way.