mersenneforum.org > Great Internet Mersenne Prime Search > Hardware > GPU Computing
Old 2020-07-10, 19:09   #287
chalsall
If I May
 
 
"Chris Halsall"
Sep 2002
Barbados

5²×367 Posts

Quote:
Originally Posted by ewmayer View Post
...but CA sales-gouge adds 8.5%, or $123.
I envy you... BB VAT is 15%. And our power is still 99% generated by burning long-dead plants and animals, so we pay a large amount for our electrons...
Old 2020-07-10, 19:25   #288
ewmayer
2ω=0
 
 
Sep 2002
República de California

11,423 Posts

Quote:
Originally Posted by chalsall View Post
I envy you... BB VAT is 15%. And our power is still 99% generated by burning long-dead plants and animals, so we pay a large amount for our electrons...
CA, despite its green-energy-pioneer chest-thumping, has some of the priciest power in the US. In fact, one would be tempted to say "green costs more, that's why it's pricey", except that the state-granted private utility quasi-monopoly corp, Pacific Gas & Electric, is also notorious for being one of the most corrupt in the US.

PG&E charges US$0.25/kWh baseline (monthly kWh allowance based on household type), $0.30 for each kWh above baseline. How do those compare to the ones you Barbadians enjoy?
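For comparison's sake, the tiered-vs-flat difference is easy to put in numbers. A minimal sketch: only the $0.25/$0.30 and $0.26 per-kWh rates come from the thread; the 300 kWh/month baseline allowance and 900 kWh usage are made-up illustration values.

```python
# Tiered (PG&E-style) vs. flat-rate monthly cost. The 300 kWh baseline
# allowance and 900 kWh usage are hypothetical illustration values; the
# per-kWh rates are the ones quoted in the thread.
def tiered_cost(kwh, baseline=300.0, base_rate=0.25, over_rate=0.30):
    """Bill the baseline allowance at base_rate, everything above it at over_rate."""
    over_kwh = max(0.0, kwh - baseline)
    return min(kwh, baseline) * base_rate + over_kwh * over_rate

usage = 900  # kWh/month; a few R7s running 24/7 get here quickly
print(f"tiered:   ${tiered_cost(usage):.2f}")  # 300*0.25 + 600*0.30
print(f"flat 26c: ${usage * 0.26:.2f}")
```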
Old 2020-07-10, 19:53   #289
Uncwilly
6809 > 6502
 
 
"""""""""""""""""""
Aug 2003

8,423 Posts

Quote:
Originally Posted by chalsall View Post
So, I'm currently paying ~$0.26 USD for a kWh of electricity.
The thread that the above quote comes from might interest you.
Old 2020-07-13, 00:17   #290
ewmayer
2ω=0
 
 
Sep 2002
República de California

11,423 Posts

Mod note: I split off the Antarctic-station-green-energy subdiscussion into its own thread in Science & Technology.

Last fiddled with by ewmayer on 2020-07-13 at 00:18
Old 2020-07-13, 22:55   #291
ewmayer
2ω=0
 
 
Sep 2002
República de California

11,423 Posts

Just finished a quick test of the 2 used R7s I overpaid for (relative to my previous quartet) in order to complete my 2-system/5-R7 buildout ... used the open-air testbench system with one more powered pcie 1x riser to plug each of the new-used acquisitions into. The system recognized the 4th GPU in each case, so I fired up my usual gpuowl jobs on the older 3 just to see if it was even remotely feasible watt-wise (850W gold PSU; 3 jobs use 700-750W at my usual underclock settings) to run a 4th ... interestingly, the system is not stable even with the 4th GPU simply idling and the total watts right in the normal range ... it runs for a few minutes then crashes, repeatably.

No big loss - the plan was always to use 1 of the new pair as the 2nd GPU in my haswell-atx-case system and resell the 2nd or keep it around as a spare. I just find the "3 is fine, but not 4, irrespective of watts" aspect interesting.
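The watt-budget reasoning above can be put in a back-of-envelope check. Note that watts-at-wall include PSU conversion loss, so the DC load on the PSU is a bit lower than the wall reading; the ~90% efficiency and the 20 W idle figure for the 4th card below are assumptions for illustration, not measurements:

```python
# Headroom check for adding a 4th (idling) R7 to an 850 W gold PSU whose
# 3-card load reads 700-750 W at the wall. At-wall watts include conversion
# loss; assume ~90% efficiency (gold-level, a guess for this load point).
psu_rating_w = 850
at_wall_high_w = 750           # top of the measured 3-card range
efficiency = 0.90              # assumed, not measured
idle_fourth_card_w = 20        # assumed idle DC draw of the extra R7

dc_load_w = at_wall_high_w * efficiency + idle_fourth_card_w
print(f"estimated DC load ~{dc_load_w:.0f} W of {psu_rating_w} W, "
      f"headroom ~{psu_rating_w - dc_load_w:.0f} W")
```

Which is consistent with the observation above: the crashes do not look like a raw-wattage problem.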
Old 2020-07-15, 01:55   #292
kriesel
 
 
"TF79LL86GIMPS96gpu17"
Mar 2017
US midwest

19·223 Posts

Quote:
Originally Posted by ewmayer View Post
interestingly, the system is not stable even with the 4th GPU simply idling and the total watts right in the normal range ... runs for a few minutes then crashes, repeatably.
Hmm, that's odd. I've seen systems age and drop from n to n-1 or n-2 stable GPUs over time (years), but instability at the same total power within the same week is new to me. There are also GPUs that don't get along with each other and must be separated into different systems.
Old 2020-07-15, 03:09   #293
paulunderwood
 
 
Sep 2002
Database er0rr

47·71 Posts

Quote:
Originally Posted by ewmayer View Post
Just finished a quick test of the 2 used R7s I overpaid for (relative to my previous quartet) in order to complete my 2-system/5-R7 buildout ... used the open-air testbench system with one more powered pcie 1x riser to plug each of the new-used acquisitions into. The system recognized the 4th GPU in each case, so I fired up my usual gpuowl jobs on the older 3 just to see if it was even remotely feasible watt-wise (850W gold PSU; 3 jobs use 700-750W at my usual underclock settings) to run a 4th ... interestingly, the system is not stable even with the 4th GPU simply idling and the total watts right in the normal range ... it runs for a few minutes then crashes, repeatably.

No big loss - the plan was always to use 1 of the new pair as the 2nd GPU in my haswell-atx-case system and resell the 2nd or keep it around as a spare. I just find the "3 is fine, but not 4, irrespective of watts" aspect interesting.
I would remove one or two of the original 3, try the latest ones for stability, and then get a Corsair HX1200.

Mind, it is dimensionally a big PSU, but it should have the connectors to run 5 R7s. And it is platinum rated.

Last fiddled with by paulunderwood on 2020-07-15 at 03:19
Old 2020-07-17, 20:54   #294
ewmayer
2ω=0
 
 
Sep 2002
República de California

11,423 Posts

Quote:
Originally Posted by paulunderwood View Post
I would remove one or two of the original 3, try the latest ones for stability, and then get a Corsair HX1200.

Mind, it is dimensionally a big PSU, but it should have the connectors to run 5 R7s. And it is platinum rated.
Nah, I don't want to mess with the testframe build ATM; the mounting hardware there is such that the cards were tricky to get securely screwed down. Since the main thing is to determine whether it might be some issue with the GPU vs the OS, I'm gonna do a "try before you buy" install of the card in the Haswell system, which is definitely getting 1 more card. The "try" part refers to using the pcie-1x riser adapter and just setting the card atop the case - it can be flipped on its side to get good airflow - to power it up and see if that system runs OK with the new card added. But first...

Cleaning the fan blades of an R7

In prep for doing the above, I noticed that my eBay seller had been a bit lazy in not de-dusting the concave fan-blade undersides (as in, the ones pointing toward the finned heatsink in the R7, out of plain sight) of the 2 cards. Used my trusty EZ-refill high-volume dry-pressurized water-fire-extinguisher to first blast out what dust I could, but even that barely touches the dust on the concave fan-blade undersides.

Next was the following method, which takes ~10 mins to properly do all 3 fans once one gets the hang of it, and works brilliantly. Set the card on your lap or tabletop, fan array pointing up. Notice that the fan blades look black when viewed from most angles, but are in fact translucent smoke-colored plastic: the black color in my case was in no small part due to the accumulated dust on the underside. Fill a plastic bottlecap with window cleaner and dip Q-tip ends in it to wet the cotton swabbing, then use 1 hand to hold the fan stationary and the other to swab the underside of each blade. Once you do 1 blade this way, if you carefully rotate the fan to position the just-cleaned blade above one of the shiny metallic center cross-beams of the adjacent finned heatsink, you should be able to see through it just like a little smoke-colored window, revealing any crud you missed. Rotate the fan one blade at a time to place each blade in this best viewing angle, and rotate the wet Q-tip a bit to bring the still-clean parts to bear on each blade in turn. I found I could do 6 blades per Q-tip, 3 blades per cotton end. Easy-peasy, and the fans look like new afterward.

Question - are the fans in an R7 such that one can simply pull them off the fan-motor shaft, clean them and pop them back into place? If so, that would of course greatly ease the cleaning, but I didn't want to try tugging at one without being sure that it is removable in such a manner.

And we are up and running! With 2 R7s running at sclk = 3 and mclk = 1150, the Haswell system pulls 500W at-wall, ~150W more than with 1 R7. Will do a proper install of card #2 into a full-width pcie slot with back-of-case mounting-bracket attachment this weekend. So it would appear that the system instability of the 3-GPU test-frame system upon adding this same R7 as a 4th card is not due to the card - since said instability manifested even at the same total watts as for 3 cards, I suspect something related to OS or mobo PCI-subsystem support. For now I can use the remaining R7 (the 2nd of the pair I just purchased on eBay) as a swap-in, allowing me to pull one of the older cards every month or so and give it a good de-dusting, including the above fan-blade manicure.

Last fiddled with by ewmayer on 2020-07-17 at 20:57
Old 2020-07-17, 22:44   #295
preda
 
 
"Mihai Preda"
Apr 2015

482₁₆ Posts

Quote:
Originally Posted by ewmayer View Post
Question - are the fans in an R7 such that one can simply pull them off the fan-motor shaft, clean them and pop them back into place? If so, that would of course greatly ease the cleaning, but I didn't want to try tugging at one without being sure that it is removable in such a manner.
Yes, the blades together with the central pin can be pulled out by simply applying a huge amount of pulling force "straight out" on them. The force needed is so large that the blades risk breaking; quite tricky to do IMO. They can be put back by reversing the pressure, which is much easier.
Old 2020-07-24, 21:17   #296
ewmayer
2ω=0
 
 
Sep 2002
República de California

11,423 Posts

Quote:
Originally Posted by ewmayer View Post
And we are up and running! With 2 R7s running at sclk = 3 and mclk = 1150, the Haswell system pulls 500W at-wall, ~150W more than with 1 R7. Will do a proper install of card #2 into a full-width pcie slot with back-of-case mounting-bracket attachment this weekend. So it would appear that the system instability of the 3-GPU test-frame system upon adding this same R7 as a 4th card is not due to the card - since said instability manifested even at the same total watts as for 3 cards, I suspect something related to OS or mobo PCI-subsystem support. For now I can use the remaining R7 (the 2nd of the pair I just purchased on eBay) as a swap-in, allowing me to pull one of the older cards every month or so and give it a good de-dusting, including the above fan-blade manicure.
Proper install of card #2 in the Haswell case-system is done and running nicely. Will post pics later, but card 2 is in the 2nd (and final for this mobo) 16x pcie slot, which puts it a few inches above the PSU (right above the PSU's vent-out fan), with a ~2" gap between that and its bottom 3-fan intake array, and a ~1.5" gap between its top plate and the intake fan array of card 1. Interestingly, card 2 getting intake air prewarmed by the PSU is not an issue, quite the opposite: these R7s run sufficiently hot that intake air at ~50C is still good for cooling.

Next issue - as noted previously, I bought 2 cards on eBay, originally with the hope of putting one in the Haswell and one sitting in another homemade custom mounting bracket next to the 3-GPU Beast I recently built. But the latter system, for reasons unknown, will not run stably with a 4th card (not a PSU-wattage issue; the crash-after-a-few-minutes manifested repeatably, and with cards clocked so as to draw no more total watts than in 3-card mode). So I was gonna use the 2nd just-bought card for resale or as a spare, but then an evil thought occurred to me: assume the Beast not being able to take 4 cards is some kind of PCI-slot support issue. Well, the Haswell has no more full-width PCI slots, but I first tried card #2 there using the 1x slot and a powered riser card on the GPU. Now that said card is properly housed in a full-width slot, the 1x slot is again free.

So: plug the intended 4th-GPU-for-the-Beast into the Haswell's 1x slot, but plug the GPU and riser-card power cables into the Beast's much-more-capable PSU - so as to not overstress the Haswell's old PSU, and anyway that system has no more 2x8-pin power cables; I'm already splitting the sole such cable to power both GPUs housed in that system. Problem is, the card is not recognized on boot - the Haswell still shows just the 2 already-installed cards in /sys/class/drm, and the Beast (which I would not expect to see the card, since it has no PCI connection to it) shows just the 3 previous entries. Both systems are running fine on their recognized cards, and the Frankenstein-cable-hook-up new card is all lit up, but with no place to go.
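The /sys/class/drm check described above can be scripted. A small sketch: the directory layout is standard Linux DRM, but the filtering regex and function name are mine:

```python
# Count the GPUs the kernel enumerated, as the post does by eye in
# /sys/class/drm. Top-level GPU entries are named card0, card1, ...;
# connector entries like card0-DP-1 and render nodes are filtered out.
import os
import re

def visible_gpus(drm_path="/sys/class/drm"):
    if not os.path.isdir(drm_path):
        return []  # no DRM sysfs here (non-Linux box, or no GPUs at all)
    return sorted(e for e in os.listdir(drm_path)
                  if re.fullmatch(r"card\d+", e))

if __name__ == "__main__":
    gpus = visible_gpus()
    print(f"{len(gpus)} GPU(s) visible: {gpus}")
```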

Last fiddled with by ewmayer on 2020-07-24 at 21:19
Old 2020-08-02, 20:42   #297
ewmayer
2ω=0
 
 
Sep 2002
República de California

11,423 Posts

Traded some PMs with "the Radeon whisperer", our own PhilF, over the last few days re. my inability to run a 4th R7 on my desktop open-frame build, the one powered by a Corsair RT850 power supply. I noted that even throttling back the clock settings on the 4 cards so total watts-at-wall was about the same as with the current stable 3-card setup didn't work; the system was either unstable (one particular R7 tried as external-mounted card #4) or wouldn't even see the 4th card (a different card, which works just fine as part of a 3-card setup). He suggests the issue may have to do with overloading the PSU's 12V power rail. Here's a redacted version of our exchange:
PhilF:
Quote:
You can't assume you are staying within [the PSU]'s limits. The problem is that for each card you add you are adding X watts, but the vast majority of that wattage is being drawn from the +12V rail(s). It, or they (some power supplies split the load between 2 separate +12V internal supplies), has its own wattage limit (as do all the voltage rails in the supply). Not only can they not add up to more than the supply's total wattage rating, but each rail also can't have its own rating exceeded (the limits of each individual voltage rail should be listed on the supply). By the time you are up to 4 R7s (!) on the same supply, you are really, really taxing the +12V supply rail. This could be the cause of all of the problems to begin with.
EWM:
Quote:
Would using power-splitter cables help mitigate the 12V-rail-overload issue? I have 2 of the 3 GPUs in the desktop build sharing such a pair of splitter cables, thus using just a single 8-pin plug of the PSU. The 3rd card has its own dedicated power plug to the PSU, as does the 4th card I've been trying to add on. In theory cards 3 and 4 could also share such a splitter, thus needing just a single 8-pin port of the PSU, same 8-pin load as the 3-card setup. In fact I've ordered another splitter for this purpose, but it's in covid-delayed transit from Asia.

What I do have on hand is a dual-6-pin-to-single-8-pin Y-adapter which came with the R7 - that would allow card 4 to use two 6-pin Peripheral&SATA plugs at the PSU end, instead of an 8-pin plug. Would that also alleviate the 12V-overload issue, if that is the issue?
PhilF:
Quote:
Actually splitters, if anything, are bad.

First, you want to use all of the connectors the supply provides, which ensures the load is being shared by both +12V rails (if it has 2 separate internal supplies). Also, keep in mind that each connector the power goes through is another potential failure point. In fact, you should inspect each connector, from the power supply to splitters to cards, to make sure none of them has any discolored areas. Even just a small darkened area means that pin has been running too hot and is causing problems.

In operation you should be able to grab those connectors, and if you feel any hot spots then you've located a problem. I sometimes go so far as to use a very small jeweler's screwdriver to slightly bend the female part of all the pins inside the connectors to make sure they are tightly grabbing their male counterparts.

The fewer connectors the power has to go through to get to its destination the better.

Now that I think about it, using the Y adapter that utilizes the 2 6-pin plugs from the power supply is probably a good idea. It is possible the first 12V output rail powers those 6-pin connectors, while the second 12V output rail powers the 8-pin connectors. If so, using that Y adapter is a must.
So I was getting ready to plug one of the short 2x6-pin to [6+2]-pin Y-adapters that shipped with the R7 into the PSU, when I realized something I had overlooked - what I need is a Y-adapter with 2 *male* 6-pin plugs (PSU end) and a *female* [6+2]-pin plug (into which to plug one of the 2 male ends of the 8-pin power ribbon cable), but in fact the adapter has "the wrong sex" at each end. What I need is a pair of adapter cables with 6-pin male plugs at each end, but I've been unable to find such an item, e.g. at Amazon. However, power extension cables with [6+2]-pin male plugs at each end are legion - could I simply plug the 6-pin solid block of one end into a 6-pin Peripheral&SATA power-out of the PSU, leaving the 2-pin bit dangling, and do similarly at the other end to plug into one of the 6-pin female plugs of the Y-adapter?

In the meantime, the aforementioned 2nd pair of [6+2]-pin power splitter cables arrived and I tried hooking cards 3 and 4 up to those, i.e. 4 cards using just two of the PSU's 8-pin 12V power-outs - no joy, the 4th card is not seen on boot. But it was easy to do and worth a shot.
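PhilF's +12V-rail point above lends itself to a quick sanity check. A sketch with made-up numbers: the rail rating and per-card draws below are illustrative guesses, not label values off the RT850:

```python
# Per-rail budgeting: the binding limit is the +12 V rail rating printed on
# the PSU label, not the total wattage. All numbers below are illustrative
# assumptions, not measurements or the RT850's actual ratings.
rail_12v_limit_w = 840                      # assumed +12 V rail rating
card_draw_w = {"R7 #1": 180, "R7 #2": 180,  # assumed underclocked DC draws
               "R7 #3": 180, "R7 #4": 180}
other_12v_w = 130                           # CPU, fans, risers (also +12 V)

total_12v_w = sum(card_draw_w.values()) + other_12v_w
ok = total_12v_w <= rail_12v_limit_w
print(f"+12 V load {total_12v_w} W vs rail limit {rail_12v_limit_w} W: "
      f"{'within rating' if ok else 'over the rail rating'}")
```

With these guessed numbers, the 4-card load overshoots the rail rating even while staying under the PSU's total wattage, which is exactly the failure mode PhilF describes.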