mersenneforum.org (https://www.mersenneforum.org/index.php)
-   GPU Computing (https://www.mersenneforum.org/forumdisplay.php?f=92)
-   -   Radeon VII @ newegg for 500 dollars US on 11-27 (https://www.mersenneforum.org/showthread.php?t=24979)

ewmayer 2020-04-25 02:33

[QUOTE=paulunderwood;543680]Please send us a piccy when you have mounted the graphics cards.[/QUOTE]
That's what I was hoping to have for you, but it's gonna need to wait for the RAM.

[QUOTE]Is it easy enough to slot in the DIMMs?[/QUOTE]
Should be easy enough, just standard dual-channel DIMM slots ... anyhow, heard back from the seller, seems there was a misunderstanding, auction listing said
[code]1. GA-AB350-Gaming 3 motherboard with original box
- AMD AM4 B350/rev.1.0
- 4 DDR4, 2-CH/PCI-Ex16/ATX
- USB 3.1/ GbE LAN / M.2 socket 3

2. AMD Ryzen 5 1500X processor with original box
- 3.5 GHz Base / 3.7 GHz Precision Boost
- I will include the stock cooler

3. MSI Core Frozr L 120mm TORX Fan Hydro-Dynamic Bearing CPU Cooler[/code]
That "4 DDR4" apparently meant that the mobo had 4 DDR4 *slots* ... seller said "If you look at the picture I have attached you will see no RAM attached to the mobo. What you believe is RAM is the side fan on the cooler. I can see how you might have thought that was RAM but it was because of the angle of the photograph." So what's the cheapest 2-DDR4-DIMM pair I can expect to get? Don't need more than 1 GB RAM, but I expect the smallest DDR4 DIMMs are probably higher-capacity than that.

Prime95 2020-04-25 03:55

Can you run on just one DIMM in your 2 systems until more memory arrives?

ewmayer 2020-04-25 06:27

[QUOTE=Prime95;543697]Can you run on just one DIMM in your 2 systems until more memory arrives?[/QUOTE]

Turns out even pulling RAM from the Haswell is not an option, since that uses DDR3, not DDR4. Just ordered [url=https://www.amazon.com/gp/product/B07LCP1JWS]this[/url], allegedly arriving Monday.

kriesel 2020-04-25 09:11

[QUOTE=ewmayer;543702]Turns out even pulling RAM from the Haswell is not an option, since that uses DDR3, not DDR4. Just ordered [URL="https://www.amazon.com/gp/product/B07LCP1JWS"]this[/URL], allegedly arriving Monday.[/QUOTE]There goes your budget. But just as "The bitterness of poor quality remains long after the sweetness of low price is forgotten." [URL]https://www.quotes.net/quote/64358[/URL], the sweetness of plenty of ram lingers long after the pain of more cost is forgotten.

I just went another round of trying to get a cheap odd brand 6-slot motherboard working, gave up in disgust, and ordered a replacement. The worthless eBay seller of the odd brand will never see another cent from me, after failing to deliver manual or drivers disc or useful response to missing items or apparently dead Mobo.

That was something to do while waiting for a replacement power supply for my 24-core box to show up Monday. (System found dead days ago. Luckily that system had a year warranty still in effect.)
Then I decided to tidy up my mini mining rig a bit and test there some extenders I had lying around. Somehow, near the end of that, it decided to become unbootable. Just now got it limping along, with 3 gpus and cpu. I'm starting to believe the low-dollar approach, resulting in 3 different gpu models from 5 different makers on one system, is not a great idea. Two of the gpus refuse to coexist: a GTX 1080 Ti and one of the GTX 1650s. And an RTX 2080 and an RTX 2080 Super insisted on different driver versions, so they had to be separated to different systems. Next time it will be same model, same brand, all as alike as practical. Homogeneity can make figuring out which gpu is misbehaving more of a challenge, but nominal interchangeability has its advantages.

A system can be made cheap, fast, reliable, soon. Choose one, maybe two if not too aggressive.

ewmayer 2020-04-25 18:46

Re. the waiting-for-DDR4-to-arrive delay, it's all good - got ahead of myself in the excitement of seeing the system coming together tomorrow. Gonna need probably another 2 work sessions to get all the cabling right and also nice and tidy - don't want to end up staring at some Frankensteinian rat's nest of cables.

Re. the GPUs - all the same make/model, should I mount them all while waiting for the RAM, or start with 1 and make sure everything boots up and the GPU is recognized by the OS?

kriesel 2020-04-25 21:41

[QUOTE=ewmayer;543796]Re. the waiting-for-DDR4-to-arrive delay, it's all good - got ahead of myself in the excitement of seeing the system coming together tomorrow. Gonna need probably another 2 work sessions to get all the cabling right and also nice and tidy - don't want to end up staring at some Frankensteinian rat's nest of cables.

Re. the GPUs - all the same make/model, should I mount them all while waiting for the RAM, or start with 1 and make sure everything boots up and the GPU is recognized by the OS?[/QUOTE]Access for ram install is usually better before there are gpus and cables looming over the board. Building gradually is how I go. If your motherboard has integrated video, start with a base system and get that working. Then fight the gpu driver battle with the first gpu installed. Load the gpu app, run it, and verify reliability. Then add a gpu and repeat, and see if anything breaks. I've seen it be fussy about how many PCIe extender pads are connected to one power cable. The gradual approach may also reveal the point at which your circuit breakers go, or which gpu from which package is not up to spec or not compatible somehow.
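From a Linux shell, the check-after-each-gpu-addition amounts to something like the following (the rocm-smi tool assumes AMD's ROCm stack is installed, which fits Radeon VIIs; adjust for your driver stack):
```shell
# After each gpu is added, confirm the PCIe bus sees it:
lspci | grep -i 'vga\|display'

# With ROCm installed, list gpus plus their temperatures and clocks:
rocm-smi

# Check the kernel bound the amdgpu driver without errors:
dmesg | grep -i amdgpu | tail -20
```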

ewmayer 2020-04-26 00:18

Update: CPU main power (2x12-pin ATX and 2x4-pin ATX_12V) and SSD power+SATA3 cables hooked up, with minimal dangling cableage thanks to strategic use of zip ties. The CPU power cables alas run right over the 4 DIMM slots, so the pair of DIMMs I ordered will need to be carefully shoehorned in. There is a roughly SSD-sized rectangular free area at one corner of the Alu. chassis, resulting from the on-its-side-mounted PSU being only 2/3 the width of the chassis, so I used a piece of double-sided sticky tape to attach the SSD to the frame there.

Also trial-mounted one of the Radeon VIIs in the PCIe1 slot. No bracket issues like the ones I had to hack my way around with the Radeon VII in my ATX-case Haswell system: here I simply shifted the mobo slightly sideways, allowing the tongue end of the bracket to slip neatly into the resulting small gap between the side of the mobo and the Alu. frame support on that side.

However, I strongly suspect I will need an adapter for the 3rd Radeon VII. The centerlines of the PCIe1 and PCIe2 slots are 2.5" apart, which will leave ~1" of space between the GPUs going into those - less than one would like, but the gap is for intake air, and the hot-air vents on the sides of the GPU will end up pointing vertically upward (long side) and sideways (short side) in this setup, so that 1" between the air intake of GPU1 and the top plate of GPU2 should be OK. But the centerlines of the PCIe2 and PCIe3 slots are a mere 1.5" apart, which will leave effectively no air-intake space there. If there were a way to flip the orientation of the PCIe3 slot, that would give me the needed 1" gap, since the PCIe connectors are offset on the GPU, but I fear that level of hackery is beyond me. Is there a PCIe adapter which accomplishes this pinout-reversal?

@Ken: yes, your start-sans-GPU-and-then-add-one-at-a-time idea makes sense.

kriesel 2020-04-26 07:40

[QUOTE=ewmayer;543827] Is there a PCIe adapter which accomplishes this pinout-reversal?

@Ken: yes, your start-sans-GPU-and-then-add-one-at-a-time idea makes sense.[/QUOTE]
Haven't seen such a reverser. A powered PCIe extender may be called for. I'm guessing you meant PCIe2 and PCIe3 there.

A practice assembly makes sense; I forgot to mention that.

Also, though you probably do it as a matter of course and will use linux, one or two means of remote access (firewalled properly) allowing full administration is a good part of base system prep. It's hard to do graphics driver install and troubleshoot when (a) the newly added graphics card is given video responsibilities by the BIOS, and some of them will disable the igp when a gpu is present, (b) the OS doesn't know what to do with the new hardware yet, so a physical monitor is dark, or the video rate set is making the monitor madly scroll unreadably, or whatever, and (c) sometimes there's a problem with the preferred remote access too, and when we need a backup remote access provision, we need it already installed and tested and running.

ewmayer 2020-04-26 20:21

[QUOTE=kriesel;543843]Haven't seen such a reverser. A powered PCIe extender may be called for. I'm guessing you meant PCIe2 and PCIe3 there.[/QUOTE]

Yes, the 1.5" spacing is between slots 2 and 3 - have edited my post to correct that.

Re. the extender, you mean [url=https://www.amazon.com/XRP-Express-Extender-Flexible-Extension/dp/B008BZBFTG]something like this[/url]? Why does it need to be powered? Wouldn't it simply transmit the power from the PCI slot to the target device without needing extra boosting?

While looking at various PCIe extenders, I also came across [url=https://www.amazon.com/GODSHARK-PCI-Express-Extension-Protector-Adapter/dp/B07RWRK2L6/]this[/url], the purpose of which is unclear to me - is it simply a way to raise the target device a little higher off the mobo, perhaps for internal-geometry reasons? Ah yes, the lead review says just that. Ooh, and now in a similar vein, [url=https://www.amazon.com/SNANSHI-PCI-Express-Riser-Degree-Adapter/dp/B01EUIG3AI/]such an adapter with a 90-degree angle bend[/url] - that might be just what I need for the PCIe3-slot GPU: it would leave the air-intake side of GPU2 unoccluded and that of GPU3 pointing downward, with ~2" of free air-intake space between the intake-fan array and the surface the rig sits on. The hot-air-out vent sides of GPU3 would point off to the side, away from the mobo.

kriesel 2020-04-26 21:47

[QUOTE=ewmayer;543883]
Re. the extender, you mean [URL="https://www.amazon.com/XRP-Express-Extender-Flexible-Extension/dp/B008BZBFTG"]something like this[/URL]? Why does it need to be powered? Wouldn't it simply transmit the power from the PCI slot to the target device without needing extra boosting?[/QUOTE]Ribbon cable is very high-gauge-number / low-cross-section wire. Miners and I typically use something like [URL]https://www.ebay.com/itm/USB-3-0-PCI-E-Riser-Mining-Card-VER-006C-16X-To-1X-Powered-Adapter-006C-6-Pack/192958473806[/URL], which gives a lot of independent control of position, orientation, and cooling access. It can seem a bit of a snakepit of cables, though. Long ribbon cables would make that worse. And why do all those power cables have to be black and hard to see? [URL]https://gpu0.com/mining/6-gpu-mining-rig-build/[/URL] You might think that the 1x PCIe interface would limit performance, but I regularly beat Heinrich's gpu benchmarks with standard clocks this way in TF. I don't recall on PRP/LL/P-1.
[QUOTE]While looking at various PCIe extenders, also came across [URL="https://www.amazon.com/GODSHARK-PCI-Express-Extension-Protector-Adapter/dp/B07RWRK2L6/"]this[/URL], the purpose of which is unclear to me - is it simply a way to raise the target device a little higher off the mobo, perhaps for internal-geometry reasons? Ah yes, the lead review says just that. Ooh, and now in a similar vein, [URL="https://www.amazon.com/SNANSHI-PCI-Express-Riser-Degree-Adapter/dp/B01EUIG3AI/"]such an adapter with a 90-degree angle bend[/URL] - that might be just what I need for PCIe3-slot GPU, it would leave the air-intake side of GPU2 unoccluded and that of the GPU3 pointing downward, with ~2"-worth of free air-intake space between the input-fan array and the surface the rig sits on. The hot-air-out vent sides of GPU3 would point off to the side, away from the mobo.[/QUOTE]Yes, sometimes motherboard components such as capacitors, or adjacent PCI slots in older systems, are too tall for "deep-chested" gpus like Radeon VIIs, interfering with full insertion. Gpu height can also be an issue. I've had to remove some components from HP Z600 tower workstation side covers to be able to close them on GTX10xx. Angled or short PCIe extenders may also be needed for hardware developers or serious repair shops so they can get access to actual signals with instrumentation.

ewmayer 2020-04-27 18:52

[QUOTE=kriesel;543843]Also, though you probably do it as a matter of course and will use linux, one or two means of remote access (firewalled properly) allowing full administration is a good part of base system prep. It's hard to do graphics driver install and troubleshoot when (a) the newly added graphics card is given video responsibilities by the BIOS, and some of them will disable the igp when a gpu is present, (b) the OS doesn't know what to do with the new hardware yet, so a physical monitor is dark, or the video rate set is making the monitor madly scroll unreadably, or whatever, and (c) sometimes there's a problem with the preferred remote access too, and when we need a backup remote access provision, we need it already installed and tested and running.[/QUOTE]

Am planning to have the system physically connected to monitor/keyboard/mouse for the initial Ubuntu install and apt package downloads, with no GPU installed. Was figuring on the same openssh-server install to allow remote access (with WiFi stick configured and said access enabled) for subsequent management, though I will leave the system hooked up to monitor/keyboard/mouse at least for the first GPU setup. Do I need anything besides that?
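I.e. the base remote-access prep would amount to something like the following, if I have it right (package and service names are the stock Ubuntu ones; the firewall step assumes ufw is in use):
```shell
# Install and enable the SSH server so the box is reachable headless:
sudo apt install openssh-server
sudo systemctl enable --now ssh   # start sshd now and on every boot

# If the ufw firewall is active, open the SSH port:
sudo ufw allow ssh

# Note the machine's LAN address for logging in remotely:
ip addr show
```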

A related question re. gpuOwl's foreground-run mode: say I'm remotely logged in and wish to start or check up on multiple gpuowl runs. How do I manage that? Is that what the Linux screen utility is useful for?
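I.e. would something like the following be the way to go? (Guessing at the gpu-selection flag here - need to check my gpuowl build's help output for the actual option.)
```shell
# Start each run in its own detached screen session, one per gpu:
screen -dmS gpu0 ./gpuowl -d 0
screen -dmS gpu1 ./gpuowl -d 1

screen -ls       # list the running sessions
screen -r gpu0   # attach to gpu0's console to check progress
# Detach again with Ctrl-a d, leaving the run going after logout.
```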

