mersenneforum.org RTX3080

 2020-08-27, 21:02 #67 ewmayer ∂2ω=0 Sep 2002 República de California 9,791 Posts

@jwnutter: See Xyzzy's post #54 and my reply at the bottom of #55 ... the first of those has detailed pics of both ends of the adapter. The PCGamer article is a little misleading here, though: "A traditional 8-pin PCIe connector (or 6+2-pin) features three +12V pins and five ground pins, and is able to deliver 150 watts per connector to a graphics card."

That describes an 8-pin *output*-side connector. Such a cable for a regular top-end GPU has a single plug into one of the PSU's 12V outputs and splits it across two 8-pin output plugs, *each* of which is rated up to 150W, for a total of up to 300W, plus the 75W a full-width 16x PCIe slot can supply.

The new nVidia connector basically operates in the reverse-splitting direction, combining the output of two 8-pin/12V PSU output plugs into a single massive-wattage-capable 12-pin plug into the GPU, roughly equivalent to four conventional 8-pin PCIe GPU-side plugs, thus ~600W. That would justify the min-850W-PSU recommendation, since the PSU needs some spare headroom to drive the other system components, and not all 850W-rated PSUs allow themselves to be run right at that rated limit 24/7 the way the Corsair in my 4xR7 build does, though it needs some innovative power-cabling to get there. :)
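The arithmetic in that post can be sketched as a quick power-budget check. The 75W slot and 150W-per-8-pin figures are the usual PCIe ratings; the ~600W 12-pin figure is the post's rough estimate, not an official spec:

```python
# Rough PCIe power-budget arithmetic, following the post above.
SLOT_W = 75        # full-width x16 PCIe slot
EIGHT_PIN_W = 150  # per 8-pin (6+2) PCIe connector

# Conventional top-end card: two 8-pin plugs plus slot power
conventional = 2 * EIGHT_PIN_W + SLOT_W
print(conventional)  # 375

# New 12-pin plug: roughly four 8-pin equivalents, per the post's estimate
twelve_pin = 4 * EIGHT_PIN_W
print(twelve_pin)  # 600

# Minimum 850W PSU recommendation leaves headroom for the rest of the system
headroom = 850 - twelve_pin
print(headroom)  # 250
```

That ~250W of slack for CPU, drives, and fans is why an 850W unit is the floor rather than a comfortable margin.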
2020-09-01, 00:40   #68
jwnutter

"Joe"
Oct 2019
United States

3·5² Posts

Quote:
 Originally Posted by ewmayer @jwnutter: See Xyzzy's post #54 and my reply at bottom of #55 ... the first of those has detailed pics of both ends of the adapter. ...

Wow, that's all very interesting and neat stuff.

Last fiddled with by jwnutter on 2020-09-01 at 00:40

 2020-09-01, 19:42 #69 Viliam Furik Jul 2018 Martin, Slovakia 5·37 Posts

RTX 3000 is here! The RTX 3070 is about 50% above the RTX 2080 Ti, the 3080 is about +130%, and the 3090 is an absolute beast with about a +170% increase in FP32 computational power. You can find more here.
 2020-09-01, 20:24 #70 Xyzzy "Mike" Aug 2002 1111000010011₂ Posts
 2020-09-02, 10:38 #71 M344587487 "Composite as Heck" Oct 2017 1010001001₂ Posts

The 3080 looks like the price/performance winner; the 3090 is awful value, but nvidia are going to sell them in droves if Samsung can pump them out quickly enough. nvidia pre-release pricing tends to be an underestimate: historically the FE is up to $100 more than the stated MSRP, and initial 3rd-party boards tend to be high end, so it could be a while before competitively priced boards are available. There are no benchmarks, and the "1.7x more better" quotes are meaningless in a vacuum; we're going to have to wait at least 2 weeks for a proper indicator of performance. The way the pricing shakes out, it looks like nvidia expect competition up to the 3080 and no competition for the 3090. That's the tippy-top-super-optimistic best anyone not deluded can expect from AMD, so nvidia have positioned themselves well, assuming the announcements are not too unrealistic. The layout of the 3090 though, dear god, they really don't want to stump up the extra pennies for an HBM2e solution, do they?

 2020-09-02, 12:34 #72 nomead "Sam Laur" Dec 2018 Turku, Finland 2³·41 Posts

It would be nice to have confirmation from a proper architectural white paper at some point, but it seems that there's double the FP32 and FP64 performance per SM (so the ratio stays at 1:32) but no change to INT32, as compared to Turing. So it's a nice leap in PRP/LL performance, relatively speaking, but still quite far away from something properly FP64-capable like the very expensive datacenter version A100, or a Radeon VII. A much milder increase in TF (mfaktc) performance is expected. Of course, the second-hand market for Turing cards may be interesting in a while...

 2020-09-02, 15:02 #73 jwnutter "Joe" Oct 2019 United States 75₁₀ Posts

Quote:
 Originally Posted by Xyzzy https://arstechnica.com/gaming/2020/...99-on-sept-17/

Thanks for sharing.
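As a back-of-envelope illustration of what that 1:32 FP64:FP32 ratio means, the sketch below derives theoretical FP64 throughput from FP32 throughput and the ratio. The throughput figures are illustrative assumptions, not confirmed specs:

```python
def fp64_tflops(fp32_tflops, fp64_ratio):
    """Theoretical FP64 throughput given FP32 throughput and the FP64:FP32 ratio."""
    return fp32_tflops * fp64_ratio

# Illustrative numbers only:
ampere_consumer = fp64_tflops(30.0, 1 / 32)  # ~0.94 TFLOPS FP64 at 1:32
radeon_vii = fp64_tflops(13.8, 1 / 4)        # Radeon VII runs FP64 at 1:4
print(ampere_consumer, radeon_vii)
```

Even a large jump in headline FP32 barely moves FP64 at 1:32, which is why LL/PRP work gains while properly FP64-capable cards stay in a different class.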
I thought this was interesting:

Quote:
 A new series of Nvidia-sponsored monitors will include mouse connections directly into the monitor for the sake of lag reduction.

Sure does follow this logic:

Quote:
 ...blur the line for what kinds of computing is handled on the GPU side...

Last fiddled with by jwnutter on 2020-09-02 at 15:03

 2020-09-02, 16:14 #74 xx005fs "Eric" Jan 2018 USA 11001111₂ Posts

Quote:
 Originally Posted by M344587487 The 3080 looks like the price/performance winner, the 3090 is awful value but nvidia are going to sell them in droves if Samsung can pump them out quickly enough. ...
The 3080 looks promising for trial factoring, but $700 is still really expensive for a purely 80-class GPU, even compared to the last couple of generations. Sadly, it doesn't seem to have a decent FP64 ratio but is still 1:32 (no official info on nvidia's website, I don't think, but that's according to TechPowerUp's database). It also seems that nvidia are marketing this generation's CUDA cores as double that per SM compared to last gen. Hopefully the FP32 performance nvidia is talking about isn't their reduced-precision tensor FP32 thing. I've been seeing a lot of reviewers calling the 3090 "a proper Titan-class GPU" and I think that's just an utterly incorrect statement. Not only does it not use the GA100 die, it also doesn't have the HPC compute features like a 1:2 FP64 ratio. I would much rather see a proper Titan V replacement than a Titan RTX replacement. But I guess it is pretty cheap for a Titan compared to last generation... I'm certainly more excited for RDNA2, since traditionally AMD have been offering a 1:16 FP64 ratio, and hopefully cheaper too.

Last fiddled with by xx005fs on 2020-09-02 at 16:17

 2020-09-02, 16:19 #75 mackerel Feb 2016 UK 110000101₂ Posts

Quote:
 Originally Posted by M344587487 The 3080 looks like the price/performance winner, the 3090 is awful value but nvidia are going to sell them in droves if Samsung can pump them out quickly enough.

How much does VRAM contribute to the final price? There is quite a bit of it on there. Maybe not enough to explain all of the price, but it will contribute.

Quote:
 Originally Posted by M344587487 nvidia pre-release pricing tends to be an underestimate, historically FE is up to $100 more than the stated MSRP and initial 3rd party boards tend to be high end. It could be a while before competitively priced boards are available.
The pricing for the nvidia cards is on their web site. I'm happy the UK pricing is better than expected; there's a joke that $1=£1, but we're thankfully getting a better rate than that here. Given all the AIB info floating around, I hope they won't be too far off. I need to decide what to go for: I'd like to try nvidia's new cooler, but I'd also like two HDMI ports, which means I'd have to select an alternate design.

Quote:
 Originally Posted by M344587487 The layout of the 3090 though dear god, they really don't want to stump up the extra pennies for an HBM2e solution do they.

A random thought: have nvidia ever used HBM on a consumer card? I don't think they have. Going further, have they used HBM at all?

Quote:
 Originally Posted by nomead Of course, the second-hand market for Turing cards may be interesting in a while...

It is already happening. A sales group I use is alight with sellers and buyers arguing intensely in the wake of the news. Turing pricing has dropped, but not far enough if Ampere delivers. Fortunately, anything I have to sell is old enough that... well, it is hard to drop below practically worthless.

Last fiddled with by mackerel on 2020-09-02 at 16:20

 2020-09-03, 00:22 #76 M344587487 "Composite as Heck" Oct 2017 11×59 Posts

Quote:
 Originally Posted by xx005fs ... I'm certainly more excited for RDNA2 since traditionally AMD had been offering a 1:16 FP64 ratio, and hopefully cheaper too.

RDNA2 is well-anticipated but I don't expect the earth. They need to roll >18 on a D20 to crit, and that's just to get back in the game.
Quote:
 Originally Posted by mackerel How much does VRAM contribute to the final price? There is quite a bit of it on there. Maybe not enough to explain all the price, but it will contribute.

AFAIK supposedly ~$10 per 1GB chip; you'd expect slightly more for GDDR6X, but given nvidia's buying power and the falling price of RAM, who knows. The cost of VRAM only really factors in when you get to the lower-end cards.
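Taking that ~$10-per-GB figure at face value (it is only a hearsay estimate, as noted above), the VRAM bill for the announced boards works out roughly as:

```python
PRICE_PER_GB = 10  # USD; the post's hearsay estimate per 1 GB chip

# VRAM capacities from the Ampere announcement
boards = {"3070": 8, "3080": 10, "3090": 24}
vram_cost = {name: gb * PRICE_PER_GB for name, gb in boards.items()}
print(vram_cost)  # {'3070': 80, '3080': 100, '3090': 240}
```

Even at this crude estimate, the 3090's 24 GB of GDDR6X plausibly accounts for a visible chunk of its price gap over the 3080.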

Quote:
 Originally Posted by mackerel A random thought, have nvidia ever used HBM on a consumer card? I don't think they have. Going further, have they used HBM at all?
They have used HBM on pro and enterprise cards; the Titan V, if I remember correctly, was the prosumer fleece-me edition at a mere $3000.

 2020-09-03, 00:31 #77 xx005fs "Eric" Jan 2018 USA 3²×23 Posts

Quote:
 Originally Posted by M344587487 RDNA2 is well-anticipated but I don't expect the earth. They need to roll >18 on a D20 to crit and that's just to get back in the game.

It does make me wonder if AMD actually have something up their sleeve, because it would otherwise be totally unreasonable for Nvidia to price the newer generation so low. I think they made the 3070 $500 either because the consoles are trying to hit the $500-800 mark with 2070-2080 performance, or because AMD has a card with 2080 Ti-like performance planned for a $400-500 release price. The pricing also means that it is almost 100% certain that AMD won't have anything coming close to the 3090's performance.