GTX 1180 Mars Volta consumer card specs leaked
Found in the TechPowerUp database was this intriguing entry for the new Volta (or is it going to be called Turing?):
CUDA cores: 3584, base clock: 1405 MHz, boost: 1582 MHz, memory: 16GB of GDDR6, mem clock: 1500 MHz, effective: 12000 MHz, bandwidth: 384GB/s, power draw: 200 watts!!! Note the new memory type, GDDR6, and the modest power draw. No mention of FP64, but don't expect more than the usual 1/24 or 1/32 of FP32. Of course, this is all just rumor (but from a fairly reliable source), so take it FWIW.
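The leaked numbers can be sanity-checked with some quick arithmetic. A minimal sketch, assuming a 256-bit memory bus (the bus width is not in the leak; it's the value that makes the listed 12000 MHz effective rate and 384GB/s bandwidth consistent) and the usual 2 FLOPs per core per clock for fused multiply-add:

```python
def effective_rate_mhz(mem_clock_mhz, multiplier=8):
    """GDDR6's quoted data rate is 8x the listed memory clock."""
    return mem_clock_mhz * multiplier

def bandwidth_gbps(effective_mhz, bus_width_bits):
    """GB/s = transfers/s * bits per transfer / 8 bits per byte."""
    return effective_mhz * 1e6 * bus_width_bits / 8 / 1e9

def peak_tflops(cores, boost_mhz, ops_per_clock=2):
    """FMA counts as 2 FLOPs per core per clock."""
    return cores * ops_per_clock * boost_mhz * 1e6 / 1e12

print(effective_rate_mhz(1500))                # 12000, matches the leak
print(bandwidth_gbps(12000, 256))              # 384.0 GB/s, matches the leak
print(round(peak_tflops(3584, 1582), 2))       # 11.34 TFLOPS FP32
print(round(peak_tflops(3584, 1582) / 32, 2))  # 0.35 TFLOPS if FP64 is 1/32
```

So the specs are internally consistent for a 256-bit GDDR6 card, and FP64 would indeed be nothing to write home about at a 1/32 ratio.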
[QUOTE=tServo;487316]Found in the TechPowerUp database was this intriguing entry for the new Volta (or is it going to be called Turing?):
CUDA cores: 3584, base clock: 1405 MHz, boost: 1582 MHz, memory: 16GB of GDDR6, mem clock: 1500 MHz, effective: 12000 MHz, bandwidth: 384GB/s, power draw: 200 watts!!! Note the new memory type, GDDR6, and the modest power draw. No mention of FP64, but don't expect more than the usual 1/24 or 1/32 of FP32. Of course, this is all just rumor (but from a fairly reliable source), so take it FWIW.[/QUOTE] Other than the memory, the specs do not look a whole lot different than the 1080 Ti's.
I did a Google search on this. The rumor mill is very active at the moment. One article seems to hint at something in July; others say August or beyond.
There was supposed to be an nVidia presentation at the Hot Chips conference in mid-August (which will also be describing Cascade Lake and 'Fujitsu’s HPC processor for the Post-K computer') but it has been removed from the program at [url]https://www.hotchips.org/program/[/url] (it used to be in the 11:30 slot on day 1)
OK, I'm going to hold off buying myself a powerful Nvidia GPU until the new generation (11) is launched.
My personal conspiracy theory is this: Nvidia realized they have the market mostly to themselves (unfortunately), and they see that the 2-year-old GPUs sell for $700 and up, so why cannibalize themselves by pushing a new gen onto the market, given that there's a limit to how much more they could charge for one. So, in short, it's more profitable to keep selling the old series even if the new one is ready. Because, anyway, nobody else is in any rush to release anything (i.e. AMD).
There's speculation that either it was an error (they weren't going to talk about it at all) or the listing was published prematurely, even if it was only a title.
[url]https://www.anandtech.com/show/12847/nvidias-nextgen-mainstream-gpu-talk-briefly-listed-for-hot-chips[/url]
[QUOTE=preda;488951]OK, I'm going to hold off buying myself a powerful Nvidia GPU until the new generation (11) is launched.
My personal conspiracy theory is this: Nvidia realized they have the market mostly to themselves (unfortunately), and they see that the 2-year-old GPUs sell for $700 and up, so why cannibalize themselves by pushing a new gen onto the market, given that there's a limit to how much more they could charge for one. So, in short, it's more profitable to keep selling the old series even if the new one is ready. Because, anyway, nobody else is in any rush to release anything (i.e. AMD).[/QUOTE] Interesting to note that GPU price gouging has recently collapsed (well, eased), since cryptocurrencies are down 50% since January.
[QUOTE=preda;488951]OK, I'm going to hold off buying myself a powerful Nvidia GPU until the new generation (11) is launched.
My personal conspiracy theory is this: Nvidia realized they have the market mostly to themselves (unfortunately), and they see that the 2-year-old GPUs sell for $700 and up, so why cannibalize themselves by pushing a new gen onto the market, given that there's a limit to how much more they could charge for one. So, in short, it's more profitable to keep selling the old series even if the new one is ready. Because, anyway, nobody else is in any rush to release anything (i.e. AMD).[/QUOTE] AMD being unable to compete on the same level is a definite reason for the slowdown. [url=https://www.pcgamesn.com/amd-navi-monolithic-gpu-design?tw=PCGN1]This Navi news[/url] doesn't bode well for the near-future prospect of proper competition either. I think AMD needs to go multi-die for a chance to compete, and for consumer hardware to go multi-die, nvidia is going to have to go that route too. nvidia is in a position of power; they'll hold off on game-changing releases until AMD is on the verge of competing. That said, it looks like the 7nm GPUs in 2019 might be enough of a threat for nvidia to release an update this year.
[QUOTE=M344587487;489804]AMD being unable to compete on the same level is a definite reason for the slowdown. [URL="https://www.pcgamesn.com/amd-navi-monolithic-gpu-design?tw=PCGN1"]This Navi news[/URL] doesn't bode well for the near-future prospect of proper competition either. I think AMD needs to go multi-die for a chance to compete, and for consumer hardware to go multi-die, nvidia is going to have to go that route too. nvidia is in a position of power; they'll hold off on game-changing releases until AMD is on the verge of competing. That said, it looks like the 7nm GPUs in 2019 might be enough of a threat for nvidia to release an update this year.[/QUOTE]
I'm an AMD fan, because they're open (open source). But I'm disappointed by their execution over the last 2 years. They need to get their s*t together fast (like, a year ago already). Let's hope they start focusing and executing. Just an example: AMD's alternative to cuFFT used to be clFFT, an OpenCL implementation. AMD basically stopped all development on clFFT many years ago. Instead, the single AMD developer (!) who used to work on clFFT moved on to produce the ROCm FFT, rocFFT. The result is that AMD now has two sub-par FFT libraries. I can't make good use of either of them.
[QUOTE=preda;489810]I'm an AMD fan, because they're open (open source). But I'm disappointed by their execution over the last 2 years. They need to get their s*t together fast (like, a year ago already). Let's hope they start focusing and executing.
Just an example: AMD's alternative to cuFFT used to be clFFT, an OpenCL implementation. AMD basically stopped all development on clFFT many years ago. Instead, the single AMD developer (!) who used to work on clFFT moved on to produce the ROCm FFT, rocFFT. The result is that AMD now has two sub-par FFT libraries. I can't make good use of either of them.[/QUOTE] I agree that they've been lacking on software support for the longest time. Their drivers have been doing pretty well of late; hopefully that'll translate into better library support at some point, but who knows. Maybe they'll have the resources and the will to do it now that they seem to be doing well on some fronts.
Most of the rumors circulating now are focusing on late July, around the 30th, as the release date for the GTX 1180, or whatever it will be called.
The "late" release date is being blamed on 2 factors: (1) Nvidia has eschewed HBM2 memory for cost reasons and is going with GDDR6, a new type, which always means delays in ramping production. (2) They are stockpiling tons of the new boards so they can flood the distribution channel and satisfy any demand, so the prices don't go berserk like they did for the GTX 1080 Ti.

Nvidia has the best FFT implementation because they have put LOTS of work into it for a long time. It is NOT written in any HLL, or even in PTX; it is written in the lowest-level machine code for the architecture. There is code in there to detect which architecture it's running on and make appropriate adjustments, as well as for what resources the board has (# of cores, etc.). This is from some people who have painstakingly reverse-engineered the code in order to find out why it is so fast.
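The runtime architecture detection described above can be sketched in miniature. This is a hypothetical illustration of the dispatch pattern only (the kernel names and capability table are made up for the example; cuFFT's real internals are hand-tuned machine code, not Python):

```python
# Hypothetical table: compute capability -> hand-tuned kernel variant.
KERNELS = {
    (3, 5): "fft_kepler",
    (5, 2): "fft_maxwell",
    (6, 1): "fft_pascal",
    (7, 0): "fft_volta",
}

def pick_kernel(compute_capability):
    """Choose the newest kernel variant not newer than the device's
    compute capability, so a new GPU falls back to the latest known code."""
    candidates = [cc for cc in KERNELS if cc <= compute_capability]
    if not candidates:
        raise ValueError(f"unsupported architecture {compute_capability}")
    return KERNELS[max(candidates)]

print(pick_kernel((6, 1)))  # fft_pascal
print(pick_kernel((7, 5)))  # fft_volta (newer device, newest known kernel)
```

The same selection could then be refined further by SM count and memory size, which is what the reverse-engineering reports suggest the library does with the board's resources.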