mersenneforum.org GTX 1180 Mars Volta consumer card specs leaked

 2018-05-10, 01:11 #1
tServo
"Marv"
May 2009
near the Tannhäuser Gate

613 Posts

GTX 1180 Mars Volta consumer card specs leaked

Found in the TechPowerUp database was this intriguing entry for the new Volta (or is it going to be called Turing?):

CUDA cores: 3584
base clock: 1405 MHz
boost clock: 1582 MHz
memory: 16 GB of GDDR6
mem clock: 1500 MHz (12000 MHz effective)
bandwidth: 384 GB/s
power draw: 200 watts !!!

Note the new memory type, GDDR6, and the modest power draw. No mention of FP64, but don't expect more than the usual 1/24 or 1/32 of FP32. Of course, this is all just rumor (but from a fairly reliable source), so take it FWIW.
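The leaked figures are at least self-consistent. A quick sanity check (note: the 256-bit bus width is an assumption, not in the leak; it is inferred so that the other numbers line up):

```python
# Hypothetical sanity check of the leaked specs. The 256-bit bus width
# is NOT in the leak; it is assumed here to make the figures consistent.
mem_clock_mhz = 1500            # base memory clock from the leak
effective_mtps = 12000          # effective data rate (8x base, typical of GDDR6)
bus_width_bits = 256            # assumption, not in the leak

# bandwidth = transfers per second * bytes moved per transfer
bandwidth_gbps = effective_mtps * 1e6 * bus_width_bits / 8 / 1e9
print(bandwidth_gbps)  # 384.0, matching the leaked 384 GB/s
```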
2018-05-10, 04:03   #2
petrw1
1976 Toyota Corona years forever!

"Wayne"
Nov 2006

2²·17·67 Posts

Quote:
 Originally Posted by tServo Found in the TechPowerUp database was this intriguing entry for the new Volta (or is it going to be called Turing?): CUDA cores: 3584, base clock: 1405, boost: 1582, memory: 16GB of GDDR6, mem clock: 1500 MHz, effective: 12000 MHz, bandwidth: 384GB/s, power draw: 200 watts !!! Note the new memory type, GDDR6, and the modest power draw. No mention of FP64, but don't expect more than the usual 1/24 or 1/32 of FP32. Of course, this is all just rumor (but from a fairly reliable source), so take it FWIW.
Other than the memory, the specs do not look a whole lot different from the 1080 Ti's.

 2018-06-02, 01:25 #3
storm5510
Random Account
"Norman D. Powell"
Aug 2009
Indiana, USA.

2³×5×47 Posts

I did a Google search on this. It seems to be a very large rumor mill at the moment. One article seems to hint at something in July; others say August or beyond.
 2018-06-02, 07:11 #4
fivemack
(loop (#_fork))
Feb 2006
Cambridge, England

18EF₁₆ Posts

There was supposed to be an nVidia presentation at the Hot Chips conference in mid-August (which will also be describing Cascade Lake and 'Fujitsu's HPC processor for the Post-K computer'), but it has been removed from the program at https://www.hotchips.org/program/ (it used to be in the 11:30 slot on day 1).
 2018-06-02, 10:53 #5
preda
"Mihai Preda"
Apr 2015

53F₁₆ Posts

OK, I'm going to hold off on buying myself a powerful Nvidia GPU until the new generation (11) is launched. My personal conspiracy theory is this: Nvidia realized they have the market mostly to themselves (unfortunately), and they see that the 2-year-old GPUs sell for $700 and up, so why cannibalize themselves by pushing a new gen onto the market, given that "how much more can they charge for one" is limited? So, in short, it's more profitable to keep selling the old series even if the new one is ready. Because, anyway, nobody else is in any rush to release anything (i.e. AMD).

 2018-06-02, 13:53 #6
mackerel
Feb 2016
UK

403₁₀ Posts

There's speculation that either it was an error (they weren't going to talk about it at all) or it was prematurely released, even if it was only a title. https://www.anandtech.com/show/12847...-for-hot-chips

 2018-06-13, 16:41 #7
tServo
"Marv"
May 2009
near the Tannhäuser Gate

613 Posts

Quote:
 Originally Posted by preda OK, I'm going to hold off on buying myself a powerful Nvidia GPU until the new generation (11) is launched. My personal conspiracy theory is this: Nvidia realized they have the market mostly to themselves (unfortunately), and they see that the 2-year-old GPUs sell for $700 and up, so why cannibalize themselves by pushing a new gen onto the market, given that "how much more can they charge for one" is limited? So, in short, it's more profitable to keep selling the old series even if the new one is ready. Because, anyway, nobody else is in any rush to release anything (i.e. AMD).
Interesting to note that the GPU price-gouging market has recently collapsed (well, eased), since cryptocurrencies are down 50% since January.

2018-06-14, 10:01   #8
M344587487

"Composite as Heck"
Oct 2017

769 Posts

Quote:
 Originally Posted by preda OK, I'm going to hold off buying myself a powerful Nvidia GPU until the new generation (11) is launched. My personal conspiracy theory is this: Nvidia realized they have the market mostly all for themselves (unfortunately), and they see that the 2y old GPUs sell for \$700 and up, so why cannibalize themselves by pushing a new gen on the market, given that "how much more can they charge for one" is limited. So, in short, it's more profitable to keep selling the old series even if the new one is ready. Because, anyway, nobody else is in any rush to release anything (i.e. AMD).
AMD being unable to compete on the same level is a definite reason for the slowdown. This Navi news doesn't bode well for the near-future prospect of proper competition either. I think AMD needs to go multi-die for a chance to compete, and for consumer hardware to go multi-die nvidia is going to have to go that route too. nvidia is in a position of power, they'll hold off on game-changing releases until AMD is on the verge of competing. That said, it looks like the 7nm GPUs in 2019 might be enough of a threat for nvidia to release an update this year.

Last fiddled with by M344587487 on 2018-06-14 at 10:02 Reason: logic

2018-06-14, 11:31   #9
preda

"Mihai Preda"
Apr 2015

17×79 Posts

Quote:
 Originally Posted by M344587487 AMD being unable to compete on the same level is a definite reason for the slowdown. This Navi news doesn't bode well for the near-future prospect of proper competition either. I think AMD needs to go multi-die for a chance to compete, and for consumer hardware to go multi-die nvidia is going to have to go that route too. nvidia is in a position of power, they'll hold off on game-changing releases until AMD is on the verge of competing. That said, it looks like the 7nm GPUs in 2019 might be enough of a threat for nvidia to release an update this year.
I'm an AMD fan, because they're open (open source). But I'm disappointed by their execution over the last 2 years. They need to get their s*t together fast (fast, like in 1y ago already). Let's hope they start focusing and executing.

Just an example: AMD's alternative to cuFFT used to be clFFT, an OpenCL implementation. AMD basically stopped all development on clFFT years ago. Instead, the single AMD developer (!) who used to work on clFFT moved on to produce the ROCm FFT, rocFFT. The result is that AMD now has two sub-par FFT libraries, and I can't make good use of either of them.
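For readers wondering why the FFT library matters here at all: large-integer multiplication, the core operation in Mersenne-number testing, is done as an FFT-based convolution, so the FFT is the hot loop. A toy illustration in Python/NumPy, squaring a number held as a base-10 digit array (real implementations use floating-point FFTs with far larger word sizes, e.g. the irrational-base discrete weighted transform):

```python
import numpy as np

def fft_square(digits):
    """Square a number given as base-10 digits, least significant first,
    using an FFT-based convolution (toy version of what cuFFT/rocFFT
    would accelerate at scale)."""
    n = 2 * len(digits)  # room for the full linear convolution
    f = np.fft.rfft(digits, n)
    # Pointwise squaring in the frequency domain = self-convolution.
    coeffs = np.rint(np.fft.irfft(f * f, n)).astype(np.int64)

    # Propagate carries to get back to single base-10 digits.
    out, carry = [], 0
    for c in coeffs:
        carry, d = divmod(int(c) + carry, 10)
        out.append(d)
    while carry:
        carry, d = divmod(carry, 10)
        out.append(d)
    while len(out) > 1 and out[-1] == 0:  # trim leading zeros
        out.pop()
    return out

# 123^2 = 15129; digits are least-significant-first
print(fft_square([3, 2, 1]))  # [9, 2, 1, 5, 1]
```

Here `np.fft.rfft` stands in for cuFFT/clFFT/rocFFT; on a GPU that one call dominates the runtime, which is why the state of these libraries matters so much.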

Last fiddled with by preda on 2018-06-14 at 11:31

2018-06-14, 14:17   #10
M344587487

"Composite as Heck"
Oct 2017

1100000001₂ Posts

Quote:
 Originally Posted by preda I'm an AMD fan, because they're open (open source). But I'm disappointed by their execution over the last 2 years. They need to get their s*t together fast (fast, like in 1y ago already). Let's hope they start focusing and executing. Just an example: AMD's alternative to cuFFT used to be clFFT, an OpenCL implementation. AMD basically stopped all development on clFFT many years ago. Instead, the single AMD developer (!) that used to work on clFFT moved on to produce the ROCm FFT, rocFFT. The result is, AMD now has two sub-par FFT libraries. I can't usefully make use of any of the two.
I agree that they've been lacking on software support for the longest time. Their drivers have been doing pretty well of late; hopefully that'll translate into better library support at some point, but who knows. Maybe they'll have the resources and the will to do so now that they seem to be doing well on some fronts.

 2018-06-21, 19:18 #11
tServo
"Marv"
May 2009
near the Tannhäuser Gate

613 Posts

Most of the rumors circulating now focus on late July, around the 30th, as the release date for the GTX 1180, or whatever it will be called. The "late" release date is being blamed on 2 factors:
(1) Nvidia has eschewed HBM2 memory for cost reasons and is going with GDDR6, a new type, which always means delays in ramping production.
(2) They are stockpiling tons of the new boards so they can flood the distribution channel and satisfy any demand, so prices don't go berserk like they did for the GTX 1080 Ti.

Nvidia has the best FFT implementation because they have put LOTS of work into it over a long time. It is NOT written in any HLL, or even in PTX; it is written in the lowest-level machine code for the architecture. There is code in there to detect which architecture it's running on and make appropriate adjustments, as well as what resources the board has (# of cores, etc.). This is from people who have painstakingly reverse engineered the code in order to find out why it is so fast.

