#34
Oct 2011
Maryland
2·5·29 Posts
You'll get the reports from Jerry once he gets it running, I'm sure. But it really sounds like this is a gamer card, not a compute card.
#35
"Jerry"
Nov 2011
Vancouver, WA
1,123 Posts
Quote:
Really, the triple CUDA cores looked really exciting, but now, with some real reviews and other info discussed here, it looks like the 680 may not be much better than a 580 for this type of work.

Edit: One has to wonder: if this card doesn't live up to the 680 name, what is nVidia thinking? I know I can't afford the Tesla and Quadro cards, and based on their performance I wouldn't really want one anyway. If Kyle is on the right track, I can't imagine what we're going to do. On the other hand, nVidia and AMD can both see where the technology is going... CUDA and OpenCL have enough of a backing that they have to know why some of us are buying the cards. It seems to me they're only hurting themselves if they don't make products that live up to the hype.

Last fiddled with by flashjh on 2012-03-23 at 03:23
#36
Oct 2011
Maryland
12216 Posts
AMD really does seem to be making a strong move towards the compute space, which makes this decision all the more curious.
#37
Mar 2003
Melbourne
5·103 Posts
At my level, I'm definitely concerned about work per unit of power consumption :)
I had to shift one of the PCs onto a different power circuit. In rough figures, if I replaced all my 580s/560 Tis with 680s, I'd be looking at a power saving of around 300 W - enough to run another card and part of a CPU :)

-- Craig
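Craig's ~300 W figure can be sanity-checked with a back-of-envelope sketch. The per-card board-power numbers below are assumptions (commonly published reference TDPs), and the six-card farm is purely illustrative - neither comes from the thread:

```python
# Assumed reference TDPs in watts (not figures from the post).
TDP = {"GTX 580": 244, "GTX 560 Ti": 170, "GTX 680": 195}

# Per-card change when swapping each model for a GTX 680.
delta = {c: TDP[c] - TDP["GTX 680"] for c in ("GTX 580", "GTX 560 Ti")}
print(delta)  # note a 560 Ti -> 680 swap actually *increases* draw

# Hypothetical farm of six 580s, chosen only to illustrate the scale.
farm = {"GTX 580": 6}
before = sum(TDP[c] * n for c, n in farm.items())
after = TDP["GTX 680"] * sum(farm.values())
print(before - after, "W saved")
```

Under these assumptions, six 580-to-680 swaps save about 294 W, which is in the ballpark of the quoted 300 W; the saving comes entirely from the 580s, since a 680 draws slightly more than a 560 Ti.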
#38
Banned
"Luigi"
Aug 2002
Team Italia
5·7·139 Posts
I'm seriously considering a GTX 580 as soon as the price drops... but first I'm waiting for some tests.

Luigi
#39
Mar 2010
41110 Posts
ATM, the GTX 680 behaves like an sm_21 GPU (though it should be sm_30?): it's great at gaming, but it's worse than sm_20 GPUs at FP, DP FP, and INT ops. However, reviews mostly ran FP and DP FP tests in their compute sections, so yes, we need our own tests to make a final judgement. It would be a shame if it turns out to be a pure gaming GPU (which would mean NV has turned greedy and is pushing compute folks towards Teslas).
#40
Oct 2011
7·97 Posts
I think this is probably due more to the CUDA vs. OpenCL differences: CUDA is better able to take advantage of the hardware than OpenCL. If you look at the ratings of cards on JamesH's site, the 6990 has 5.1 TFLOPS of capability vs 1.6 on the 580, yet the 580 outperforms it by over 10% on trial factoring (TF).
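The size of that utilization gap is worth making explicit. The peak TFLOPS figures come from the post above; the trial-factoring rates are hypothetical and only their ~10% ratio matters:

```python
# Peak single-precision throughput (TFLOPS), as quoted in the post.
peak = {"HD 6990": 5.1, "GTX 580": 1.6}

# Relative TF throughput: the 580 wins by ~10% per the post.
# Absolute values are placeholders; only the ratio is meaningful.
tf_rate = {"HD 6990": 1.0, "GTX 580": 1.1}

# Efficiency = achieved TF work per unit of peak FLOPS.
eff = {c: tf_rate[c] / peak[c] for c in peak}
gap = eff["GTX 580"] / eff["HD 6990"]
print(f"the 580 extracts {gap:.1f}x more TF work per peak FLOP")
```

Under these assumptions the 580 extracts roughly 3.5x more TF work per unit of peak throughput, which is the kind of margin a better-matched programming model (CUDA vs. OpenCL) could plausibly account for.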
#41
Mar 2010
6338 Posts
Of course NV's native programming interface is better than OpenCL.
However, even if CUDA apps on the GTX 680 turn out 10-20% faster, that's still a poor result in terms of performance per shader per GHz. I like to measure peak theoretical performance in integer ops rather than FP, since with FP it's sometimes possible to "cheat" by multiplying the figure by 2 or 3, while peak theoretical integer performance is always shaders × shader clock.

Last fiddled with by Karl M Johnson on 2012-03-23 at 15:04
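The shaders × shader clock metric above can be worked through for the two cards in question. The shader counts and clocks are assumptions (commonly published reference specs), not numbers from the thread:

```python
# Assumed reference specs; Kepler runs shaders at the core clock,
# Fermi at a doubled "hot" shader clock.
cards = {
    "GTX 580": {"shaders": 512, "shader_mhz": 1544},
    "GTX 680": {"shaders": 1536, "shader_mhz": 1006},
}

# Peak integer throughput = shaders x shader clock (10^9 int ops/s).
giops = {n: c["shaders"] * c["shader_mhz"] / 1000 for n, c in cards.items()}
for name, g in giops.items():
    print(f"{name}: {g:.0f} Gops/s peak integer")
```

By this measure the 680's raw integer peak is roughly double the 580's, so a real-world gain of only 10-20% would indeed mean each Kepler shader-GHz delivers far less useful work - which is the poster's point.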
#42
Mar 2010
3×137 Posts
If the Chinese keep their promise, I will get that GPU tomorrow!
I AM SO ANXIOUS! The mystery of its performance is driving me nuts.
#43
"Vincent"
Apr 2010
Over the rainbow
292010 Posts
Don't expect anything; that way you will always be pleasantly surprised.
If it holds its promise - as in three times the compute power of a 580 - you won't be able to saturate it. You will need at least two of the new Xeons.
#44
Jul 2003
27·5 Posts
Hi,
mfaktc v0.18 does not run on a GTX 680:

Code:
mfaktc v0.18 (64bit built)

Compiletime options
  THREADS_PER_BLOCK         256
  SIEVE_SIZE_LIMIT          32kiB
  SIEVE_SIZE                193154bits
  SIEVE_SPLIT               250
  MORE_CLASSES              enabled

Runtime options
  SievePrimes               25000
  SievePrimesAdjust         1
  NumStreams                3
  CPUStreams                3
  GridSize                  3
  WorkFile                  worktodo.txt
  Checkpoints               enabled
  CheckpointDelay           30s
  Stages                    enabled
  StopAfterFactor           bitlevel
  PrintMode                 full
  AllowSleep                no

CUDA version info
  binary compiled for CUDA  4.10
  CUDA runtime version      4.10
  CUDA driver version       4.20

CUDA device info
  name                      GeForce GTX 680
  compute capability        3.0
  maximum threads per block 1024
  number of mutliprocessors 8 (unknown number of shader cores)
  clock rate                705MHz

Automatic parameters
  threads per grid          1048576

running a simple selftest...
ERROR: cudaGetLastError() returned 8: invalid device function
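For what it's worth, CUDA error 8 ("invalid device function", cudaErrorInvalidDeviceFunction) typically means the binary contains no kernel image for the device's compute capability - here, a build against CUDA 4.10, which predates Kepler, has nothing for the 680's sm_30. A likely fix is rebuilding against a Kepler-aware toolkit with an sm_30 target. The `-gencode` lines below are standard nvcc options, but the source file name and whether mfaktc's Makefile exposes these flags are assumptions:

```shell
# Hypothetical rebuild against a toolkit that supports sm_30.
# kernel.cu is a placeholder for the actual mfaktc source file.
nvcc -O3 \
     -gencode arch=compute_20,code=sm_20 \
     -gencode arch=compute_30,code=sm_30 \
     -c kernel.cu
```

The `arch=compute_30,code=sm_30` pair embeds a Kepler kernel image alongside the Fermi one, so the same binary should then load on both generations.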