#89
"Oliver"
Sep 2017
Porta Westfalica, DE
2×11×61 Posts
They have had severe problems:
http://videocardz.com/newz/nvidia-is...y-out-of-stock
#90
"Carlos Pinho"
Oct 2011
Milton Keynes, UK
7·733 Posts
Following scan.co.uk, I can see people complaining about extra charges, or even, because of website lag, being charged up to 3x...
#91
"Composite as Heck"
Oct 2017
3×311 Posts
It's the now-standard launch farce. It could be partly down to being a paper launch, but the numbers aren't known; it just seems a bigger farce this time, as it's the most anticipated card of the year. There has been an amusing pushback against scalpers this time, namely people bidding single-card auctions up to five figures and then ghosting the seller.
#92
Random Account
Aug 2009
Not U. + S.A.
2,539 Posts
#93
"Carlos Pinho"
Oct 2011
Milton Keynes, UK
7×733 Posts
#94
"Viliam Furík"
Jul 2018
Martin, Slovakia
13×61 Posts
Quote:
All my calculations are about correct, assuming the numbers on the PrimeGrid forum were test results, not factoring results. If somebody wants to double-check my numbers, here are the values used:

R7 -> 108M - 0.92 ms/iter
2080Ti -> 93M - 2.4 ms/iter

The 3080 took 15.6 hours on a 21449434-digit number, so it's equivalent to a 71.2M Mersenne exponent.
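Viliam's digit-to-exponent conversion can be double-checked in a few lines. This is only a sketch: the per-iteration figure assumes one modular squaring per iteration (PRP/LL style), which is my assumption, not something stated in the post.

```python
import math

# A number with d decimal digits has about d * log2(10) bits, and for a
# Mersenne number M(p) = 2^p - 1 the bit count equals the exponent p.
# So a 21,449,434-digit test corresponds to roughly this exponent:
digits = 21_449_434
equiv_exponent = digits * math.log2(10)
print(f"{equiv_exponent / 1e6:.2f}M")  # about 71.25M; the post rounds to 71.2M

# Per-iteration time for the 3080, assuming one squaring per iteration,
# i.e. roughly `exponent` iterations over the 15.6-hour run:
hours = 15.6
ms_per_iter = hours * 3600 * 1000 / equiv_exponent
print(f"{ms_per_iter:.2f} ms/iter")  # about 0.79 ms/iter
```

Note this is a raw ms/iter figure at 71M; comparing it directly against the R7's 0.92 ms/iter at 108M would still need adjusting for FFT size.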
#95
Just call me Henry
"David"
Sep 2007
Liverpool (GMT/BST)
3·2,011 Posts
Quote:

Last fiddled with by henryzz on 2020-09-18 at 18:18
#96
Jul 2009
Germany
2⁵·3·7 Posts
Quote:

I'm sorry, but this is my opinion.

MOD: We don't know what you're trying to say, and the language you chose didn't help.

Last fiddled with by VBCurtis on 2020-09-18 at 21:03
#97
"Marv"
May 2009
near the Tannhäuser Gate
2²×3×67 Posts
WRT the Ampere architecture, most of it seems directed towards gamers and AI researchers. Still, I found 3 things in the latest release of the v11.0 CUDA toolkit that look VERY interesting:

(1) Asynchronous copy between global and shared memory. Optimizing memory operations is EVERYTHING in GPU coding, and memory copying has been a part of almost every well-written program. This has the potential to really pay off big time, since shared memory access is SO much faster than global.

(2) L2 cache management instructions. Again, paying careful attention to memory access patterns can be very rewarding.

(3) I'm not so sure this will pay off, but the warp matrix operations now support FP64. This needs to be approached cautiously, since these Tensor core operations are meant for AI stuff, which doesn't have the same accuracy requirements wrt rounding that we do.

Since these features are in the toolkit and not necessarily in the Ampere architecture itself, they can perhaps be retro-fitted to older video boards, maybe even going back to Pascal?

Last fiddled with by tServo on 2020-09-21 at 13:50
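Point (1) can be sketched with the cooperative-groups API that ships in the CUDA 11 toolkit. This is a minimal illustration, not anyone's production code: the kernel name, the 256-element tile, and the trivial reduction are all invented for the example. On Ampere the copy is performed in hardware without staging through registers; on older architectures the same call falls back to a software path.

```cuda
#include <cooperative_groups.h>
#include <cooperative_groups/memcpy_async.h>

namespace cg = cooperative_groups;

// Each block stages one tile of global data into shared memory
// asynchronously, then computes on it after the copy has landed.
__global__ void staged_sum(const float *in, float *out)
{
    __shared__ float tile[256];
    cg::thread_block block = cg::this_thread_block();

    size_t base = blockIdx.x * 256;

    // Kick off the async copy: global -> shared, no pass through registers.
    cg::memcpy_async(block, tile, in + base, sizeof(float) * 256);

    // All threads wait until the tile is fully resident in shared memory.
    cg::wait(block);

    // Placeholder work: thread 0 sums the tile.
    if (threadIdx.x == 0) {
        float s = 0.0f;
        for (int i = 0; i < 256; ++i)
            s += tile[i];
        out[blockIdx.x] = s;
    }
}
```

The win is overlap: between `memcpy_async` and `wait` a kernel can do independent work (or, with the fuller `cuda::pipeline` interface, double-buffer the next tile while computing on the current one).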
#98
"Marv"
May 2009
near the Tannhäuser Gate
2²×3×67 Posts
Correction to my prior post:
I think Ampere is required for these nifty new features.
#99
"TF79LL86GIMPS96gpu17"
Mar 2017
US midwest
2⁴×461 Posts