#100
"TF79LL86GIMPS96gpu17"
Mar 2017
US midwest
2⁴·3·163 Posts
Test any new installation of CUDALucas thoroughly: run -memtest, repeat a small known Mersenne prime, and complete at least one doublecheck assignment. Do such tests with console output redirected to a log file for later examination for errors. A roundoff error of 0.2 is not a problem. See some of the earlier entries in the CUDALucas bug and wish list at https://www.mersenneforum.org/showpo...24&postcount=3
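In concrete terms, a minimal test sequence might look like the sketch below, assuming a Windows console and the usual invocation of passing an exponent directly; 86243 is chosen because M86243 is a small known Mersenne prime, and the exact -memtest syntax may differ by build (check CUDALucas -h):

Code:
REM GPU memory test first; keep the log for later examination
CUDALucas.exe -memtest > memtest.log 2>&1

REM Repeat a small known Mersenne prime; the log should report it as prime
CUDALucas.exe 86243 > m86243.log 2>&1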
#101
Jan 2019
Florida
243₁₀ Posts
#102
"Kieren"
Jul 2011
In My Own Galaxy!
2×3×1,693 Posts
RTX 2060: $350 to $420
https://promotions.newegg.com/neemai...x-landing.aspx
#103
"/X\(‘-‘)/X\"
Jan 2013
https://pedan.tech/
2⁴×199 Posts
Kind of sad that I'm spending just under 600 watts (4 cards) to match your card.
#104
"Kieren"
Jul 2011
In My Own Galaxy!
2×3×1,693 Posts
EDIT: Neither card drives a display.

Last fiddled with by kladner on 2019-01-17 at 01:37
#105
"Sam Laur"
Dec 2018
Turku, Finland
100111101₂ Posts
The display card in my home machine started showing signs of dying over the weekend: a couple of system BSODs while watching YouTube, and it turns out my old GT430 now has a dead fan. Not worth fixing any more in my opinion, but I guess I'll keep it on the shelf for a few years in case I need a backup card for some other system. Of course, this gave me a good excuse to order an RTX 2060.

Unfortunately it's going into a Windows system, and I don't think the precompiled binary supports GPUSieveSize above 128, but I'll post comparison benchmarks against the 2080, on identical parameters in mfaktc.ini, as soon as I'm able to. It's an old case, though, with many hard disks and plenty of cable clutter, so airflow and thermal performance might be a bit underwhelming. Keeping the GPU cool reduces the leakage inside the chip, which reduces power draw, which in turn keeps the GPU even cooler... up to a limit, of course. It feels like power draw really goes off a cliff when going over 60 °C. Still, I'm expecting about 65% of the performance of the 2080 for 50% of the price, based purely on the number of CUDA cores.
So I did some thermal and power measurements on the 2080 at different fan speeds to see the effect in quantitative terms; feelings are nice, but they're no replacement for actual benchmark data. To be specific, the card is an MSI Ventus RTX 2080, standard edition, not "OC", with GPUSieveSize=1024 for better performance. (By the way, at 128 it produces less heat but also does less work; the net effect is that GHz-days/day per watt is better at GPUSieveSize=1024, at any GPU clock frequency.)

The default fan curve seemed to stay under 40% even at maximum power, which didn't allow running over 1800 MHz without hitting the power limit of 240 W. The specified TDP is 215 W, but nvidia-smi lets you set the power limit slightly higher than that. Maybe some Windows-based overclocking utilities would allow even higher boost clocks, maybe not. At a constant 60% fan speed, the maximum was 1830 MHz; at that speed the fan noise is still bearable, as most of it is just white noise from the airflow and there is not much motor whine. At 70% the motor whine appears; it's tolerable at work, but I wouldn't want something like that at home. The maximum frequency went up one notch, to 1845 MHz. Note that in none of these cases is the GPU thermally throttling; it only runs up against the hard 240 W power limit. The data is attached as a PDF in case anyone is interested.
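For anyone replicating the non-Windows side of this, the power-limit part can be set from the command line; a minimal sketch, assuming a recent Linux driver (note that nvidia-smi cannot control fan speed on GeForce cards, so the constant fan speeds above have to be set separately, e.g. with nvidia-settings and the Coolbits option):

Code:
# enable persistence mode, then raise the power limit (both need root)
nvidia-smi -pm 1
nvidia-smi -pl 240

# optionally pin the GPU clock range (supported on driver 410 and later)
nvidia-smi -lgc 300,1800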
#106
"TF79LL86GIMPS96gpu17"
Mar 2017
US midwest
2⁴·3·163 Posts
I suggest you try ~90000 for GPUSievePrimes, which is near the optimum I found for a GTX 1080 Ti. Also, have you considered trying GPUSieveSize 2048 or even 4096? The 512 to 1024 increment gave about a 1% rise in GHz-days/day, so there may be a bit more gain left, if it's actual trial factoring throughput that's being indicated.

Last fiddled with by kriesel on 2019-01-28 at 16:31
#107
"Sam Laur"
Dec 2018
Turku, Finland
317₁₀ Posts
Okay, so I'm running all the tests again with the same exponent and same settings (clock speed etc.) as before, just varying GPUSieveSize. The first value I tried was 2048, but no luck: the error below came up when trying to run the self tests. It's not a matter of actually running out of memory, either; mfaktc.exe only uses 405 MiB at the 2047 setting.

Code:
gpusieve.cu(1276) : CUDA Runtime API error 2: out of memory.

Amazingly, 2047 does pass the self tests. The result there is 3115, a 0.2% improvement over 1536; really minuscule by that point.

Then on to GPUSievePrimes. Maybe my earlier search for the optimum was a bit coarse. For these runs I went back to GPUSieveSize=1024 and increased the GPUSievePrimes value by about 4000 per step. For me, performance stayed roughly flat, with a barely perceptible decline at each step, but I didn't feel like going any higher than 110K (111158 adjusted, to be exact) because at that point performance was down 0.5%. Stepping down instead, I saw a very slight improvement at 78K (79158 adjusted), but as it was just +0.2% (under half a second over a 5-minute run) it could just as well be noise in the measurement. Below that, performance started declining again. So for this card at least, adjusting GPUSievePrimes away from the default brings no benefit.

These things are highly dependent on the GPU architecture, though. Who knows, maybe there's a way to make even better use of the faster INT32 on Volta and Turing, and of the fact that FP and INT operations can now run at the same time.
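For reference, each data point above comes from editing GPUSieveSize in mfaktc.ini and re-running the built-in self test; a sketch, assuming the standard -st option and console redirection to keep a log per setting:

Code:
mfaktc.exe -st > selftest-gss1024.log 2>&1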
#108
"Sam Laur"
Dec 2018
Turku, Finland
317 Posts
Pretty close; the actual ratio turned out to be 67%. Benchmarks at a few frequencies are again attached; the options are the same for both cards (the 2080 on Linux and the 2060 on Windows 7), so it was necessary to use GPUSieveSize=128 for these tests. The 2060 can be clocked higher, but my card seems to hit some limit at 1920 MHz and won't go any further without touching the overvolt settings, which I'm not really willing to do in the long run. Besides, it's already at the rated TDP at that point, so there's very little left to gain.
#109
"/X\(‘-‘)/X\"
Jan 2013
https://pedan.tech/
2⁴·199 Posts
#110
"Sam Laur"
Dec 2018
Turku, Finland
475₈ Posts
That's only valid if you're building a system just for this purpose, not upgrading a pre-existing one (like I did: GT430 out, RTX 2060 in, on an old 6-core Phenom system from 2011...).
Similar Threads
| Thread | Thread Starter | Forum | Replies | Last Post |
| Nvidia GTX 745 4GB ??? | petrw1 | GPU Computing | 3 | 2016-08-02 15:23 |
| Nvidia Pascal, a third of DP | firejuggler | GPU Computing | 12 | 2016-02-23 06:55 |
| AMD + Nvidia | TheMawn | GPU Computing | 7 | 2013-07-01 14:08 |
| Nvidia Kepler | Brain | GPU Computing | 149 | 2013-02-17 08:05 |
| What can I do with my nvidia GPU? | Surge | Software | 4 | 2010-09-29 11:36 |