#2784
Banned
"Luigi"
Aug 2002
Team Italia
1001100000001₂ Posts
Quote:
#2785
kriesel
"TF79LL86GIMPS96gpu17"
Mar 2017
US midwest
7824₁₀ Posts
Quote:
Windows or Linux, and which version?
#2786
Einyen
Dec 2003
Denmark
110101111100₂ Posts
Windows version of CUDALucas 2.06, but it happened with CUDA 5.5, 8.0 and 10.1, and both with a version I compiled myself and the "official" compiled version from the site.
It must be some very small error on the card, even though it passes all benchmarks and RAM tests. I stopped trying to run CUDALucas on it and am doing a bit of TF with mfaktc instead; it does find factors, and it finds all the factors in the extended mfaktc benchmark.

The only real memory test for GPUs I could find was GpuMemTest, but it only tests 4GB out of the 6GB on the card: http://www.programming4beginners.com/gpumemtest Others I tried were not real memory tests, only stress tests of the GPU. I wish there was a GPU memory test you could run for 24+ hours and that tests all the RAM, like the memory tests for normal RAM. This one only takes ~1 minute and has to be restarted again.
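A long-running, nearly-full-RAM soak test of that sort is not hard to sketch in plain CUDA. The outline below is only illustrative, not a validated tester: the driver headroom, the fill pattern, and the pass count are arbitrary choices, and a serious tool would also vary patterns and access orderings the way dedicated memory testers do. Code:
// Illustrative GPU RAM soak test (sketch only, not a substitute for a real memory tester).
// Grabs most of the free device memory, repeatedly writes a pattern and reads it back,
// and counts mismatched 64-bit words.
#include <cstdio>
#include <cstdint>
#include <cuda_runtime.h>

__global__ void fillPattern(uint64_t *buf, size_t n, uint64_t seed)
{
    size_t i = blockIdx.x * (size_t)blockDim.x + threadIdx.x;
    if (i < n)
        buf[i] = seed ^ (i * 0x9E3779B97F4A7C15ULL);   // cheap per-word pattern
}

__global__ void checkPattern(const uint64_t *buf, size_t n, uint64_t seed,
                             unsigned long long *errors)
{
    size_t i = blockIdx.x * (size_t)blockDim.x + threadIdx.x;
    if (i < n && buf[i] != (seed ^ (i * 0x9E3779B97F4A7C15ULL)))
        atomicAdd(errors, 1ULL);
}

int main()
{
    size_t freeB = 0, totalB = 0;
    cudaMemGetInfo(&freeB, &totalB);
    // Leave ~256 MB headroom for the driver/display; a single huge allocation may
    // still fail on a fragmented card, in which case reduce the size and retry.
    size_t bytes = (freeB > (256ULL << 20)) ? freeB - (256ULL << 20) : freeB / 2;
    size_t words = bytes / sizeof(uint64_t);

    uint64_t *dBuf = nullptr;
    if (cudaMalloc(&dBuf, words * sizeof(uint64_t)) != cudaSuccess) {
        printf("allocation of %zu MB failed\n", bytes >> 20);
        return 1;
    }
    unsigned long long *dErr = nullptr;
    cudaMalloc(&dErr, sizeof(unsigned long long));

    const int threads = 256;
    const int blocks  = (int)((words + threads - 1) / threads);
    printf("testing %zu MB of device memory\n", (words * sizeof(uint64_t)) >> 20);

    for (uint64_t pass = 0; pass < 1000; ++pass) {       // raise for a 24h+ run
        cudaMemset(dErr, 0, sizeof(unsigned long long));
        fillPattern<<<blocks, threads>>>(dBuf, words, pass);
        checkPattern<<<blocks, threads>>>(dBuf, words, pass, dErr);
        unsigned long long hErr = 0;
        cudaMemcpy(&hErr, dErr, sizeof(hErr), cudaMemcpyDeviceToHost);  // also syncs
        if (hErr)
            printf("pass %llu: %llu mismatched words\n",
                   (unsigned long long)pass, hErr);
    }
    cudaFree(dErr);
    cudaFree(dBuf);
    return 0;
}
Compile with something like nvcc -O2 -o gpusoak gpusoak.cu (file name is just an example) and leave it looping for as many hours as you want; any non-zero mismatch count points to bad device RAM.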
#2787
kriesel
"TF79LL86GIMPS96gpu17"
Mar 2017
US midwest
2⁴×3×163 Posts
Quote:
From the 2.05.1 readme:

Code:
-memtest s i    s = # of chunks of memory
                i = number of iterations
                tests s 25MB chunks of memory doing i repetitions of
                a 100,000 iteration loop on each of 5 different LL
                test related sets of data. Each iteration consists
                of copying a 25MB chunk of data, then re-reading
                and comparing that copy to the original.
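As an illustration of that syntax only (the chunk count and iteration count below are arbitrary picks, and the executable name may differ on your build), a run such as

Code:
CUDALucas.exe -memtest 200 5

would test 200 × 25 MB = 5000 MB of card memory, with 5 repetitions of the 100,000-iteration loop on each of the 5 LL-related data sets.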
More details on that gpu at the GPU RIP thread: https://www.mersenneforum.org/showth...561#post490561

Last fiddled with by kriesel on 2019-05-21 at 19:26
#2788
If I May
"Chris Halsall"
Sep 2002
Barbados
2·11²·47 Posts
Quote:
#2789
Einyen
Dec 2003
Denmark
2²×863 Posts
Quote:
Quote:
#2790
kriesel
"TF79LL86GIMPS96gpu17"
Mar 2017
US midwest
2⁴·3·163 Posts
Quote:
Three other things that come to mind that can use up memory: ECC, any other gpu app running, and the Windows display. What does NVIDIA-SMI.exe say about it? Sample:

Code:
"C:\Program Files\NVIDIA Corporation\NVSMI\nvidia-smi.exe"
Mon May 20 19:43:17 2019
+------------------------------------------------------+
| NVIDIA-SMI 353.30     Driver Version: 353.30         |
|-------------------------------+----------------------+----------------------+
| GPU  Name            TCC/WDDM | Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  Tesla C2075         TCC  | 0000:06:00.0     Off |                    0 |
| 60%   87C    P0     0W / 225W |    175MiB /  5375MiB |     99%      Default |
+-------------------------------+----------------------+----------------------+
|   1  Quadro K4000       WDDM  | 0000:42:00.0     Off |                  N/A |
| 50%   85C    P0    60W /  87W |    350MiB /  3072MiB |     99%      Default |
+-------------------------------+----------------------+----------------------+

Code:
CUDA reports 5316M of 5375M GPU memory free.
Reducing size to 212
Initializing memory test using 5300MB of memory on device 0...

On another gpu (no monitor attached to it):

Code:
CUDA reports 10988M of 11264M GPU memory free.
Initializing memory test using 10750MB of memory on device 0...

Last fiddled with by kriesel on 2019-05-21 at 22:01
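Side note: for just the memory numbers, nvidia-smi also has a compact query mode (these query fields are standard nvidia-smi options; run nvidia-smi --help-query-gpu to confirm what your driver version supports):

Code:
"C:\Program Files\NVIDIA Corporation\NVSMI\nvidia-smi.exe" --query-gpu=index,name,memory.total,memory.used,memory.free --format=csv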
#2791
Einyen
Dec 2003
Denmark
2²·863 Posts
Code:
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 430.39       Driver Version: 430.39       CUDA Version: 10.1     |
|-------------------------------+----------------------+----------------------+
| GPU  Name            TCC/WDDM | Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  GeForce GTX TIT...  WDDM | 00000000:02:00.0  On |                  N/A |
| 59%   55C    P0    97W / 250W |    185MiB /  6144MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+

Which is close to 5 GiB = 5*1024 MiB = 5120 MiB, so I thought that might be the maximum for CUDALucas, but I can see your test is using 10750MB... Anyway, I'll keep testing those 5075MB I can test.

Last fiddled with by ATH on 2019-05-21 at 22:42
#2792
kriesel
"TF79LL86GIMPS96gpu17"
Mar 2017
US midwest
2⁴×3×163 Posts
Quote:
#2793
Oct 2009
Ukraine
32 Posts
Hello! Could you help me find a Linux binary of the latest version of CUDALucas? I can't find it. Thanks.
#2794
kriesel
"TF79LL86GIMPS96gpu17"
Mar 2017
US midwest
17220₈ Posts
Quote:
Similar Threads
| Thread | Thread Starter | Forum | Replies | Last Post |
| Don't DC/LL them with CudaLucas | LaurV | Data | 131 | 2017-05-02 18:41 |
| CUDALucas / cuFFT Performance on CUDA 7 / 7.5 / 8 | Brain | GPU Computing | 13 | 2016-02-19 15:53 |
| CUDALucas: which binary to use? | Karl M Johnson | GPU Computing | 15 | 2015-10-13 04:44 |
| settings for cudaLucas | fairsky | GPU Computing | 11 | 2013-11-03 02:08 |
| Trying to run CUDALucas on Windows 8 CP | Rodrigo | GPU Computing | 12 | 2012-03-07 23:20 |