The residue should be a different non-zero hex string each iteration.
What's interesting is that this same binary works fine with a different GPU in the same system. I compiled against CUDA 6.5.
[QUOTE=airsquirrels;424924]I have a system with a Titan Z, a 590, and a 690 in it.
Previously all three were running mfaktc on both GPUs without issue. I switched the Z GPUs over to LL and that has been very successful; however, the 590 and 690 both return all 0x000000000000 interim residues and 0.0 error rates. No errors that I can see. Any idea what is causing this?[/QUOTE]

Nothing is wrong, you just discovered a series of Mersenne superprimes*.

----------------
* If a prime is the one that makes the last residue zero, then a superprime is the one which makes [U]all[/U] residues zero...
[QUOTE=bgbeuning;425020]Me too. Every iteration says residue = 0.
This is my first time running CUDALucas, so I did not know that was wrong. I compiled CUDALucas myself to get it to work, so I could easily have done something wrong. Maybe it could check for all residues being 0 and quit, saying something is broken.[/QUOTE]

Are you the user who tried to manually submit two different "is prime!" results today? I knew they weren't right, since the time between assignment and result was mere hours and there's no way a test could have run in that time. At least it gave us a chance to try out the email feature that fires when someone submits a new prime via the manual forms -- three times (one LL test was submitted twice, and a DC "is prime" was submitted once).

I know they're not right, but I'm running my own tests on the one-in-a-billion-million-gazillion chance that they happened to be prime by accident and coincidence; I'm sure they won't be. They were done with CUDALucas v2.05.1.

If anyone can think of a reason why CUDALucas would report a prime result after running for only a little bit (in one case it was only a couple of hours after the exponent, a 37M double-check, was assigned), please share it.

Meanwhile, if you're doing a test and it magically reports a prime after an improbably short period of time, don't try to submit it to the server. Fix the software issue, run a real test, and then we'll talk. LOL
I can read code.
What can I do? :smile:
[QUOTE=msft;425049]I can read code.
What can I do? :smile:[/QUOTE]

Is there any way to turn on additional debugging/logging? Or a debug build?
[QUOTE=airsquirrels;425050]Is there any way to turn on additional debugging/logging? Or a debug build?[/QUOTE]
No, read the source...
[QUOTE=msft;425051]No, read the source...[/QUOTE]
I have, but I'm not sure where to add debugging. cuFFT seems to work, timings for cufftbench and threadbench look correct, and kernel percentages look correct, but the -r residue self-tests all fail with all-zero residues on both the 590 and the 690. The Titan Z succeeds. The 690 is compute capability 3.0, the 590 is 2.0, and the Z is 3.5. CUDA is 6.5, driver is 352.30.
[QUOTE=airsquirrels;425052]CUDA is 6.5, driver is 352.30[/QUOTE]
Could you upgrade CUDA and the driver?
[QUOTE=msft;425053]Could you upgrade CUDA and the driver?[/QUOTE]
I will try upgrading the driver first; CUDA 6.5 is the highest version that does not have an mfaktc bug. I tried the simpleCUFFT sample from the CUDA samples, which succeeds on all cards.
Updated the driver to 352.79; same result. Correct -r residues on the Titan Z, incorrect on the 690/590.
Installing CUDA 7.5 on the system now...
Using CUDA 7.5 and the latest driver, I recompiled CUDALucas (make clean, make) and verified with ldd that it links against 7.5. Still the same result: 0x000000000000 residues on the 590 and 690.
Powered by vBulletin® Version 3.8.11
Copyright ©2000 - 2021, Jelsoft Enterprises Ltd.