1 Attachment(s)
[QUOTE=TheJudger;497720]Just uploaded a new set of binaries of mfaktc 0.21 for Windows using CUDA 10.0.[LIST][*][URL="https://mersenneforum.org/mfaktc/mfaktc-0.21/mfaktc-0.21.win.cuda100.zip"]mfaktc-0.21.win.cuda100.zip[/URL][*][URL="https://mersenneforum.org/mfaktc/mfaktc-0.21/mfaktc-0.21.win.cuda100.extra-versions.zip"]mfaktc-0.21.win.cuda100.extra-versions.zip[/URL][/LIST]If you're already running mfaktc 0.21 with an older CUDA version there is no need to upgrade; the sources of mfaktc are unmodified.
These CUDA 10.0 binaries are compiled for compute_30 (Kepler), compute_35 (Kepler Update), compute_50 (Maxwell), compute_60 (Pascal), compute_70 (Volta) and compute_75 (Turing). There is no support for compute_20 (Fermi) or older cards, and only 64-bit binaries are provided; the main purpose of these binaries is Volta and Turing GPUs. For the latter only 64-bit drivers are available, so the decision was easy. Happy factor hunting! Oliver P.S. I had no access to Volta or Turing GPUs running Windows - if someone has such a GPU running on Windows, please run the full selftest (e.g. mfaktc*.exe -st) for all 4 binaries and report the results. Thank you![/QUOTE] I have results for the binary mfaktc-win-64.exe: PASSED!
HOWEVER....
2 Attachment(s)
When I ran an actual factor assignment …
First the good news (I think): I was getting about 3,800 GHzDays/Day. But after about a minute of running, my screen pixelated like this (mfaktc 1), and then the PC crashed with this STOPCODE (mfaktc 2).
Hello,
is this behaviour repeatable? The pixelated screens [U]look[/U] like too much OC or a HW failure to me in the first place, but that is not the only option. Did you run some other workloads? Keep in mind that the selftest is a selftest for the software itself, not a stress test for the HW, and it doesn't put much load on the GPU. Oliver
[QUOTE=TheJudger;499042]Hello,
is this behaviour repeatable? The pixelated screens [U]look[/U] like too much OC or a HW failure to me in the first place, but that is not the only option. Did you run some other workloads? Keep in mind that the selftest is a selftest for the software itself, not a stress test for the HW, and it doesn't put much load on the GPU. Oliver[/QUOTE] Repeated 3 times the same evening. After the second I tried to install the driver that came with the new monitor (grasping at straws). I haven't tried any OC of the GPU...just stock settings. I did notice that the GHzDays/Day varied quite a bit, from about 2,500 to 3,800, over that minute before the crash. Should I try other exponent ranges or bit levels? Are there any config parameters that might be useful to try? Could it be a driver issue? I just didn't know where to start...you (mfaktc issue), NVIDIA (GPU issue), or my tech support (CPU/MB/RAM issue). In case it is relevant (though unlikely), it was built with 4x8GB RAM but Windows only sees 24GB. Thx
Hi,
I recommend testing other software in this case. I know that mfaktc in non-selftest mode easily hits the power target on an RTX 2080 Ti, so it is likely similar on other Turing cards. Maybe try something like FurMark to stress your GPU really hard. 24 out of 32 GiB looks like one memory module isn't detected; maybe a tool like CPU-Z can give a hint which one. I would go step by step: 1. fix the memory detection, 2. run memtest and/or a Prime95 torture test, 3. put some load on the GPU. Just the usual "how to test my system". Oliver
Launch GPU-Z or MSI Afterburner before you start mfaktc and watch the temperature and fan speed; it could be overheating. In Afterburner you can set a manual fan curve based on temperature; make sure the fan speed is at 100% at around 80°C or lower.
[url]https://www.techpowerup.com/gpuz/[/url] [url]https://www.msi.com/page/afterburner[/url]
[QUOTE=ATH;499066]Launch GPU-Z or MSI Afterburner before you start mfaktc and watch the temperature and fan speed; it could be overheating. In Afterburner you can set a manual fan curve based on temperature; make sure the fan speed is at 100% at around 80°C or lower.
[URL]https://www.techpowerup.com/gpuz/[/URL] [URL]https://www.msi.com/page/afterburner[/URL][/QUOTE] Amen to that, plus GPU-Z can log to a file. It looks something like this: [CODE]Date , GPU Core Clock [MHz] , GPU Memory Clock [MHz] , GPU Load [%] , Memory Usage (Dedicated) [MB] , CPU Temperature [°C] , System Memory Used [MB] ,
2018-10-29 19:35:36 , 448.9 , 478.8 , 0 , 141 , 72.0 , 3329 ,
2018-10-29 19:35:38 , 499.3 , 532.6 , 0 , 139 , 70.0 , 3330 ,
2018-10-29 19:35:41 , 499.3 , 532.6 , 5 , 161 , 64.0 , 3352 ,
2018-10-29 19:35:43 , 499.3 , 532.6 , 2 , 163 , 71.0 , 3355 ,
2018-10-29 19:35:46 , 499.3 , 532.6 , 3 , 161 , 72.0 , 3357 ,
2018-10-29 19:35:48 , 499.3 , 532.6 , 1 , 157 , 72.0 , 3350 ,
2018-10-29 19:35:51 , 510.0 , 544.0 , 3 , 159 , 68.0 , 3341 ,
2018-10-29 19:35:53 , 510.0 , 544.0 , 5 , 157 , 70.0 , 3336 ,
2018-10-29 19:35:56 , 510.0 , 544.0 , 3 , 156 , 74.0 , 3337 ,
2018-10-29 19:35:58 , 510.0 , 544.0 , 1 , 157 , 74.0 , 3338 ,[/CODE] (The example above is from a nearly idle Intel Arrandale IGP.) The significant variation in GHzD/day could indicate a high temperature causing the clock to throttle back; on old models it can cause 50% reductions. Another good app is HWMonitor, from CPUID, which will indicate and log various parameters. And there's nvidia-smi, which also has logging capability.
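[Editor's note: not part of the thread, but a minimal sketch of how such a log can be checked for throttling. The column names are taken from the GPU-Z example header above; the `flag_throttling` function, the sample data, and the 80% threshold are my own illustrative choices, not anything from GPU-Z or mfaktc.]

```python
import csv
import io

# Hypothetical helper, not part of GPU-Z or mfaktc: parse a GPU-Z style
# CSV log and flag samples where the core clock falls well below its peak,
# which can be a sign of thermal throttling. Column names follow the
# example header above; adjust them to match your own log.

SAMPLE_LOG = """\
Date , GPU Core Clock [MHz] , GPU Load [%] , CPU Temperature [°C]
2018-10-29 19:35:36 , 448.9 , 0 , 72.0
2018-10-29 19:35:38 , 499.3 , 0 , 70.0
2018-10-29 19:35:41 , 250.0 , 5 , 64.0
"""

def flag_throttling(log_text, drop_ratio=0.8):
    """Return timestamps where the core clock is below drop_ratio * peak."""
    rows = list(csv.DictReader(io.StringIO(log_text)))
    # GPU-Z pads fields with spaces, so strip every key and value.
    rows = [{k.strip(): v.strip() for k, v in r.items() if k is not None}
            for r in rows]
    clocks = [float(r["GPU Core Clock [MHz]"]) for r in rows]
    peak = max(clocks)
    return [r["Date"] for r, c in zip(rows, clocks) if c < drop_ratio * peak]

print(flag_throttling(SAMPLE_LOG))  # → ['2018-10-29 19:35:41']
```

On a real log you would read the file contents instead of the inline sample; a steadily sinking clock alongside a rising temperature column points at throttling rather than a software problem.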
My tech support was kind enough to test the GPU
1 Attachment(s)
Even though I didn't buy it from them, he ran FurMark on their own test machine and got the same "artifacting" I was getting running mfaktc.
He said the card was faulty, and I convinced NVIDIA support of the same, so they are going to replace it (oh well….tick tick). Also, they got the RAM fixed...it now recognizes all 32GB at 3600. So I ran a benchmark, but was surprised that the timings it sent to Prime95 are about the same as my son's i7-6700 with stock RAM.
[url]https://www.theinquirer.net/inquirer/news/3065361/nvidias-geforce-rtx-2080-ti-cards-are-reportedly-failing-in-high-numbers[/url]
[QUOTE=petrw1;499076]Even though I didn't buy it from them, he ran FurMark on their own test machine and got the same "artifacting" I was getting running mfaktc.
He said the card was faulty, and I convinced NVIDIA support of the same, so they are going to replace it (oh well….tick tick).[/QUOTE] Thank you for your followup report! I have the feeling that some people feel ashamed (for no reason) when their hardware is faulty and don't report back. Oliver
[QUOTE=TheJudger;499125]Thank you for your followup report! I have the feeling that some people feel ashamed (for no reason) when their hardware is faulty and don't report back.
Oliver[/QUOTE] And thanks much for the report and the various followups. I had been contemplating an RTX 2080, which I see in Mark's posted link is also affected. Hopefully NVIDIA gets things straightened out soon.
Powered by vBulletin® Version 3.8.11
Copyright ©2000 - 2021, Jelsoft Enterprises Ltd.