mersenneforum.org mfaktc: a CUDA program for Mersenne prefactoring

2020-02-07, 15:57   #3246
kriesel

"TF79LL86GIMPS96gpu17"
Mar 2017
US midwest

105B₁₆ Posts

RTX2080 Super mfaktc tune, likes 2047Mib GPUSieveSize or maybe more

It would apparently benefit from a further increase in max GPUSieveSize.
Code:
RTX2080 Super mfaktc tuning; 2047-enabled CUDA 10 version
M441000023 from 2^80 to 2^81
Starting from GPUSieveSize=2047, GPUSieveProcessSize=32, GPUSievePrimes=100000

2936.74   GPUSieveProcessSize=32  GPUSieveSize 2040
2936.37   GPUSieveProcessSize=24  GPUSieveSize 2040
2925.78   GPUSieveProcessSize=16  GPUSieveSize 2040
3021.20 * GPUSieveProcessSize=8   GPUSieveSize 2040
2996.12   GPUSieveProcessSize=16, GPUSievePrimes=100000

vary GPUSieveSize
GPUSieveSize=2047  2936.67 *
GPUSieveSize=1536  2914.86
GPUSieveSize=1024  2897.07
GPUSieveSize=512   2810.90
GPUSieveSize=256   2648.74

GPUSieveProcessSize=16, GPUSieveSize=2047
GPUSievePrimes=80000   2931.64
GPUSievePrimes=90000   2936.92
GPUSievePrimes=100000  2936.43
GPUSievePrimes=94000   2930.264
GPUSievePrimes=92000   2937.64 *

2 instances
1 1506.48 tuned
2 1475.14 tuned
total 2981.62
2981.62 / 2937.64 = 1.015 ratio 2-instance/1-instance

A third instance would probably help a little. nvidia-smi indicates 99% GPU load, not 100%, indicating there's still some untapped capacity with tune plus two-instance operation; 98% with one.

recheck later, unchanged tune
1 1497.12
2 1486.10
total 2983.22, still 99%; 97-102% TDP, up to 1875 MHz

Last fiddled with by kriesel on 2020-02-07 at 15:58
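For readers wanting to reproduce the tune above, the winning values map onto mfaktc.ini roughly as follows. This is a sketch, not kriesel's actual file: GPUSieveProcessSize=16 with GPUSievePrimes=92000 was the best single-instance combination in the final pass, and GPUSieveSize=2047 requires the 2047-enabled build mentioned in the post.

```ini
# Sketch of the tuned settings from the RTX 2080 Super runs above;
# GPUSieveSize=2047 needs the 2047-enabled CUDA 10 build.
SieveOnGPU=1
GPUSievePrimes=92000
GPUSieveSize=2047
GPUSieveProcessSize=16
```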
2020-04-14, 16:09   #3247
storm5510
Random Account

Aug 2009
U.S.A.

2³·3·5·11 Posts

Quote:
 Originally Posted by MrRepunit Hi all, just want to announce here in this thread that I finished the generalized repunits version of mfaktc, the long wished for generalization of my base 10 repunits mfaktc variant. It supports only positive bases (b>=2). I did not want to hijack this thread, so I created my own: https://www.mersenneforum.org/showthread.php?t=24901 Feel free to test it.
I have not been here in quite some time. Have there been any updates to gr-mfaktc? I seem to recall a throughput issue compared to the CUDA10 version of mfaktc, which runs above 1000 GHz-days/day; gr-mfaktc ran in the low 700s. This is on my hardware: a GTX 1080 with Win 10 v1903.

Last fiddled with by storm5510 on 2020-04-14 at 16:10 Reason: Additional

2020-04-15, 09:52   #3248
kriesel

"TF79LL86GIMPS96gpu17"
Mar 2017
US midwest

53·79 Posts

I've been running into GPU stoppages lately on my little miner rig. It's not clear what's cause and what's effect, but I'm seeing the mfaktc instances stop working, mmff too, Windows TDR events, and the known-bad-factor occur. Sometimes the failures are when switching GPU-Z from tab to tab.
Code:
Date   Time  | class  Pct  |   time    ETA  | GHz-d/day Sieve Wait
Apr 14 22:38 | 3273  70.8% | 43.799   3h24m |    757.24 92725 n.a.%
M332233123 has a factor: 38814612911305349835664385407
ERROR: cudaGetLastError() returned 6: the launch timed out and was terminated

batch wrapper reports mfaktc exited at Tue 04/14/2020 22:38:56.98
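For what it's worth, CUDA error 6 is cudaErrorLaunchTimeout: on Windows, the driver's Timeout Detection and Recovery (TDR) mechanism aborts any kernel that holds the GPU longer than the watchdog limit (2 seconds by default), which matches the TDR events noted above. If the stoppages are watchdog-driven rather than hardware, Microsoft's documented TdrDelay registry value can raise the limit; a sketch (registry editing is at your own risk, and a reboot is required):

```
Windows Registry Editor Version 5.00

; Raise the GPU watchdog timeout from the default 2 s to 10 s
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\GraphicsDrivers]
"TdrDelay"=dword:0000000a
```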
2020-04-25, 23:17   #3249
storm5510
Random Account

Aug 2009
U.S.A.

2³·3·5·11 Posts

Quote:
 Originally Posted by kriesel I've been running into gpu stoppages lately on my little miner rig. It's not clear what's cause and what's effect, but I'm seeing the mfaktc instances stop working, mmff too, Windows TDR events, and the known-bad-factor occur. Sometimes the failures are when switching gpu-z from tab to tab. Code: Date Time | class Pct | time ETA | GHz-d/day Sieve Wait Apr 14 22:38 | 3273 70.8% | 43.799 3h24m | 757.24 92725 n.a.% M332233123 has a factor: 38814612911305349835664385407 ERROR: cudaGetLastError() returned 6: the launch timed out and was terminated batch wrapper reports mfaktc exited at Tue 04/14/2020 22:38:56.98
Are you over-clocking? What I see in the first paragraph would tend to indicate you are. If so, ease off the throttle a little.

2020-04-25, 23:37   #3250
kriesel

"TF79LL86GIMPS96gpu17"
Mar 2017
US midwest

53·79 Posts

Quote:
 Originally Posted by storm5510 Are you over-clocking? What I see in the first paragraph would tend to indicate you are. If so, ease off the throttle a little.
Nope; I never overclock. I'll underclock on occasion to get reliability. The results you responded about were stock clock throughout.

2020-04-25, 23:50   #3251
James Heinrich

"James Heinrich"
May 2004
ex-Northern Ontario

29×101 Posts

Quote:
 Originally Posted by kriesel Nope; I never overclock. I'll underclock on occasion to get reliability. The results you responded about were stock clock throughout.
Still, the point may be valid: I find that hardware (both GPUs and CPUs) tends to "wear out" and tolerate only lower clocks/heat as the years go by, regardless of cleanliness & cooling. It would not hurt to try underclocking the questionable GPU and see if the problem goes away.

Last fiddled with by James Heinrich on 2020-04-26 at 16:48 Reason: forgot "not"

2020-04-26, 07:54   #3252
kriesel

"TF79LL86GIMPS96gpu17"
Mar 2017
US midwest

53×79 Posts

Quote:
 Originally Posted by James Heinrich Still, the point may be valid, I find that hardware (both GPUs and CPUs) tend to "wear out" and tolerate lower clocks/heat as the years go by, irrelevant of cleanliness & cooling. It would hurt to try underclocking the questionable GPU and see if the problem goes away.
Thanks. It was a new GPU on a new 1200 W PSU. The wrong factor has not occurred in the past 10 days, with no clock change. That GPU is still on the system, but various extenders have been shuffled around. It might have been a failing PCIe extender pad.

2020-04-26, 15:35   #3253
storm5510
Random Account

Aug 2009
U.S.A.

10100101000₂ Posts

Quote:
 Originally Posted by kriesel Nope; I never overclock. I'll underclock on occasion to get reliability. The results you responded about were stock clock throughout.
I never did either. I did not want to risk damaging something.

Quote:
 Originally Posted by James Heinrich Still, the point may be valid, I find that hardware (both GPUs and CPUs) tend to "wear out" and tolerate lower clocks/heat as the years go by, irrelevant of cleanliness & cooling. It would hurt to try underclocking the questionable GPU and see if the problem goes away.
I've had my GTX 1080 for two years now. It does not perform the way it used to. What was once 1200+ GHz-days/day using some programs is now down to 950 to 1050. I believe James left out a word in what he wrote above: "It would not hurt to try..." I do not run mine at 100% capacity now and haven't for quite some time. I use MSI Afterburner to reduce it to 80% of capacity. Doing so has just a slight effect on throughput and reduces the operating temperature by 12°C, on average.

 2020-05-02, 21:25 #3254 lalera     Jul 2003 1001001000₂ Posts Hi, I have a GTX 580 that is about 8 years old. It runs at nearly the same speed as when new, but at higher temperatures. Sometimes I had problems with drivers. I sometimes use this card for trial factoring with mfaktc.
 2020-05-06, 23:11 #3255 ZFR     Feb 2008 Bray, Ireland 47 Posts Sorry to derail the thread a bit, but quick question: does changing CheckpointDelay to a lower value like 5 or 10 add a lot of overhead? Will it affect performance? Thanks.
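For context: mfaktc's checkpoint is a small per-exponent text file rewritten at most once per CheckpointDelay seconds, so shortening the interval should add only a little host-side file I/O rather than GPU overhead (an assumption worth benchmarking on your own system). A sketch of the relevant mfaktc.ini settings, using the stock option names:

```ini
# mfaktc.ini (excerpt); defaults may differ between versions
Checkpoints=1        # enable checkpoint files
CheckpointDelay=10   # minimum seconds between checkpoint writes
```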
2020-05-09, 02:46   #3256

"Kieren"
Jul 2011
In My Own Galaxy!

9,923 Posts

Quote:
 Originally Posted by lalera hi, i have a gtx 580 that is about 8 years old running nearly the same speed as new but at higher temperatures sometimes i had problems with drivers i use this card sometimes for trial factoring with mfaktc
Thermal compound is probably shot. If you went after that, you could also deep-clean the cooler.
