mersenneforum.org (https://www.mersenneforum.org/index.php)
-   GPU Computing (https://www.mersenneforum.org/forumdisplay.php?f=92)
-   -   mfaktc: a CUDA program for Mersenne prefactoring (https://www.mersenneforum.org/showthread.php?t=12827)

TheJudger 2013-02-13 18:42

[QUOTE=flashjh;329252]Where did you get a K20? How does it perform with mfaktc?[/QUOTE]

A bit faster than my GTX 680. For some (unknown) reason CC 3.5 is worse than CC 3.0 (comparing performance / (number of cores * clock rate))...
A GTX 580 is still the fastest GPU for mfaktc.
I did a quick test with CUDALucas, too; the K20 seems a small margin faster than a stock GTX 580 even with ECC enabled.

Oliver

TheJudger 2013-02-24 17:51

[QUOTE=TheJudger;329142]P.S. my GTX 680 stopped working on sunday... :sad:[/QUOTE]

Well... I've received my replacement yesterday. Different "vendor", again reference design... after two hours playing Diablo 3 the card stopped working. After one hour the problems started: the game was somehow laggy (low framerates for fractions of a second, then full performance for fractions of a second, then low framerates again). I've checked temperatures (below 70°C for the GPU) and power target (<75% while playing Diablo 3). After two hours: blinking pixels, corrupt triangles/textures and a black display for a couple of seconds (nvlddmkm reloaded).
So either I had bad luck (two defective GTX 680s) or my system kills GTX 680s (but I can't imagine how, and why only 680s).
[LIST]
[*]Both GTX 680s failed in my main rig (Asus P8Z68-V/Gen3, i7 3770k); the errors are reproducible in my secondary rig (Intel DX58SO, Xeon W3690).
[*]GTX 275 and GTX 470 run fine in both systems.
[*]I used the GTX 470 for about a year in the P8Z68-V/Gen3 until I decided to upgrade to the GTX 680.
[*]The GTX 470 consumes more power than the GTX 680s; the power supply is a 665W single-rail unit.
[/LIST]
I took care to avoid ESD while assembling the system.

Oliver

P.S. new .plan for mfaktc 0.21: The features planned for 0.21 are moved to 0.22. 0.21 features support for Wagstaff numbers.

Redarm 2013-02-24 18:46

please check if the pci-e connector is slightly black

kracker 2013-02-24 19:12

How is your power supply?

TheJudger 2013-02-24 21:31

[QUOTE=Redarm;330818]please check if the pci-e connector is slightly black[/QUOTE]

Perfect condition, I've already checked this.

[QUOTE=kracker;330824]How is your power supply?[/QUOTE]

665W single-rail in both systems (Supermicro PWS-665-PQ, 54A @ 12V); power consumption for the i7 system is below 300W. The W3690 with the GTX 470 consumes up to ~400W.

Oliver

Rodrigo 2013-03-05 06:24

GPU temps with mfaktc
 
I just installed an NVIDIA GeForce GT 630 in an HP dx-7500 Microtower (Core2 Duo E7600, Vista Business x86) and am doing some testing. Two things I've noticed so far:

1. Prime95 running on both cores doesn't seem to be affected by mfaktc running at all -- the LL per-iteration times remain unchanged. Is this expected behavior? I'd thought that it was necessary to "dedicate" a CPU core to mfaktc. (I'm getting 44+ GHz-days/day on the 640.)

2. However, according to the CPUID Hardware Monitor, the GPU's temperature with mfaktc running goes from a baseline of 40C to as high as 83C, with a steady level at 82C. Is this excessive, or normal? (The GPU fan is running at 74%. Other temperature sensor readings don't change all that much.)
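(A reading like this can also be polled from the command line. A minimal sketch, assuming NVIDIA's nvidia-smi tool is installed; the query field names are my assumption for a reasonably modern driver:)

```python
import subprocess

def parse_gpu_status(line):
    """Parse one CSV line of 'temperature.gpu, fan.speed' output,
    e.g. '82, 74 %' -> (82, 74)."""
    temp_s, fan_s = [field.strip() for field in line.split(",")]
    return int(temp_s), int(fan_s.rstrip(" %"))

def read_gpu_status():
    # Hypothetical invocation; exact fields depend on the driver version.
    out = subprocess.check_output(
        ["nvidia-smi",
         "--query-gpu=temperature.gpu,fan.speed",
         "--format=csv,noheader"],
        text=True)
    return parse_gpu_status(out.splitlines()[0])

if __name__ == "__main__":
    temp, fan = read_gpu_status()
    print(f"GPU: {temp} C, fan {fan}%")
```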

Thanks for any insights or info.

Rodrigo

Batalov 2013-03-05 07:01

That's (sort of) normal (the 82-83 C temps).

However, you can lower your temperature (and the fan noise) without losing much throughput by lowering the memory clock. Lower it in steps of 100MHz (after each step, wait and listen for a couple of minutes; once you cross a certain zone you will hear the fans spin down; watch the mfaktc window at the same time), then go back up in steps of 10MHz. Your mileage may vary, though.
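(The coarse-down / fine-up procedure above can be sketched as a loop. `set_mem_clock` and `fan_is_quiet` are hypothetical hooks standing in for your overclocking tool and your ears/monitoring; this is a sketch, not part of mfaktc:)

```python
def find_quiet_clock(start_mhz, set_mem_clock, fan_is_quiet,
                     coarse=100, fine=10, floor=400):
    """Coarse-down / fine-up search for the highest 'quiet' memory clock."""
    clock = start_mhz
    # Step down in coarse (100 MHz) increments until the fans spin down.
    while clock - coarse >= floor:
        clock -= coarse
        set_mem_clock(clock)
        if fan_is_quiet():
            break
    # Step back up in fine (10 MHz) increments while it stays quiet.
    while clock + fine <= start_mhz:
        set_mem_clock(clock + fine)
        if not fan_is_quiet():
            set_mem_clock(clock)  # back off to the last quiet setting
            break
        clock += fine
    return clock
```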

LaurV 2013-03-05 07:52

[QUOTE=Rodrigo;332030]
1. Prime95 running on both cores doesn't seem to be affected by mfaktc running at all -- the LL per-iteration times remain unchanged. Is this expected behavior? I'd thought that it was necessary to "dedicate" a CPU core to mfaktc. (I'm getting 44+ GHz-days/day on the 640.)
[/QUOTE]
Before mfaktc 0.20, yes: the CPU was used for sieving the factor candidates. With v0.20 and later, the GPU does the sieving too, so the CPU is (almost) totally free for other tasks (P95). So yes, that is normal behavior if you run mfaktc v0.20.

[QUOTE=Rodrigo;332030]
2. However, according to the CPUID Hardware Monitor, the GPU's temperature with mfaktc running goes from a baseline of 40C to as high as 83C, with a steady level at 82C. Is this excessive, or normal? (The GPU fan is running at 74%. Other temperature sensor readings don't change all that much.)[/QUOTE]
Around 80C is "normal" for that card, in the sense that such a temperature won't damage it. But the hotter it runs, the more power it draws and the less work it does. You may try playing with the mfaktc ini file to get the occupancy down (say, from 98-99% to 95-97%). Your computer (in case the card drives your display too) will become more responsive, cooler and less noisy, for a small 2-3% sacrifice in output.
As Batalov said, your mileage may vary.

Rodrigo 2013-03-05 18:03

Thanks Batalov and LaurV for the information and useful suggestions.

I'll look into how to adjust the clock. As for tweaking the INI file, which values should one consider changing for these purposes?

Rodrigo

Batalov 2013-03-05 19:59

There are two aspects:

1. The application cannot change the memory clock (or other clocks/frequencies); this is not in the .ini. You can do that with system tools, which will handle the access rights for you. You won't be able to do it unless your account has administrator rights, or unless you can right-click on the tool (MSI Afterburner, Gigabyte's tool (I don't remember the name), EVGA Precision) and "Run as administrator". Run it and adjust specifically the "memory clock", not the shader etc. clocks. It may be that it is the memory that gets hot, even though it seems to be used mainly for register spills (most of the work happens in the registers). Well, I don't have a good explanation. Maybe the authors observed that and would comment?

2. From the application's .ini parameters you can control some behavior that affects responsiveness. One parameter that many people change is GPUSieveSize, from GPUSieveSize=16 to GPUSieveSize=8.
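(In concrete terms, that is a one-line edit in mfaktc.ini; GPUSieveSize is the parameter named above, the comment is mine:)

```ini
# Reduce the GPU sieve size to improve desktop responsiveness
# (costs a few percent of throughput).
GPUSieveSize=8
```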

Rodrigo 2013-03-06 01:14

Very good, I'll experiment with different values of GPUSieveSize.

I'll go to the NVIDIA site and see what turns up with respect to over/underclocking.

Rodrigo

