mersenneforum.org  

Go Back   mersenneforum.org > Great Internet Mersenne Prime Search > Hardware > GPU Computing

Old 2013-11-15, 03:03   #1970
LaurV
Romulan Interpreter
 
LaurV's Avatar
 
Jun 2011
Thailand

3×3,221 Posts
Default

Well, if it gives me negative credit numbers when I report the results*, I won't object too much



-----
* like, instead of getting 30 GHzDays when I report a DC, I would get -30 GHzDays, which in hex (per Windows Calculator) would be FFFF FFFF FFFF FFE2, or 18446744073709551586 GHzDays... In fact I would be OK even with a 32-bit value... (4294967266 GHzDays)
LaurV is offline   Reply With Quote
Old 2013-11-15, 11:54   #1971
ET_
Banned
 
ET_'s Avatar
 
"Luigi"
Aug 2002
Team Italia

61·79 Posts
Default

Quote:
Originally Posted by LaurV View Post
Well, if it gives me negative credit numbers when I report the results*, I won't object too much



-----
* like, instead of getting 30 GHzDays when I report a DC, I would get -30 GHzDays, which in hex (per Windows Calculator) would be FFFF FFFF FFFF FFE2, or 18446744073709551586 GHzDays... In fact I would be OK even with a 32-bit value... (4294967266 GHzDays)
2^32 - 6GB = -2GB

Luigi
ET_ is offline   Reply With Quote
Old 2013-11-15, 14:29   #1972
James Heinrich
 
James Heinrich's Avatar
 
"James Heinrich"
May 2004
ex-Northern Ontario

2³×149 Posts
Default

How much RAM does the card actually have? 2GB? 4GB?

Last fiddled with by James Heinrich on 2013-11-15 at 14:30
James Heinrich is offline   Reply With Quote
Old 2013-11-15, 14:37   #1973
owftheevil
 
owftheevil's Avatar
 
"Carl Darby"
Oct 2012
Spring Mountains, Nevada

3²×5×7 Posts
Default

Quote:
Originally Posted by Antonio View Post
CUDALucas seems to think my GT640 has a hole where its memory should be, it says I have (minus)2GiB totalGlobalMem!
I find this strangely disquieting for software which is dealing with large numbers
This should be reported correctly in 2.05.
owftheevil is offline   Reply With Quote
Old 2013-11-15, 22:52   #1974
Antonio
 
Antonio's Avatar
 
"Antonio Key"
Sep 2011
UK

3²×59 Posts
Default

Quote:
Originally Posted by owftheevil View Post
This should be reported correctly in 2.05.
The -2GiB is reported by 2.05-Beta-x64, downloaded today.
The card has +2GiB of memory installed.
Antonio is offline   Reply With Quote
Old 2013-11-16, 04:51   #1975
kladner
 
kladner's Avatar
 
"Kieren"
Jul 2011
In My Own Galaxy!

2×3×1,693 Posts
Default

I am having a pretty rough time with 2.05 on the GTX 580.
'CUDALucas -cufftbench 1 8192 1' crashes on the GTX 580 and brings down graphics driver 327.23 (which then restarts). 782 MHz core, 1600 MHz VRAM.

This is just the latest test of many. Occasionally, the test completes.

Rolling back the driver from 331.65 to 327.23 made no difference.

2.04-beta successfully completes
'CUDALucas -cufftbench 32768 3276800 32768' and has turned in good DCs at 830 MHz core, 1600 VRAM.

I haven't yet tried running a DC on 2.05-beta.

I have the card throttled back from its usual mfaktc clock to stock: from 844 MHz down to 782 MHz. The RAM is 400 MHz below stock.

Any suggestions would be appreciated.

EDIT: Tried running an exponent, 30651xxx on 2.05, 830 MHz core, 1600 VRAM. Started with 1728K, instead of stepping up to it from 1600K, as 2.04 did. Crashed a bit after the 40,000th iteration.

Code:
Iteration 10000 M( 30651671 )C, 0x6b79bd6d5adfb7de, n = 1728K, CUDALucas v2.05 Beta err = 0.05396 (0:26 real, 2.8857 ms/iter, ETA 24:33:42)
Iteration 20000 M( 30651671 )C, 0x53064732900985e9, n = 1728K, CUDALucas v2.05 Beta err = 0.06055 (0:26 real, 2.6133 ms/iter, ETA 22:14:09)
Iteration 30000 M( 30651671 )C, 0xe85abecfe0f40dce, n = 1728K, CUDALucas v2.05 Beta err = 0.05469 (0:26 real, 2.6123 ms/iter, ETA 22:13:12)
Iteration 40000 M( 30651671 )C, 0xa4208cf27dd73713, n = 1728K, CUDALucas v2.05 Beta err = 0.06250 (0:26 real, 2.6123 ms/iter, ETA 22:12:48)
CUDALucas.cu(310) : cudaSafeCall() Runtime API error 30: unknown error.

Last fiddled with by kladner on 2013-11-16 at 05:11 Reason: th
kladner is offline   Reply With Quote
Old 2013-11-16, 07:05   #1976
Antonio
 
Antonio's Avatar
 
"Antonio Key"
Sep 2011
UK

213₁₆ Posts
Default

Quote:
Originally Posted by kladner View Post
I am having a pretty rough time with 2.05 on the GTX 580.
'CUDALucas -cufftbench 1 8192 1' crashes on the GTX 580 and brings down graphics driver 327.23 (which then restarts). 782 MHz core, 1600 MHz VRAM.

This is just the latest test of many. Occasionally, the test completes.

Rolling back the driver from 331.65 to 327.23 made no difference.

2.04-beta successfully completes
'CUDALucas -cufftbench 32768 3276800 32768' and has turned in good DCs at 830 MHz core, 1600 VRAM.

I haven't yet tried running a DC on 2.05-beta.

I have the card throttled back from its usual mfaktc clock to stock: from 844 MHz down to 782 MHz. The RAM is 400 MHz below stock.

Any suggestions would be appreciated.

EDIT: Tried running an exponent, 30651xxx on 2.05, 830 MHz core, 1600 VRAM. Started with 1728K, instead of stepping up to it from 1600K, as 2.04 did. Crashed a bit after the 40,000th iteration.
I tried your exponent on my GT 640, graphics driver 331.65, and it ran ok past your fail point (see below). So it looks like you may still have a hardware problem.

Code:
Iteration 10000 M( 30651671 )C, 0x6b79bd6d5adfb7de, n = 1728K, CUDALucas v2.05 Beta err = 0.05859 (2:21 real, 15.7012 ms/iter, ETA 133:38:30)
Iteration 20000 M( 30651671 )C, 0x53064732900985e9, n = 1728K, CUDALucas v2.05 Beta err = 0.05664 (2:37 real, 15.6991 ms/iter, ETA 133:34:50)
Iteration 30000 M( 30651671 )C, 0xe85abecfe0f40dce, n = 1728K, CUDALucas v2.05 Beta err = 0.05469 (2:37 real, 15.6950 ms/iter, ETA 133:30:06)
Iteration 40000 M( 30651671 )C, 0xa4208cf27dd73713, n = 1728K, CUDALucas v2.05 Beta err = 0.05640 (2:37 real, 15.6912 ms/iter, ETA 133:25:34)
Iteration 50000 M( 30651671 )C, 0x056716c2203b5c29, n = 1728K, CUDALucas v2.05 Beta err = 0.05859 (2:37 real, 15.6935 ms/iter, ETA 133:24:08)
Iteration 60000 M( 30651671 )C, 0x5da5d75c80f2587c, n = 1728K, CUDALucas v2.05 Beta err = 0.05469 (2:37 real, 15.6927 ms/iter, ETA 133:21:05)
(I know it's slow - but that DDR3 memory is incredibly reliable)
Antonio is offline   Reply With Quote
Old 2013-11-16, 07:47   #1977
kladner
 
kladner's Avatar
 
"Kieren"
Jul 2011
In My Own Galaxy!

2×3×1,693 Posts
Default

Of course, it could always be hardware, but I did subsequently run the exponent up to ~750K iterations with a more aggressive core clock (830 MHz). As mentioned previously, the VRAM at 1600 MHz has performed well in the past with 2.04-beta. I have the libraries up through 5.5.

Still just tossing things out there.
kladner is offline   Reply With Quote
Old 2013-11-16, 08:20   #1978
Antonio
 
Antonio's Avatar
 
"Antonio Key"
Sep 2011
UK

1000010011₂ Posts
Default

Quote:
Originally Posted by kladner View Post
Of course, it could always be hardware, but I did subsequently run the exponent up to ~750K iterations with a more aggressive core clock (830 MHz). As mentioned previously, the VRAM at 1600 MHz has performed well in the past with 2.04-beta. I have the libraries up through 5.5.

Still just tossing things out there.

Just a thought: are you using this card to drive your display as well as run CUDALucas?
I've had very similar error reports in the past (if my somewhat old and dimmed memory serves me correctly) when I've tried using my display card for various CUDA work. Could it be a memory conflict when the screen is updated? (The GT 640 I'm using is not driving my display; I have a GTX 650 Ti for that.)
Antonio is offline   Reply With Quote
Old 2013-11-16, 16:28   #1979
kladner
 
kladner's Avatar
 
"Kieren"
Jul 2011
In My Own Galaxy!

2·3·1,693 Posts
Default

Quote:
Originally Posted by Antonio View Post
Just a thought: are you using this card to drive your display as well as run CUDALucas?
I've had very similar error reports in the past (if my somewhat old and dimmed memory serves me correctly) when I've tried using my display card for various CUDA work. Could it be a memory conflict when the screen is updated? (The GT 640 I'm using is not driving my display; I have a GTX 650 Ti for that.)
Good point! Yes, the 580 is driving the display. I'll have to have a look at that. I'll try switching the display to the GTX 570. For that matter, I'll have another shot at running CL 2.05-beta on the 570.

I'm currently back running the 331.65 driver, since the roll back didn't seem to make any difference.

Thanks for the suggestion! I needed a new lead to follow. I currently have both cards back to doing LL-TF, but I do like to figure out things which don't work as they should.

EDIT: Clarification of fuzzy thoughts from late night experiments:
Quote:
Originally Posted by kladner View Post
Of course, it could always be hardware, but I did subsequently run the exponent up to ~750K iterations with a more aggressive core clock (830 MHz). As mentioned previously, the VRAM at 1600 MHz has performed well in the past with 2.04-beta.
The above refers to running 2.04-beta on the 580 card.

EDIT2: I also tried running with threads at 512 instead of the default 256, but it did not seem to make any difference.

Last fiddled with by kladner on 2013-11-16 at 17:18
kladner is offline   Reply With Quote
Old 2013-11-16, 16:58   #1980
flashjh
 
flashjh's Avatar
 
"Jerry"
Nov 2011
Vancouver, WA

1,123 Posts
Default

I just completed a DC on M57885161 with CUDALucas 2.05-Beta-x64. It completed without error and I even switched FFT sizes a few times. Since I have the full run of residues from the first time I ran it, I was able to check progress along the way.

The only issue I've found so far is with keyboard input. If Interactive is set to 1 in the .ini file, then any time I pressed a key the program stopped making progress: GPU usage dropped to about 50%, though ^c would still stop the run. I could restart with no problems. Has anyone else seen this in Windows or Linux? Could some others test whether it works on both?

I haven't run all the FFT benchmarks yet, I'll do that now. Anyone else having a problem with the amount of memory reported by CUDALucas?
flashjh is offline   Reply With Quote
