#1
Nov 2015
1010 Posts
Greetings,
First, I would like to say I am new to the GIMPS project and to the GPU subforum, so I apologize if this topic has been covered already. This morning I downloaded CUDALucas 4.2 for CUDA 4.2 and installed the 4.2 toolkit from Nvidia's website. Unfortunately, it appears the toolkit might be missing something, or I have the wrong hardware for this project, because I get the following error when I attempt to run the CUDALucas exe (see attached screenshot). I am running a GTX 970M in the laptop I am trying to get this working on. If this is successful, I am hoping to get it working on a spare GTX 980 I currently have sitting on the sidelines but have room for in my primary PC. I appreciate any troubleshooting assistance that can be provided.
#2
"Kieren"
Jul 2011
In My Own Galaxy!
236568 Posts
http://sourceforge.net/projects/cuda...s/CUDA%20Libs/
Download your choice of library packages, unzip, and place the appropriate (32- or 64-bit) versions of cudart*.dll and cufft*.dll in the folder with your CUDALucas executable.

EDIT: The current version of CUDALucas is 2.05.1. Get it here; it is also at SourceForge. Also, I think you may need later versions of the libraries for the 900-series GPUs.

Last fiddled with by kladner on 2015-11-11 at 19:57
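As a sketch, the folder should end up looking something like this (the DLL filenames shown are for the 64-bit CUDA 6.5 package and are purely illustrative; use whichever versions match the libraries you actually downloaded):

```
CUDALucas\
    CUDALucas.exe
    CUDALucas.ini
    cudart64_65.dll    <- CUDA runtime library
    cufft64_65.dll     <- CUDA FFT library
```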
#3
Nov 2015
2·5 Posts
Greetings,
Thanks for your help! That did the trick. I also updated to the latest version, 2.05.1, and used the 6.5 libraries. I feel really embarrassed that I missed those library files. I did have to move to the latest CUDA install, which was 7.5. It definitely works a lot better on Win 10 with a GTX 970 than the CUDA 4.2 code I was running. Once again, thanks for all the assistance.

I did have another question: will CUDALucas make partial saves just like GIMPS? I only ask so that it can be started and stopped if I need to turn off the machine, or if I need the cycles for a more intensive process.
#4
Nov 2015
2×5 Posts
NVM, I got it figured out. The iteration save option was in the .ini file; I just had to turn it on.
Thanks for all the help; the GPU is going strong. I am working on getting the 980 into a spare rig I have, to use just for prime crunching.
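For anyone else looking: checkpointing is driven by entries in CUDALucas.ini. The snippet below is illustrative only; the exact option names vary between versions, so check the .ini that ships in the zip for the real spelling:

```ini
; Write a resumable save file every N iterations (illustrative option names;
; consult the .ini shipped with your CUDALucas version for the exact spelling).
CheckpointIterations=100000
; How often progress is reported to the screen and results file.
ReportIterations=10000
```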
#5
"Kieren"
Jul 2011
In My Own Galaxy!
2·3·1,693 Posts
Glad you got it sorted! More crunch power is always welcome!
Welcome to the project!

Last fiddled with by kladner on 2015-11-12 at 17:22
#6
Nov 2015
2×5 Posts
Thanks. I had previously helped out with SETI and Folding, but being a CE/EE, those didn't appeal to me. Mersenne primes are definitely up my alley, even as a hobby, and when I am not using my computers for my own work I can be crunching numbers for GIMPS. :)

I did have another configuration question for CUDALucas. Currently I have one instance running, and looking at the current utilization of the GPU, it isn't much. I have seen several posts about launching multiple instances of CUDALucas to maximize efficiency. Is there a way to have a single instance use multiple workers, like the main GIMPS software, just by adding work to the todo file, or does the software not have that capability yet?
#7
Romulan Interpreter
Jun 2011
Thailand
2×5×31² Posts
Are you getting anything (output) close to this list (scroll down for the table) for your specific card? Can you post a time per iteration and the exponent, so we have an idea whether you are using it right? Do you use a small (below 2000) value for screen/file output? (That is the parameter of the -c switch; in that case time is lost on screen/file output. Use a value like 10000 or higher for screen output, and 200k or higher, usually 1M, for checkpoint file output.)

Other things: make sure you have the latest CUDALucas (2.05 or so). Do you use the "polite" function (and with which value)? For details, check the .ini file inside the zip and adapt it to your needs.

The "multiple instances" advice does not apply unless you use either a very old card or a very old software version. New CUDALucas, used adequately, should "maximize" almost any (nVidia) card I know about with ONE instance (one worker) on ONE exponent. The "multiple" thingies are done within every iteration (thousands of little guys inside the GPU struggling together in parallel to do that squaring/multiplication as fast as possible). That is where the speed comes from.

Last fiddled with by LaurV on 2015-11-13 at 01:41
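For anyone following along, the squaring LaurV describes is the heart of the Lucas-Lehmer test that CUDALucas runs. A minimal sketch in plain Python (CUDALucas performs the same squarings, but via GPU FFT multiplication on multi-million-digit numbers):

```python
def lucas_lehmer(p):
    """Lucas-Lehmer test for the Mersenne number M_p = 2**p - 1 (p an odd prime).

    Each of the p - 2 loop passes squares s modulo M_p; that squaring is
    exactly the per-iteration work CUDALucas offloads to the GPU.
    """
    m = (1 << p) - 1          # the Mersenne number 2^p - 1
    s = 4
    for _ in range(p - 2):
        s = (s * s - 2) % m   # the squaring step done each iteration
    return s == 0

# Known small cases: M_7 = 127 is prime, M_11 = 2047 = 23 * 89 is not.
print(lucas_lehmer(7), lucas_lehmer(11))
```

Since each iteration needs the previous one's result, the parallelism lives inside a single squaring, not across iterations, which is why one worker per exponent saturates the card.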
#8
"Kieren"
Jul 2011
In My Own Galaxy!
2×3×1,693 Posts
He's running a 970M, and hopes to get a 980 going. No info yet on the systems running the GPUs.
#9
Nov 2015
2×5 Posts
Greetings,
So I do have the latest version of CUDALucas, 2.05.1, for x64. I am using a GTX 970 at the moment, on an ASUS ROG motherboard with a Haswell i7. Currently I am averaging 11.6 ms/iteration on the 970. I did some testing with multiple instances, and performance more than halved: each instance ended up around 30 ms/iteration when running two, with two different exponents. The reason I was going this route is that in the normal GIMPS software, say the one I am running on the i7, I have three workers, each running an exponent. The graphics card obviously has thousands of little vector processors currently crunching numbers. Is there a way to have CUDALucas run more than one worker on the same card, or does each card get its own worker? It appears to be the latter, per the information you posted.
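To put those timings in perspective, here is a back-of-the-envelope estimate. The 38M exponent below is a hypothetical first-time-check assignment from that era, not one mentioned in the thread; only the ms/iteration figures come from the post above:

```python
def ll_test_days(exponent, ms_per_iter):
    """Wall-clock days for a full Lucas-Lehmer test of 2**exponent - 1,
    which needs exponent - 2 squaring iterations."""
    seconds = (exponent - 2) * ms_per_iter / 1000.0
    return seconds / 86400.0

single = ll_test_days(38_000_000, 11.6)   # one instance, quoted speed
shared = ll_test_days(38_000_000, 30.0)   # each of two concurrent instances
print(f"one instance: {single:.1f} days; two instances: {shared:.1f} days each")
```

Since 30 ms/iteration is more than twice 11.6 ms/iteration, two concurrent instances finish two exponents more slowly than one instance running them back to back, matching the throughput loss reported above.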