Problem detecting GPU
Hello All,
I am attempting to set up Msieve to utilize my Nvidia Quadro 600's CUDA cores. Below are my machine's specs:

OS: Windows 7 Professional 64 bit
CPU: Intel Xeon E3 3.10 GHz
RAM: 8 GB
GPU: Nvidia Quadro 600

I have downloaded and set up the precompiled binaries listed in this tutorial: [URL="http://gilchrist.ca/jeff/factoring/nfs_beginners_guide.html"]http://gilchrist.ca/jeff/factoring/nfs_beginners_guide.html[/URL]. First I downloaded the GGNFS binaries for Core2 64-bit listed here: [URL="http://gilchrist.ca/jeff/factoring/index.html"]http://gilchrist.ca/jeff/factoring/index.html[/URL]. I already had Python installed on my machine, and found the necessary script, etc. Next I downloaded the precompiled Msieve binaries from the URL listed above; I downloaded the version 1.52dev SVN 883 (with MPIR 2.6.0/GMP-ECM 6.4.4).

Following the tutorial, I edited the Python script to attempt to utilize the CUDA cores on my Quadro 600, but every time I attempt to run the program I receive the following message:

Msieve Error: return value 4294967295. Is CUDA enabled? Terminating...

When I disable CUDA in the Python script, the program runs perfectly fine, allowing Msieve to run. I have reinstalled the drivers for my Quadro, and CUDA is enabled in the Nvidia control panel. Does anyone have any idea what I might be missing? Thanks! |
Is there an msieve.log anywhere that can tell us more?
|
My current method of executing involves using the Python script. When I try to generate a log, I am entering the following at the command line:

..\factMsieve.py -l msieve example

Is this correct? I am sorry for all of the questions |
1 Attachment(s)
Here is the log file
Again, I apologize for my ignorance |
1 Attachment(s)
Here is what I am seeing at the CMD window. The log file I posted previously is from when the program successfully ran without attempting to use CUDA cores. When I try to use the CUDA cores, the log file is not generated and this is what I see at the CMD window.
|
When the binary sees a command-line arg it doesn't understand, it prints the complete list. The -g option is not in the list as a valid option, making me think you are using an Msieve binary that is not compiled to use a GPU. Even if that's not correct, '-g 4' implies you have 5 GPUs present, which is unlikely.
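(Side note, as a sketch of the numbering: CUDA device indices are zero-based, so a machine with a single card addresses it as device 0, and '-g 4' is only meaningful with at least five devices. A tiny plain-Python illustration of that mapping, assuming nothing beyond zero-based indexing:)

```python
def valid_gpu_indices(num_gpus):
    # CUDA device indices are zero-based: n devices -> indices 0 .. n-1
    return list(range(num_gpus))

# A machine with a single GPU (like a lone Quadro 600) only has device 0.
print(valid_gpu_indices(1))           # [0]
# '-g 4' names device index 4, which only exists with 5 or more GPUs:
print(4 in valid_gpu_indices(5))      # True
print(4 in valid_gpu_indices(1))      # False
```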
|
Ah ok, well I've tried multiple Msieve binaries from the link that I posted originally. I've tried the most recent, 1.52, and the one that was labeled CUDA; both gave me the same result. What would you recommend?
Here are the exact versions from the website [URL="http://gilchrist.ca/jeff/factoring/index.html"]http://gilchrist.ca/jeff/factoring/index.html[/URL] 1.52dev SVN 883 (with MPIR 2.6.0/GMP-ECM 6.4.4) and 1.50 (compiled by Brian Gladman) |
Does the 32-bit CUDA binary from sourceforge at least run?
|
I'll try it and let you know, thanks again for all of the help!
|
It's running, I will post the log file when it finishes!
|
1 Attachment(s)
Alright, so I tested the msieve151_gpu binary, and attached are my results with regard to CUDA. It worked perfectly fine and completed when I wasn't attempting to use CUDA.
Note that in the attached text file, in the first example I have GPU_NUM set to 1 in the Python script and the GPU is not detected as CUDA-enabled. In the second example, I have GPU_NUM set to 0 and the GPU is detected as CUDA-enabled. Is there something that I am not changing correctly in order to use the CUDA cores on my GPU? It looks as though it is finding the GPU, but now I'm trying to figure out how to utilize the cores |
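(For what it's worth, here is a minimal hypothetical sketch of how a driver script along the lines of factMsieve.py might turn a GPU_NUM setting into msieve's '-g' option; the real script's internals may differ, but it shows why GPU_NUM = 0 is the right value for a single card under zero-based CUDA device numbering:)

```python
# Hypothetical sketch: building an msieve command line from a GPU_NUM setting.
# GPU_NUM is a zero-based CUDA device index; a single Quadro 600 is device 0.
USE_CUDA = True
GPU_NUM = 0  # first (and only) GPU

def build_msieve_args(use_cuda, gpu_num):
    args = ["msieve", "-v"]          # -v: verbose output
    if use_cuda:
        # GPU-enabled msieve builds accept -g <device index>
        args += ["-g", str(gpu_num)]
    return args

print(" ".join(build_msieve_args(USE_CUDA, GPU_NUM)))  # msieve -v -g 0
```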