Old 2014-07-18, 04:45   #45
Xyzzy
 
Quote:
Aha - Mike's version has the upper-left-to-lower-right-slanting backtick (`), whereas mine had a standard single quote (') - with the former it works.
http://www.tldp.org/LDP/abs/html/commandsub.html
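For anyone else bitten by this, a minimal sketch of the difference (gcc -dumpversion here is just a stand-in command):
Code:
# Backticks (or the POSIX $(...) form) perform command substitution;
# ordinary single quotes just produce a literal string.
echo "gcc version: `gcc -dumpversion`"     # backticks: command output substituted
echo "gcc version: $(gcc -dumpversion)"    # $(...): same effect, easier to read
echo 'gcc version: `gcc -dumpversion`'     # single quotes: printed literally, no substitution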

Old 2014-07-22, 03:19   #46
ewmayer

OK, all 3 .run files mentioned above have now been run successfully. After a reboot, 'nvcc -V' gives:
Code:
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2013 NVIDIA Corporation
Built on Wed_Jul_17_18:36:13_PDT_2013
Cuda compilation tools, release 5.5, V5.5.0

An attempted nvcc compile of a couple of basic source files indicates all is not well, however:

Code:
ewmayer@derek:~/Mlucas/SRC$ nvcc -c -O3 -DUSE_GPU util.cu gpu_iface.cu
In file included from imul_macro.h:29,
                 from mi64.h:30,
                 from util.h:32,
                 from util.cu:23:
imul_macro0.h:347:4: error: #error unknown compiler for AMD64.
util.cu:1080:5: error: #error unsupported compiler type for x87!

Those preprocessor errors are due to the expected compiler macro __CUDACC__ not being defined. So I tried the list-predefines method described in post #4 of this thread; that doesn't work for me, though:
Code:
ewmayer@derek:~/Mlucas/SRC$ strings nvcc | grep [-]D
strings: 'nvcc': No such file
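An alternative to running strings on the nvcc binary (which would need the full path to the executable rather than a bare 'nvcc') is to ask the preprocessor directly. A minimal sketch - the filename and messages are just placeholders:
Code:
/* cudacc_probe.cu - check whether nvcc really defines __CUDACC__ for .cu
   files; build with e.g. "nvcc -c cudacc_probe.cu". */
#ifdef __CUDACC__
	#warning __CUDACC__ is defined: the nvcc front end is processing this file
#else
	#error __CUDACC__ is NOT defined: the host compiler is seeing this file directly
#endif

int main(void) { return 0; }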
Old 2014-07-22, 15:51   #47
Mark Rose
 
Looks like the CUDA headers aren't in the compiler's default header path.

Try running nvcc with -L/usr/local/cuda-5.5/lib64
Mark Rose is offline   Reply With Quote
Old 2014-07-22, 23:59   #48
ewmayer

Quote:
Originally Posted by Mark Rose View Post
Looks like the CUDA headers aren't in the compiler's default header path.

Try running nvcc with -L/usr/local/cuda-5.5/lib64
No joy - I tried putting the above path both before and after the source-file names, and also tried just '.../lib32', on the thought that maybe nvcc was defaulting to 32-bit mode.

The lib32 and lib64 dirs are definitely there:
Code:
ewmayer@derek:~/Mlucas/SRC$ l /usr/local/cuda-5.5/lib64
libcublas_device.a   libcudart.so.5.5	  libcufftw.so		libcurand.so	       libnppc.so	  libnpps.so
libcublas.so	     libcudart.so.5.5.22  libcufftw.so.5.5	libcurand.so.5.5       libnppc.so.5.5	  libnpps.so.5.5
libcublas.so.5.5     libcudart_static.a   libcufftw.so.5.5.22	libcurand.so.5.5.22    libnppc.so.5.5.22  libnpps.so.5.5.22
libcublas.so.5.5.22  libcufft.so	  libcuinj64.so		libcusparse.so	       libnppi.so	  libnvToolsExt.so
libcudadevrt.a	     libcufft.so.5.5	  libcuinj64.so.5.5	libcusparse.so.5.5     libnppi.so.5.5	  libnvToolsExt.so.1
libcudart.so	     libcufft.so.5.5.22   libcuinj64.so.5.5.22	libcusparse.so.5.5.22  libnppi.so.5.5.22  libnvToolsExt.so.1.0.0
But these are all link-time libs ... you say we need a path to the headers; shouldn't that appear via -I[path] in the compile command, and point to a dir full of .h files?
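I.e. something like the following sketch, assuming the 5.5 toolkit follows the usual layout with an include/ dir next to lib64/ (an assumption I have not verified):
Code:
# Sketch only: -I adds a header search path, -L a library search path for linking.
nvcc -c -O3 -DUSE_GPU -I/usr/local/cuda-5.5/include util.cu gpu_iface.cu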

Old 2014-07-23, 01:28   #49
ewmayer

p.s.: Mike just e-mailed to ask if "nvidia-smi" returns anything yet.

Yes - see below. Also, the GPU fan kicked in from that and is still going a few minutes later - shouldn't it die down after a few seconds, since there is no process running?

Code:
ewmayer@derek:~/Mlucas/SRC$ nvidia-smi
Tue Jul 22 11:16:35 2014       
+------------------------------------------------------+                       
| NVIDIA-SMI 5.319.37   Driver Version: 319.37         |                       
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  GeForce GT 430      Off  | 0000:01:00.0     N/A |                  N/A |
| 65%   32C  N/A     N/A /  N/A |        3MB /  1023MB |     N/A      Default |
+-------------------------------+----------------------+----------------------+
                                                                               
+-----------------------------------------------------------------------------+
| Compute processes:                                               GPU Memory |
|  GPU       PID  Process name                                     Usage      |
|=============================================================================|
|    0            Not Supported                                               |
+-----------------------------------------------------------------------------+
Old 2014-07-25, 02:46   #50
ewmayer

Updates:

o I couldn't find any way using nvidia-smi to shut the GPU fan back down after the above hardware diagnostics, so I simply waited for my ongoing Haswell Fermat-number run to hit its next savefile checkpoint and rebooted. Not a pretty solution, but it will serve for now (unless someone finds a better one). Perhaps I've been spoiled by the relative quietness of the Haswell fan - the case sits on my desk with the side access panel removed (allowing easy access to the guts), i.e. the CPU fan is literally 2 feet from my head - but even with all 4 cores blasting away and the case fans nearest the CPU also running, it's a quite tolerable noise level - nothing that interferes with me using a cellphone on the ear closest to the CPU. The GPU fan - and this is only an unloaded GT430! - was significantly louder than all 3 of the above fans together. But on to config/compile issues.

o Looks like the "__CUDACC__ undefined" issue was due to some incorrect (in the context of GPU builds) nesting of #if-spaghetti in one of the headers used by my util.c ('ln -s'-aliased to util.cu) file. Next issue: nvcc is giving me errors about basic C typedefs, e.g.

Code:
gpu_iface.cu:30:3: warning: #warning using nvcc [-Wcpp]
gpu_iface.cu:32:4: warning: #warning device code trajectory [-Wcpp]
gpu_iface.cu:34:5: warning: #warning compiling with double precision [-Wcpp]
types.h(354): error: expected a ";"

1 error detected in the compilation of "/tmp/tmpxft_00002ffa_00000000-6_gpu_iface.cpp1.ii".

The error is for the first typedef in this snippet; I have left-annotated each line with its line number in the header file, to make it easier to compare with the above nvcc error message:
Code:
352		#if __CUDA_ARCH__ > 120
353			#warning CUDA: compiling with double precision
354			typedef real double;
355			typedef double	vec_dbl;
356		#else
357			...
The "typedef real to double" idea came to me via a CUDA forum earlier today, where it was done via #define - and indeed, changing

typedef real double;

to

#define real double

fixes that error - but now I get the same kind of error for the next typedef on the next line. I don't want to change every typedef in my code to a #define -- this is, after all, standard C syntax!
Old 2014-07-25, 06:35   #51
xilman

Quote:
Originally Posted by ewmayer View Post
<snip>

The "typedef real to double" idea came to me via a CUDA forum earlier today, where it was done via #define - and indeed, changing

typedef real double;

to

#define real double

fixes that error - but now I get the same kind of error for the next typedef on the next line. I don't want to change every typedef in my code to a #define -- this is, after all, standard C syntax!
Idea based on the IOCCC:

#define typedef #define

Do so after any #include line ...

May not work, but worth a try.

Old 2014-07-25, 07:56   #52
fivemack

Quote:
Originally Posted by ewmayer View Post
354 typedef real double;
355 typedef double vec_dbl;
I think you mean 'typedef double real;' for the first one (i.e. 'create a type called real which is another name for double').
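I.e. with the operands swapped, the snippet would read something like this sketch of the fix (not tested against the rest of types.h):
Code:
#if __CUDA_ARCH__ > 120
	#warning CUDA: compiling with double precision
	typedef double real;	/* new name comes last: 'real' is an alias for double */
	typedef double vec_dbl;
#endif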
Old 2014-07-26, 01:22   #53
ewmayer

Quote:
Originally Posted by xilman View Post
#define typedef #define
Same thought occurred to me, but no go, because typedefs are ;-terminated.

Quote:
Originally Posted by fivemack View Post
I think you mean 'typedef double real;' for the first one (i.e. 'create a type called real which is another name for double').
I was simply flailing around yesterday afternoon; the forum post where I saw that had '#define real double', which briefly (and incorrectly) led me to surmise that maybe nvcc uses an internal 'master' floating-point type called 'real'. Again, it makes no sense except in a "fog of war" kind of way.

So could someone please give me a straight answer to the question: does nvcc support C-standard typedefs, or not?

If so, why would it squawk about something as simple as my example above?

If not, I will have no choice but to monkey with a bunch of header files and carve out a "special nvcc preprocessor section", but I will be distinctly un-pleased by the need to do so.

Update:

As most of the stuff in the above types.h file is unneeded for CUDA compiles, I tried simply commenting out the include of that header in my gpu_iface.h file. The resulting compile gave just one missing-typedef error:

gpu_iface.h(74): error: identifier "int32" is undefined

That refers to this spot in the header:
Code:
typedef struct {
	int32 num_gpu;
	gpu_info_t gpu_info[MAX_GPU];
} gpu_config_t;
When I preceded that with the needed typedef (i.e. copied over from types.h)

typedef int int32;

it works just fine.

So maybe the real issue is that nvcc - perhaps some aspect of its 2-step compilation? - doesn't like the kind of headers-including-headers nesting I use?
Old 2014-07-26, 06:48   #54
jasonp

If this is cribbed from cuda_xface.h in the Msieve source, the 'int32' is a typedef that Msieve makes up; it is in no way standard. You may be thinking of int32_t, which you get by including stdint.h, and which IIRC nvcc supports happily.
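For instance, a minimal sketch of that approach (only int32 appears above; the other aliases here are just illustrative guesses at a project-local set):
Code:
/* Derive fixed-width aliases from <stdint.h>, so nvcc and the host
   compiler agree on the widths. */
#include <stdint.h>

typedef int32_t  int32;
typedef uint32_t uint32;
typedef int64_t  int64;
typedef uint64_t uint64;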

Incidentally, the CUDA runtime API now has a device query function that gives you absolutely all config information, something that didn't exist when I wrote my own crappy device query code.
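For illustration, a sketch of a runtime-API device query using cudaGetDeviceCount / cudaGetDeviceProperties (which may or may not be the exact call meant above); error handling omitted:
Code:
/* Minimal runtime-API device query; build as a .cu file with nvcc. */
#include <stdio.h>
#include <cuda_runtime.h>

int main(void)
{
	int ndev = 0;
	cudaGetDeviceCount(&ndev);
	for (int i = 0; i < ndev; i++) {
		cudaDeviceProp prop;
		cudaGetDeviceProperties(&prop, i);
		printf("GPU %d: %s, compute capability %d.%d, %lu MB global memory\n",
		       i, prop.name, prop.major, prop.minor,
		       (unsigned long)(prop.totalGlobalMem >> 20));
	}
	return 0;
}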

Old 2014-07-26, 21:48   #55
ewmayer

Quote:
Originally Posted by jasonp View Post
If this is cribbed from cuda_xface.h in the Msieve source, the 'int32' is a typedef that Msieve makes up; it is in no way standard. You may be thinking of int32_t, which you get by including stdint.h, and which IIRC nvcc supports happily.
The gpu_iface code is indeed cribbed from yours, but I have long had a full set of unambiguous-bitlength integer typedefs in my types.h header.

Quote:
Incidentally, the CUDA runtime API now has a device query function that gives you absolutely all config information, something that didn't exist when I wrote my own crappy device query code.
Thanks - will likely use that at some point, but right now am using the custom iface code as a means of testing my cuda tools install.

So, looking more closely at the sequence of typedefs in my types.h file: int32 is typedef'd right near the top, and nvcc had no problem with it, nor with the others - except when it gets to the 'typedef double vec_dbl'.

Is vec_dbl perhaps an nvcc reserved word?

[Note: I don't really need a vector-double type for CUDA work, since that will use the C scalar-double code I have in place - but in order to get the SIMD and scalar-double code paths to play nice together, I find it useful to use a shared typedef (e.g. for allocs) which defaults to a "length-1 vector" in the scalar case.]
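Something along the lines of this sketch - the USE_AVX macro name here is just illustrative, not necessarily the switch my build actually uses:
Code:
/* Shared typedef: a genuine SIMD vector in SIMD builds, a "length-1
   vector" (plain double) in scalar/CUDA builds, so alloc code is shared. */
#ifdef USE_AVX
	#include <immintrin.h>
	typedef __m256d vec_dbl;	/* 4 doubles per vector */
#else
	typedef double  vec_dbl;	/* scalar case: length-1 vector */
#endif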