#46

∂2ω=0
Sep 2002
República de California
2²×2,939 Posts
OK, all 3 .run files mentioned above have now been run successfully. After reboot, 'nvcc -V' gives:
Code:
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2013 NVIDIA Corporation
Built on Wed_Jul_17_18:36:13_PDT_2013
Cuda compilation tools, release 5.5, V5.5.0
An attempted nvcc compile of a couple of basic sourcefiles indicates all is not well, however:
Code:
ewmayer@derek:~/Mlucas/SRC$ nvcc -c -O3 -DUSE_GPU util.cu gpu_iface.cu
In file included from imul_macro.h:29,
                 from mi64.h:30,
                 from util.h:32,
                 from util.cu:23:
imul_macro0.h:347:4: error: #error unknown compiler for AMD64.
util.cu:1080:5: error: #error unsupported compiler type for x87!
Those preprocessor errors are due to the expected compiler macro __CUDACC__ not being defined. So I tried the list-predefines method described in post #4 of this thread; that doesn't work for me, though:
Code:
ewmayer@derek:~/Mlucas/SRC$ strings nvcc | grep [-]D
strings: 'nvcc': No such file
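(Aside, a minimal sketch rather than anything from the actual Mlucas headers: nvcc predefines __CUDACC__ for any translation unit it drives, so a CUDA-aware branch ahead of the gcc/AMD64 checks should get past the #error lines above once the #if nesting lets it be reached. COMPILER_TYPE_NVCC / COMPILER_TYPE_GCC below are made-up placeholder names. Running nvcc --dryrun on a .cu file should also list the sub-commands, -D flags included, that it would hand to the host compiler, which is another way to inspect the predefines.)
Code:
/* Hedged sketch only - COMPILER_TYPE_NVCC / COMPILER_TYPE_GCC are placeholder
   macro names, not taken from the Mlucas sources. nvcc defines __CUDACC__
   itself, so a CUDA-aware branch can sit ahead of the gcc/AMD64 checks. */
#if defined(__CUDACC__)
	#define COMPILER_TYPE_NVCC
#elif defined(__GNUC__) && defined(__x86_64__)
	#define COMPILER_TYPE_GCC
#else
	#error unknown compiler for AMD64.
#endif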
#47

"/X\(‘-‘)/X\"
Jan 2013
https://pedan.tech/
C70₁₆ Posts
Looks like the CUDA headers aren't in the compiler's default header path.
Try running nvcc with -L/usr/local/cuda-5.5/lib64
#48

∂2ω=0
Sep 2002
República de California
2²×2,939 Posts
Quote:
The lib32 and lib64 dirs are definitely there: Code:
ewmayer@derek:~/Mlucas/SRC$ l /usr/local/cuda-5.5/lib64
libcublas_device.a    libcudart.so.5.5       libcufftw.so           libcurand.so           libnppc.so          libnpps.so
libcublas.so          libcudart.so.5.5.22    libcufftw.so.5.5       libcurand.so.5.5       libnppc.so.5.5      libnpps.so.5.5
libcublas.so.5.5      libcudart_static.a     libcufftw.so.5.5.22    libcurand.so.5.5.22    libnppc.so.5.5.22   libnpps.so.5.5.22
libcublas.so.5.5.22   libcufft.so            libcuinj64.so          libcusparse.so         libnppi.so          libnvToolsExt.so
libcudadevrt.a        libcufft.so.5.5        libcuinj64.so.5.5      libcusparse.so.5.5     libnppi.so.5.5      libnvToolsExt.so.1
libcudart.so          libcufft.so.5.5.22     libcuinj64.so.5.5.22   libcusparse.so.5.5.22  libnppi.so.5.5.22   libnvToolsExt.so.1.0.0

Last fiddled with by ewmayer on 2014-07-23 at 01:50 Reason: lib --> lib32
#49

∂2ω=0
Sep 2002
República de California
2²·2,939 Posts
p.s.: Mike just e-mailed to ask if "nvidia-smi" returns anything yet.
Yes - see below. Also, the GPU fan kicked in from that and is still going a few mins later - shouldn't that die down after a few seconds, since there is no process running? Code:
ewmayer@derek:~/Mlucas/SRC$ nvidia-smi
Tue Jul 22 11:16:35 2014
+------------------------------------------------------+
| NVIDIA-SMI 5.319.37 Driver Version: 319.37 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
|===============================+======================+======================|
| 0 GeForce GT 430 Off | 0000:01:00.0 N/A | N/A |
| 65% 32C N/A N/A / N/A | 3MB / 1023MB | N/A Default |
+-------------------------------+----------------------+----------------------+
+-----------------------------------------------------------------------------+
| Compute processes: GPU Memory |
| GPU PID Process name Usage |
|=============================================================================|
| 0 Not Supported |
+-----------------------------------------------------------------------------+
#50

∂2ω=0
Sep 2002
República de California
10110111101100₂ Posts
Updates:
o I couldn't find any way using nvidia-smi to shut the GPU fan back down after I did the above hardware diagnostics, so I simply waited for my ongoing Haswell Fermat-number run to hit its next savefile checkpoint and rebooted. Not a pretty solution, but it will serve for now (unless someone finds a better one). Perhaps I've been spoiled by the relative quietness of the Haswell fan - the case sits on my desk with the side access panel removed (allowing easy access to the guts), i.e. the CPU fan is literally 2 feet away from my head - but even with all 4 cores blasting away and the case fans nearest the CPU also running, it's a quite tolerable noise level - nothing that interferes with me using a cellphone on the ear closest to the CPU. The GPU fan - and this is only an unloaded GT430! - was significantly louder than all 3 of the above fans together. But, on to config/compile issues.

o Looks like the "__CUDACC__ undefined" issue was due to some incorrect (in the context of GPU builds) nesting of #if-spaghetti in one of the headers used by my util.c ('ln -s'-aliased to util.cu) file. Next issue: nvcc is giving me errors about basic C typedefs, e.g.
Code:
gpu_iface.cu:30:3: warning: #warning using nvcc [-Wcpp]
gpu_iface.cu:32:4: warning: #warning device code trajectory [-Wcpp]
gpu_iface.cu:34:5: warning: #warning compiling with double precision [-Wcpp]
types.h(354): error: expected a ";"
1 error detected in the compilation of "/tmp/tmpxft_00002ffa_00000000-6_gpu_iface.cpp1.ii".
The error is for the first typedef in this snip, where I left-annotate with line numbers in the header file to make it easier to compare with the above nvcc error message:
Code:
352 #if __CUDA_ARCH__ > 120
353 #warning CUDA: compiling with double precision
354 typedef real double;
355 typedef double vec_dbl;
356 #else
357 ...
Changing
typedef real double;
to
#define real double
fixes that error - but now I get the same kind of error for the next typedef on the next line. I don't want to change every typedef in my code to a #define -- this is, after all, standard C syntax!
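As an aside - and this is a sketch under assumptions, not the actual types.h - standard C puts the existing type first and the new name second in a typedef (typedef double real;), which may be part of what nvcc's stricter front end is objecting to at line 354. Also, __CUDA_ARCH__ is only defined during nvcc's device-code pass, so the typedefs themselves are safer keyed off __CUDACC__, with __CUDA_ARCH__ reserved for device-only choices:
Code:
/* Sketch only - not the actual Mlucas header. __CUDACC__ is defined for both
   the host and device passes of nvcc; __CUDA_ARCH__ only during the device pass. */
#ifdef __CUDACC__
	typedef double real;      /* C typedef order: existing type, then new name */
	typedef double vec_dbl;
	#if defined(__CUDA_ARCH__) && (__CUDA_ARCH__ > 120)
		#warning CUDA: compiling with double precision
	#endif
#endif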
#51

Bamboozled!
"𒉺𒌌𒇷𒆷𒀭"
May 2003
Down not across
2E16₁₆ Posts
Quote:
Code:
#define typedef #define
Do so after any #include line ...

May not work but worth a try

Last fiddled with by xilman on 2014-07-25 at 08:11
#53

∂2ω=0
Sep 2002
República de California
11756₁₀ Posts
Same thought occurred to me, but no go, because typedefs are ;-terminated.
Quote:
So could someone please give me a straight answer to the question: Does nvcc support C-standard typedefs, or not? If so, why would it squawk about something as simple as my example above? If not, I will have no choice but to monkey with a bunch of header files and carve out a "special nvcc preprocessor section", but will be distinctly un-pleased by the need to do so.

Update: As most of the stuff in the above types.h file is unneeded for cuda compiles, I tried simply commenting out the include of that header in my gpu_iface.h file. The resulting compile gave just one missing typedef:
Code:
gpu_iface.h(74): error: identifier "int32" is undefined
That is here in that header:
Code:
typedef struct {
	int32 num_gpu;
	gpu_info_t gpu_info[MAX_GPU];
} gpu_config_t;
When I preceded that with the needed (i.e. copied over from types.h)
Code:
typedef int int32;
it works just fine. So maybe the real issue is that nvcc - perhaps some aspect of its 2-step compilation? - doesn't like the kind of headers-including-headers nesting I use?
#54

Tribal Bullet
Oct 2004
5·23·31 Posts
If this is cribbed from cuda_xface.h in the Msieve source, the 'int32' is a typedef that Msieve makes up; it is in no way standard. You may be thinking of int32_t, which you get by including stdint.h, and which IIRC nvcc supports happily.

Incidentally, the CUDA runtime API now has a device query function that gives you absolutely all config information, something that didn't exist when I wrote my own crappy device query code.

Last fiddled with by jasonp on 2014-07-26 at 06:51
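For anyone who wants to try that route, here is a hedged sketch of such a device query, assuming nothing beyond the standard runtime calls cudaGetDeviceCount() and cudaGetDeviceProperties(), and printing an arbitrarily chosen subset of the fields:
Code:
/* devquery.cu - minimal device-query sketch; build with e.g.
       nvcc -o devquery devquery.cu
   and run ./devquery to list the CUDA devices the runtime can see. */
#include <stdio.h>
#include <cuda_runtime.h>

int main(void)
{
	int i, n = 0;
	if (cudaGetDeviceCount(&n) != cudaSuccess || n == 0) {
		printf("No CUDA-capable device found.\n");
		return 1;
	}
	for (i = 0; i < n; i++) {
		struct cudaDeviceProp p;
		cudaGetDeviceProperties(&p, i);
		printf("GPU %d: %s, compute capability %d.%d, %lu MB global memory, %d multiprocessors\n",
			i, p.name, p.major, p.minor,
			(unsigned long)(p.totalGlobalMem >> 20), p.multiProcessorCount);
	}
	return 0;
}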
#55

∂2ω=0
Sep 2002
República de California
11756₁₀ Posts
Quote:
Quote:
So, looking more closely at the sequence of typedefs in my types.h file: int32 is typedef'd right near the top and nvcc had no problem with it, nor with the others - except when it gets to the 'typedef double vec_dbl'. Is vec_dbl perhaps an nvcc reserved word?

[Note: I don't really need a vector-double type for cuda work, since that will use the C scalar-double code I have in place - but in order to get the SIMD and scalar-double code paths to play nice together, I find it useful to use a shared typedef (e.g. for allocs) which defaults to "length-1 vector" in the scalar case.]
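For what it's worth, vec_dbl doesn't appear to be anything reserved - CUDA's own built-in vector types go by double2, double4 and so on - so the header nesting still looks like the more likely culprit. A hedged sketch of the shared-typedef arrangement just described, where USE_AVX and __m256d are stand-ins for whatever SIMD build flag and vector type the real code uses:
Code:
/* Sketch under stated assumptions - USE_AVX and __m256d are illustrative
   stand-ins, not names taken from the Mlucas sources. */
#if defined(USE_AVX) && !defined(__CUDACC__)
	#include <immintrin.h>
	typedef __m256d vec_dbl;   /* SIMD path: 4 doubles per vector */
#else
	typedef double vec_dbl;    /* scalar and CUDA builds: "length-1 vector" */
#endif

/* Allocation code can then be written once against the shared type, e.g.
   vec_dbl *a = (vec_dbl *)malloc(n * sizeof(vec_dbl)); */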