#2674

"Teal Dulcet"
Jun 2018
428 Posts
I wrote a script to download, set up, and run CUDALucas on Linux. It also downloads, sets up, and runs the Python script from Mlucas for automated PrimeNet assignments: https://github.com/tdulcet/Distribut...ipts#cudalucas

If the required dependencies (Subversion and the CUDA Toolkit) are already installed, it should work on any Linux distribution. Otherwise, it will install them on Ubuntu. Pull requests are welcome! This should satisfy part of the "Install script" item (number 46) in the CUDALucas wishlist table. There is also a separate script to download, set up, and run Prime95 on Linux: https://github.com/tdulcet/Distribut...#prime95mprime
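If you want to check beforehand whether the two dependencies are already present, a minimal shell sketch (just the tool names; the script itself handles installation on Ubuntu):

```shell
# Report whether the tools the script expects (Subversion's svn and the
# CUDA Toolkit's nvcc) are already on the PATH.
for tool in svn nvcc; do
    if command -v "$tool" >/dev/null 2>&1; then
        echo "$tool: found"
    else
        echo "$tool: missing"
    fi
done
```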
#2675

"TF79LL86GIMPS96gpu17"
Mar 2017
US midwest
31×173 Posts
Quote:
#2676

"TF79LL86GIMPS96gpu17"
Mar 2017
US midwest
31×173 Posts
I was offered "a blog area to consolidate all of your pdfs and guides and stuff" and accepted.

Feel free to have a look and suggest content. (G-rated only ;)

General interest GPU-related reference material: http://www.mersenneforum.org/showthread.php?t=23371
CUDALucas Lucas-Lehmer primality testing with CUDA on GPUs: http://www.mersenneforum.org/showthread.php?t=23387

Future updates to material previously posted in this thread (the bug and wish list, etc.) and new reference material will probably be posted in those blog threads rather than here; being able to update posts in place, without a time limit, makes the material more manageable there. Links to things like the bug-and-wish-list post will remain constant and will be updated in place occasionally. There is a modest update to the bug and wish list there now.
#2677

"TF79LL86GIMPS96gpu17"
Mar 2017
US midwest
31×173 Posts
While looking for something else, I stumbled across this:

The source of parse.c for CUDAPm1 indicates that #, \, or / are comment characters marking the rest of a worktodo line as a comment. I have confirmed by test in CUDALucas v2.06beta that # and \ work, but / does not. My test placed them mostly at the beginnings of records, and the line numbers in the warning messages showed which did or did not work. This capability is not yet documented in the readme.txt, as far as I recall.
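As an illustration of that behavior, here is a small Python sketch of the comment stripping (not CUDALucas's actual parse.c logic, and it honors only the two characters that worked in my test):

```python
def strip_comment(line):
    """Truncate a worktodo line at the first comment character.

    parse.c for CUDAPm1 lists '#', '\\', and '/' as comment characters;
    testing CUDALucas v2.06beta showed '#' and '\\' work but '/' does not,
    so only those two are honored here. The worktodo line below is a
    made-up example for illustration.
    """
    for i, ch in enumerate(line):
        if ch in ('#', '\\'):
            return line[:i]
    return line

print(strip_comment("Test=12345678  # a comment"))      # -> "Test=12345678  "
print(strip_comment("\\ whole record commented out"))   # -> ""
```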
#2678

Einyen
Dec 2003
Denmark
3⁵·13 Posts
I tried to compile the CUDALucas svn version on an Amazon p3.2xlarge with a Tesla V100.

I tried both the Deep Learning Amazon Linux image and the Deep Learning Ubuntu image; both have CUDA version 9.2.88. I changed the path to the CUDA 9.2 installation and changed the relevant line in the Makefile to "arch=compute_70,code=sm_70" (I also tried 50 and 52). But I get these errors, any ideas? Code:
[ec2-user@ip-172-31-29-42 cudalucas]$ make
/usr/local/cuda/bin/nvcc -O1 --generate-code arch=compute_70,code=sm_70 --compiler-options=-Wall -I/usr/local/cuda/include -c CUDALucas.cu
CUDALucas.cu(756): error: identifier "nvmlInit" is undefined
CUDALucas.cu(757): error: identifier "nvmlDevice_t" is undefined
CUDALucas.cu(758): error: identifier "nvmlDeviceGetHandleByIndex" is undefined
CUDALucas.cu(759): error: identifier "nvmlDeviceGetUUID" is undefined
CUDALucas.cu(760): error: identifier "nvmlShutdown" is undefined
CUDALucas.cu(3430): warning: conversion from a string literal to "char *" is deprecated
[the same warning repeats for many source lines between 3430 and 3997]
5 errors detected in the compilation of "/tmp/tmpxft_00000b18_00000000-6_CUDALucas.cpp1.ii".
make: *** [CUDALucas.o] Error 1
[ec2-user@ip-172-31-29-42 cudalucas]$

Last fiddled with by ATH on 2018-08-09 at 03:32
#2679

"TF79LL86GIMPS96gpu17"
Mar 2017
US midwest
31×173 Posts
Quote:
#2680

Einyen
Dec 2003
Denmark
110001010111₂ Posts
Quote:
#2681

"TF79LL86GIMPS96gpu17"
Mar 2017
US midwest
31·173 Posts
Quote:
Or has nvml.h been modified, or is the build finding a copy that is not a version match? Are the permissions OK? Can you tell whether it is being found, read, and processed?

Last fiddled with by kriesel on 2018-08-09 at 15:54
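One quick way to answer the "is it being found" question from the shell (an illustrative sketch; the header path is the CUDA default implied by the Makefile's -I/usr/local/cuda/include and may differ on your system):

```shell
# Check whether nvml.h exists at the default CUDA include path and, if so,
# whether it declares one of the symbols the compiler reported as undefined.
hdr=/usr/local/cuda/include/nvml.h
if [ -r "$hdr" ]; then
    echo "readable: $hdr"
    grep -c 'nvmlInit' "$hdr"   # nonzero if the header declares nvmlInit
else
    echo "not found or not readable: $hdr"
fi
```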
#2682

Sep 2003
5×11×47 Posts
I used CUDALucas v2.05.1 on the Amazon Deep Learning Base AMI and got it to work (it passes self-tests -r 0 and -r 1, and it finds M6972593 prime in about 11 or 12 minutes).

First you have to change the symbolic link: Code:
ls -l /usr/local/cuda          # ---> will be cuda9.0, which is no good!
sudo rm /usr/local/cuda
sudo ln -s /usr/local/cuda-9.2 /usr/local/cuda
This affects Deep Learning Base AMI version 9.0; maybe it will be fixed by the time future readers read this.

If you get the error "gcc: error trying to exec 'cc1plus': execvp: No such file or directory" at link time when compiling and linking any program, even "hello world", then do this: Code:
alternatives --display gcc     # ---> will be gcc72, which causes a problem at link time for all programs
sudo alternatives --set gcc "/usr/bin/gcc48"
Then in the CUDALucas source and Makefile, make these changes: Code:
diff Makefile.orig Makefile
23c23
< CUFLAGS = -O$(OptLevel) --generate-code arch=compute_35,code=sm_35 --compiler-options=-Wall -I$(CUINC)
---
> CUFLAGS = -O$(OptLevel) --generate-code arch=compute_70,code=sm_70 --compiler-options=-Wall -I$(CUINC)
Code:
diff CUDALucas.cu.orig CUDALucas.cu
755c755
< #ifndef WIN_ENVIRONMENT32 //no 32-bit win support for NVML
---
> #ifdef WIN_ENVIRONMENT64 //no 32-bit win support for NVML

Last fiddled with by GP2 on 2018-08-09 at 17:11
#2683

"TF79LL86GIMPS96gpu17"
Mar 2017
US midwest
31×173 Posts
As I recall, 2.05.1 lacks a check for at least one flavor of bad residue that the May 5, 2017 build of 2.06beta includes. So check your results and the corresponding logs.

Last fiddled with by kriesel on 2018-08-09 at 19:14
#2684

Einyen
Dec 2003
Denmark
3⁵×13 Posts
#ifdef WIN_ENVIRONMENT64 did not work, but I just commented out each nvml line and then it worked, thanks!

It passed the long self-test (-r 1) and found M6972593 to be prime. I ran a cufftbench all the way from 1K to 32768K with 50 iterations:
./CUDALucas -cufftbench 1 32768 50
and then a threadbench on those best FFT lengths, again with 50 iterations:
./CUDALucas -threadbench 1 32768 50 1
If anyone is interested, here are the outputs from those runs:
Tesla V100 cufftbench1K-32768K.txt
Tesla V100 threadbench1K-32768K.txt
and the actual files produced, which CUDALucas uses:
Tesla V100-SXM2-16GB fft.txt
Tesla V100-SXM2-16GB threads.txt

Btw, I got the error: Quote:
on Deep Learning AMI (Amazon Linux) Version 12.0 (ami-45655f20), but it seems to be just because g++ is not installed; doing
sudo yum install gcc72-c++.x86_64 -y
fixed it.

Last fiddled with by ATH on 2018-08-10 at 01:53
Similar Threads
| Thread | Thread Starter | Forum | Replies | Last Post |
| Don't DC/LL them with CudaLucas | LaurV | Data | 131 | 2017-05-02 18:41 |
| CUDALucas / cuFFT Performance on CUDA 7 / 7.5 / 8 | Brain | GPU Computing | 13 | 2016-02-19 15:53 |
| CUDALucas: which binary to use? | Karl M Johnson | GPU Computing | 15 | 2015-10-13 04:44 |
| settings for cudaLucas | fairsky | GPU Computing | 11 | 2013-11-03 02:08 |
| Trying to run CUDALucas on Windows 8 CP | Rodrigo | GPU Computing | 12 | 2012-03-07 23:20 |