mersenneforum.org

mersenneforum.org (https://www.mersenneforum.org/index.php)
-   GPU Computing (https://www.mersenneforum.org/forumdisplay.php?f=92)
-   -   CUDALucas (a.k.a. MaclucasFFTW/CUDA 2.3/CUFFTW) (https://www.mersenneforum.org/showthread.php?t=12576)

storm5510 2017-09-11 16:10

[QUOTE=kriesel;466758]....Are you eying a nice shiny new fast GTX1080 Ti?[/QUOTE]

I finally got to take a measurement: 12.25" will fit easily, leaving around 0.15" of fan clearance at the front, measured forward from the inside mounting point. :smile:

sergiu 2017-11-06 03:14

Hello,
I'm trying to run CUDALucas-2.05.1-CUDA6.5-linux-x86_64 on a freshly installed Ubuntu Server 16.04 x86-64 with 4 GPUs; however, I always get "-bash: ./CUDALucas-2.05.1-CUDA6.5-linux-x86_64: No such file or directory".

I've added CUDALucas.ini and copied the 6.5 version of libcudart into the same directory. I've installed CUDA 9, and then installed nvidia-cuda-toolkit from aptitude. Neither step had any effect.

I'd greatly appreciate it if someone could point out what I am missing. Thanks!

Mark Rose 2017-11-06 13:31

[CODE]
chmod a+x CUDALucas-2.05.1-CUDA6.5-linux-x86_64
[/CODE]

Then try again.
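(For reference, the effect of the execute bit can be confirmed before and after; a minimal sketch using a scratch file in place of the real binary:)

```shell
# Demonstrate the execute-bit check on a scratch file; substitute
# CUDALucas-2.05.1-CUDA6.5-linux-x86_64 in the real case.
f=$(mktemp)
chmod a-x "$f"
before=$([ -x "$f" ] && echo yes || echo no)   # expect: no
chmod a+x "$f"
after=$([ -x "$f" ] && echo yes || echo no)    # expect: yes
echo "executable before: $before, after: $after"
rm -f "$f"
```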

sergiu 2017-11-09 20:50

Thank you for the hint but it had no effect whatsoever. Anything else that I can try?

GP2 2017-11-10 02:56

[QUOTE=sergiu;471432]Thank you for the hint but it had no effect whatsoever. Anything else that I can try?[/QUOTE]

What is the output of each of the following?
[CODE]
uname -a
file CUDALucas-2.05.1-CUDA6.5-linux-x86_64
ldd CUDALucas-2.05.1-CUDA6.5-linux-x86_64
[/CODE]

If all else fails, you can grab the source code and compile it yourself.
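(A sketch bundling those three checks into one function; /bin/sh stands in for the CUDALucas binary here, and `file -bL` is used so a symlinked target is dereferenced:)

```shell
# Run the three diagnostics against any binary in one pass.
diagnose() {
    bin=$1
    echo "host arch: $(uname -m)"        # machine architecture, e.g. x86_64
    echo "binary:    $(file -bL "$bin")" # ELF class and target of the binary
    ldd "$bin" | head -n 3               # first few shared-library resolutions
}

diagnose /bin/sh
```

A 32-bit binary on a 64-bit host without the 32-bit loader installed is one classic way to get "No such file or directory" from bash, which is what comparing `uname -m` against the `file` output would reveal.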

sergiu 2017-11-10 09:51

Here:

[CODE]
$ uname -a
Linux Production 4.4.0-98-generic #121-Ubuntu SMP Tue Oct 10 14:24:03 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux

$ file CUDALucas-2.05.1-CUDA6.5-linux-x86_64
CUDALucas-2.05.1-CUDA6.5-linux-x86_64: ELF 64-bit LSB executable, x86-64, version 1 (SYSV), dynamically linked, interpreter /lib/ld-linux-x86-64.so.2, for GNU/Linux 2.6.32, BuildID[sha1]=a8b4728865a4f5a480dd218c33fd85728a4914c3, not stripped

$ ldd CUDALucas-2.05.1-CUDA6.5-linux-x86_64
    linux-vdso.so.1 => (0x00007ffdd18cc000)
    libcufft.so.6.5 => ./libcufft.so.6.5 (0x00007fc3a6ca7000)
    libcudart.so.6.5 => ./libcudart.so.6.5 (0x00007fc3a6a57000)
    libm.so.6 => /lib/x86_64-linux-gnu/libm.so.6 (0x00007fc3a674e000)
    libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007fc3a6384000)
    libdl.so.2 => /lib/x86_64-linux-gnu/libdl.so.2 (0x00007fc3a6180000)
    libpthread.so.0 => /lib/x86_64-linux-gnu/libpthread.so.0 (0x00007fc3a5f63000)
    librt.so.1 => /lib/x86_64-linux-gnu/librt.so.1 (0x00007fc3a5d5b000)
    libstdc++.so.6 => /usr/lib/x86_64-linux-gnu/libstdc++.so.6 (0x00007fc3a59d9000)
    libgcc_s.so.1 => /lib/x86_64-linux-gnu/libgcc_s.so.1 (0x00007fc3a57c3000)
    /lib/ld-linux-x86-64.so.2 => /lib64/ld-linux-x86-64.so.2 (0x00007fc3a96cb000)
[/CODE]

GP2 2017-11-10 10:48

[QUOTE=sergiu;471466]Here:[/QUOTE]

Well that rules out a couple of ideas I had.

Maybe in bash try running
[CODE]
hash -r
[/CODE]
and then try running the executable again?

Other than that, the only thing I can think of is to try compiling it yourself from source code.
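(For context, `hash -r` clears bash's cached command-to-path table; a stale entry can make bash misreport a command that has since moved. A small illustration, assuming bash:)

```shell
# bash remembers where it found each command; hash -t prints the cached
# path, and hash -r empties the table so the next lookup walks $PATH again.
hash ls                  # force ls into the cache
cached=$(hash -t ls)     # the remembered path, e.g. /bin/ls
echo "cached path for ls: $cached"
hash -r                  # forget all cached paths
```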

chris2be8 2017-11-10 16:22

What is the output from:
[CODE]
ls -l CUDALucas-2.05.1-CUDA6.5-linux-x86_64
ls -ld .
[/CODE]

(This is checking the obvious, but that's not a bad place to start.)

Chris
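(Two further obvious-but-easy checks worth scripting: "No such file or directory" on a valid ELF can also mean the ELF interpreter named in its header is absent, or that the filesystem is mounted noexec. A sketch assuming `file` and util-linux's `findmnt` are available, with /bin/sh standing in for the CUDALucas binary:)

```shell
bin=/bin/sh
# Extract the interpreter path from file(1)'s description, e.g.
# "... dynamically linked, interpreter /lib64/ld-linux-x86-64.so.2, ...".
interp=$(file -bL "$bin" | sed -n 's/.*interpreter \([^,]*\).*/\1/p')
if [ -z "$interp" ] || [ -e "$interp" ]; then interp_ok=yes; else interp_ok=no; fi
echo "interpreter: ${interp:-none listed} (ok: $interp_ok)"
# A filesystem mounted noexec also yields this error; inspect the options:
findmnt -no OPTIONS -T "$bin"
```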

kriesel 2017-12-01 17:48

multiple instances per GPU
 
Hi,

Has anyone experimented with running more than one instance of CUDALucas on a single GPU?

The reason I ask is that I'm used to seeing 100% GPU load in GPU-Z with a single instance of CUDALucas or CUDAPm1 per GPU, but on a GTX1070 it varies between 99 and 100%. I have also found gains from running multiple mfaktc instances on a GTX480, raising the GPU load from 98% to 100%.

In a quick test sharing a single GTX480 between simultaneous single instances of CUDALucas and CUDAPm1, I measured several percent more combined throughput than with either program running alone. Since I'm running numerous GPUs, if that holds up it's the equivalent of adding another GPU.

Any light you can shed on effects of multiple instances, such as confirming results, or negative results, on various GPU models, would be appreciated.
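(For anyone trying this: the usual arrangement, an assumption carried over from how multi-instance mfaktc setups are described rather than anything CUDALucas documents here, is one working directory per instance so ini, worktodo and checkpoint files don't collide. A sketch; the `-d 0` device flag and the per-directory file list are assumptions about the setup:)

```shell
# Hypothetical two-instance layout on one GPU.
base=$(mktemp -d)
for i in 0 1; do
    mkdir -p "$base/instance$i"
    # Real setup: copy CUDALucas, CUDALucas.ini, worktodo.txt and the CUDA
    # 6.5 shared libraries into each directory, then launch:
    #   (cd "$base/instance$i" && ./CUDALucas -d 0 &)
done
n=$(ls "$base" | wc -l)
echo "created $n instance directories under $base"
rm -rf "$base"
```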

kladner 2017-12-02 02:08

In the days of CPU sieving, one core of a Sandy Bridge could come close to saturating a GTX460, while a 1090T needed two cores devoted to mfaktc to get the same usage. I believe one could run multiple instances of mfaktc in different directories, with the instances sharing the GPU.
Excuse me; I'm addressing a different program than this thread's, but the comment might still be valid.

kriesel 2017-12-02 17:42

[QUOTE=kladner;472905]In the days of CPU sieving, one core of a Sandy Bridge could come close to saturating a GTX460, while a 1090T needed two cores devoted to mfaktc to get the same usage. I believe one could run multiple instances of mfaktc in different directories, with the instances sharing the GPU.
Excuse me; I'm addressing a different program than this thread's, but the comment might still be valid.[/QUOTE]

Yes, I recall seeing something about that back when I read through the forum thread on the early days of GPU factoring. I'm seeing some benefit from multiple instances of a current mfaktc version with GPU sieving, while the system CPUs are essentially fully occupied with prime95 and almost unused by the GPU apps. There's also an advantage in some cases with dissimilar programs. More testing to do; I suppose I should check whether prime95 throughput takes a hit with multiple GPU app instances.

