mersenneforum.org (https://www.mersenneforum.org/index.php)
-   GPU Computing (https://www.mersenneforum.org/forumdisplay.php?f=92)
-   -   CUDALucas (a.k.a. MaclucasFFTW/CUDA 2.3/CUFFTW) (https://www.mersenneforum.org/showthread.php?t=12576)

flashjh 2014-04-30 17:38

CUDALucas 2.05Beta r68 is [URL="https://sourceforge.net/projects/cudalucas/files/2.05%20Beta/"]posted[/URL]

Includes CUDA 4.2, 5.0, 5.5 & 6.0 builds. The CUDA 6.0 version also has the new SM 3.2 target, though I don't know which card that's for yet.

pdazzl 2014-05-13 17:44

Following up, I know someone said there's an nvidia bug that causes this API reset issue. I'm also starting to wonder if this could be heat-related. I pulled up EVGA Precision X while running CUDALucas and noticed the card was in the upper 80s Celsius with the fan speed set to auto and the fan only going around 30-60%. I've statically set my fan speed to around 70%, which has the card running at a much cooler upper 70s Celsius, and I'm not seeing the card reset so far.

Also noticed when running mfaktc that the fan on the card kicks up to high gear right away at startup; I wonder if that's something that could be done in CUDALucas as well.



[QUOTE=pdazzl;370117]Thanks for the restart batch file.

I am getting the API runtime errors, even with the latest beta build r65 (running toolkit 5.0 and the latest 335.23 NVIDIA drivers). However, this is only happening on my GTX 570, not my 280. I have noticed that the 570 will run stable until I stop the job, switch to mfaktc, and then switch back to the LL job. It'll continue happening until I reboot my box. So far that seems to be what triggers the API errors for me. I have never seen this behavior on my 280, even when switching between CUDALucas and mfaktc.[/QUOTE]

kladner 2014-05-16 01:18

[QUOTE=pdazzl;373362]Following up, I know someone said there's an nvidia bug that causes this API reset issue. I'm also starting to wonder if this could be heat-related. I pulled up EVGA Precision X while running CUDALucas and [B]noticed the card was in the upper 80s Celsius with the fan speed set to auto and the fan only going around 30-60%.[/B] I've statically set my fan speed to around 70%, which has the card running at a much cooler upper 70s Celsius, and I'm not seeing the card reset so far.

Also noticed when running mfaktc that the fan on the card kicks up to high gear right away at startup; I wonder if that's something that could be done in CUDALucas as well.[/QUOTE]

In my experience, the NVIDIA default auto fan speeds are grossly low. It is possible that this began many driver versions back, when there was a flurry of reports that one particular version was a card-killer.

I use MSI Afterburner and set up a custom fan curve which maintains healthier temperatures.
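A custom fan curve of the kind Afterburner lets you draw is just a piecewise-linear map from temperature to fan duty. A minimal sketch in Python, where the (temperature, fan %) points are made-up illustrative values, not a recommendation:

```python
# Illustrative fan curve: (temperature in C, fan duty in %).
# These points are example values only.
CURVE = [(40, 30), (60, 50), (75, 80), (85, 100)]

def fan_speed(temp_c, curve=CURVE):
    """Linearly interpolate fan duty (%) for a given GPU temperature."""
    if temp_c <= curve[0][0]:
        return curve[0][1]          # clamp below the first point
    if temp_c >= curve[-1][0]:
        return curve[-1][1]         # clamp above the last point
    for (t0, f0), (t1, f1) in zip(curve, curve[1:]):
        if t0 <= temp_c <= t1:
            return f0 + (f1 - f0) * (temp_c - t0) / (t1 - t0)
```

A steeper curve above ~70 C, as sketched here, trades fan noise for the cooler temperatures that seem to avoid the resets described above.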

HHfromG 2014-05-25 12:47

[QUOTE=flashjh;372389]CUDALucas 2.05Beta r68 is [URL="https://sourceforge.net/projects/cudalucas/files/2.05%20Beta/"]posted[/URL]

Includes CUDA 4.2, 5.0, 5.5 & 6.0. The CUDA 6.0 version also has the new SM 3.2, though I don't know what card it's for yet.[/QUOTE]


Hi, does anybody know when there will be a stable version of CUDALucas 2.05 available?

LaurV 2014-05-26 03:39

[QUOTE=HHfromG;374240]Hi, does anybody know when there will be a stable version of CUDALucas 2.05 available?[/QUOTE]
Take the beta; it works great. Be careful with the CUDA version, though: the wrong one will cost you a few percent in speed.

HHfromG 2014-05-26 20:36

[QUOTE=LaurV;374287]Take the beta; it works great. Be careful with the CUDA version, though: the wrong one will cost you a few percent in speed.[/QUOTE]
Hi, I have already used the beta version together with the CUDA 6.0 Toolkit. The performance increase was about 8% compared with the same calculation using the stable CUDALucas 2.03 version. That leads me to the following questions:
1) Do PrimeNet and/or GIMPS accept results produced by a "beta" version?
2) Who decides when a beta version becomes a stable version, and what are the criteria for this decision?

Regards...

owftheevil 2014-05-27 14:56

There are a couple of bugs that affect compute 3.0 and 3.5 cards with large (>4M) FFTs, plus two short sections of the documentation, that I want to get fixed before 2.05 is released. I will actually have time to work on it starting the second week of June.

GIMPS does accept results from 2.05 beta.

HHfromG 2014-05-29 10:21

[QUOTE=owftheevil;374380]There are a couple of bugs that affect compute 3.0 and 3.5 cards with large (>4M) FFTs, plus two short sections of the documentation, that I want to get fixed before 2.05 is released. I will actually have time to work on it starting the second week of June.

GIMPS does accept results from 2.05 beta.[/QUOTE]

Hi, thank you for this information. Will the new 2.05 version also support the new features of the CUDA 6.0 Toolkit, especially the concept of "Unified Memory" and cuFFT as a drop-in library? And, because I use two NVIDIA GTX 690 cards for CUDALucas, I would be very interested in a version that supports "Multi-GPU Scaling", which is also a new feature of the CUDA 6.0 Toolkit.

owftheevil 2014-05-29 14:13

HHfromG, none of the new 6.0 features seem to be particularly useful for CUDALucas. Unified memory would make some of the code simpler, but would not otherwise give any improvements, since there are very few host<->device memory transfers going on. CUDALucas already uses CUFFT for all the FFTs, and the slowness of device<->device memory transfers makes multi-GPU FFTs impractical.
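The transfer-cost argument can be sketched with rough, era-appropriate numbers. The bandwidth figures below are assumptions for illustration, not measurements of any particular card:

```python
# Back-of-envelope: why splitting one FFT across GPUs is dominated
# by data movement. Bandwidth figures are rough assumptions.
FFT_LEN   = 4 * 2**20          # 4M-point FFT
BYTES     = FFT_LEN * 8        # double precision: 32 MiB of data
PCIE_BPS  = 6e9                # ~6 GB/s effective PCIe 2.0 x16 (assumed)
DEV_BPS   = 150e9              # ~150 GB/s on-card bandwidth (assumed)

xfer_ms   = BYTES / PCIE_BPS * 1e3   # one trip across the bus
onchip_ms = BYTES / DEV_BPS  * 1e3   # same data moved on-card

print(f"PCIe trip: {xfer_ms:.2f} ms, on-card move: {onchip_ms:.2f} ms")
```

With these assumed figures, one bus crossing costs roughly 25x as much as moving the same data on-card, and a distributed FFT needs several such crossings per iteration, which is the impracticality owftheevil describes.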

GhettoChild 2014-07-13 11:58

I'm highly confused about which version of CUDALucas I should be using. I have a GTX 295 (Tesla-based, dual GT200b chips). The PDF guide shows only CL v2.03 with CUDA 3.2 & SM 13 (compute capability 1.3) for GPUs older than the GF110 Fermi chips. The readme also states not to use alpha or beta releases. I have not seen a CL v2.05 with CUDA 3.2 & SM 13 at all, v2.04 is nowhere to be found online, and there is no list of supported hardware per version either. I would appreciate some advice, thank you.
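The version confusion here comes down to compute capability: "SM 13" builds target compute 1.x chips like the GT200b in the GTX 295, while the newer SM 3.x builds target Kepler. A hypothetical helper sketching that mapping, with only a few example chips listed:

```python
# Hypothetical lookup: GPU chip -> (major, minor) compute capability.
# Only a few example chips from this thread are listed.
COMPUTE_CAPABILITY = {
    "GT200":  (1, 3),   # e.g. GTX 280
    "GT200b": (1, 3),   # e.g. GTX 295 (two of these per card)
    "GF110":  (2, 0),   # Fermi, e.g. GTX 580
    "GK104":  (3, 0),   # Kepler, e.g. GTX 680 / 690
    "GK110":  (3, 5),   # Kepler, e.g. GTX 780 / Titan
}

def needs_sm13_build(chip):
    """Compute 1.x chips need the older CUDA 3.2 / sm_13 binaries."""
    major, _ = COMPUTE_CAPABILITY[chip]
    return major == 1
```

By this reasoning a GTX 295 (GT200b, compute 1.3) needs the sm_13 build, which is why the guide points such cards at the older CUDA 3.2 package.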

