#2169
Apr 2014
7×17 Posts
Following up: I know someone said there's an NVIDIA bug that causes this API reset issue, but I'm also starting to wonder whether it could be heat-related. I pulled up EVGA Precision X while running CUDALucas and noticed the card was in the upper 80s Celsius, with the fan speed set to auto and the fan only running at 30-60%. I've statically set my fan speed to around 70%, which keeps the card at a much cooler upper 70s Celsius, and so far I'm not seeing the card reset.
I also noticed that when running mfaktc, the card's fan kicks up to high gear right away at startup; I wonder if that's something that could be done in CUDALucas as well.
#2170
"Kieren"
Jul 2011
In My Own Galaxy!
2·3·1,693 Posts
I use MSI Afterburner and set up a custom fan curve which maintains healthier temperatures.
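A custom fan curve like the one described is just a piecewise-linear map from GPU temperature to fan duty cycle. A minimal sketch of the idea (the breakpoints below are illustrative, not Afterburner's actual defaults):

```python
def fan_speed(temp_c, curve=((40, 30), (60, 50), (75, 70), (85, 100))):
    """Piecewise-linear fan curve: temperature (deg C) -> fan duty (%).

    `curve` is a sorted tuple of (temperature, duty) breakpoints.
    """
    if temp_c <= curve[0][0]:
        return curve[0][1]          # below first breakpoint: minimum duty
    if temp_c >= curve[-1][0]:
        return curve[-1][1]         # above last breakpoint: maximum duty
    for (t0, s0), (t1, s1) in zip(curve, curve[1:]):
        if t0 <= temp_c <= t1:
            # linearly interpolate between neighboring breakpoints
            return s0 + (s1 - s0) * (temp_c - t0) / (t1 - t0)

print(fan_speed(50))  # -> 40.0 (midway between the 30% and 50% breakpoints)
```

Tools like Afterburner apply the same kind of curve in a control loop; the point is simply that fan duty ramps up smoothly with temperature instead of sitting at a fixed auto setting.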
#2171
Feb 2014
Germany
5 Posts
Hi, does anybody know when there will be a stable version of CUDALucas 2.05 available?
#2172
Romulan Interpreter
Jun 2011
Thailand
9663₁₀ Posts
#2173 |
Feb 2014
Germany
5 Posts
Hi, I have already used the "beta version" together with the CUDA 6.0 Toolkit. The performance increase was about 8% compared with the same calculation using the stable CUDALucas 2.03. That leads me to the following questions:
1) Does PrimeNet and/or GIMPS accept results produced by a "beta" version?
2) Who decides when a beta version becomes a stable version, and what are the criteria for this decision?
Regards...
#2174
Feb 2014
Germany
5₁₆ Posts
Quote:
1) Does PrimeNet and/or GIMPS accept results produced by a "beta" version? 2) Who decides when a beta version becomes a stable version and what are the criteria for this decision? Regards...
#2175 |
"Carl Darby"
Oct 2012
Spring Mountains, Nevada
3²×5×7 Posts
There are a couple of bugs that affect compute 3.0 and 3.5 cards with large (>4M) FFTs, plus two short sections of the documentation, that I want to get fixed before 2.05 is released. I will actually have time to work on it starting the second week of June.
GIMPS does accept results from the 2.05 beta.
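For context on why FFT size matters here: in a Lucas-Lehmer test the exponent p is spread across the FFT, so each word carries roughly p/N bits, and double-precision arithmetic keeps round-off safe only up to a per-word limit of very roughly 17-19 bits. A rough sketch under that assumption (real CUDALucas/cuFFT lengths are not restricted to powers of two, and the actual safe limit depends on the FFT length and implementation):

```python
def min_fft_length(p, max_bits_per_word=18.0):
    """Smallest power-of-two FFT length keeping p/N below an
    assumed per-word bit limit for double-precision arithmetic."""
    n = 1
    while p / n > max_bits_per_word:
        n *= 2
    return n

# Illustrative exponents that land in the >4M-point FFT range
# under this assumed limit:
print(min_fft_length(66_000_000))  # -> 4194304 (a 4M FFT)
print(min_fft_length(80_000_000))  # -> 8388608 (an 8M FFT)
```

This is why bugs confined to ">4M FFTs" only bite users testing very large exponents.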
#2176
Feb 2014
Germany
5 Posts |
#2177 |
"Carl Darby"
Oct 2012
Spring Mountains, Nevada
3²×5×7 Posts
HHfromG, none of the new CUDA 6.0 features seem to be particularly useful for CUDALucas. Unified memory would make some of the code simpler but would not otherwise give any improvement, since there are very few host<->device memory transfers going on. CUDALucas already uses cuFFT for all the FFTs, and the slowness of device<->device memory transfers makes multi-GPU FFTs impractical.
Last fiddled with by owftheevil on 2014-05-29 at 14:23
#2178 |
"Ghetto_Child"
Jul 2014
Montreal, QC, Canada
41 Posts
I'm highly confused about which version of CUDALucas I should be using. I have a GTX 295 (Tesla-based, dual GT200b chips). The PDF guide shows only CUDALucas v2.03 with CUDA 3.2 and sm_13 (compute capability 1.3?) for GPUs older than the GF110 Fermi chips, and the readme also says not to use alpha or beta releases. I have not seen a CUDALucas v2.05 with CUDA 3.2 and sm_13 at all, v2.04 is nowhere to be found online, and there is no list of supported hardware per version either. I would appreciate some advice, thank you.
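The version maze above comes down to compute capability: each CUDALucas binary is built for a specific `sm_XX` target, and the GPU must support it. The GT200b chips in a GTX 295 are compute capability 1.3, which is why the sm_13 / CUDA 3.2 build of v2.03 is the match, and why builds for newer toolkits (which dropped compute 1.x support) won't run on it. A small illustrative lookup (the GPU names here are examples, not an official support list):

```python
# Illustrative mapping from GPU chip to CUDA compute capability.
# A CUDALucas binary must be built for a matching sm_XX target.
COMPUTE_CAPABILITY = {
    "GT200b (GTX 295)": "1.3",   # Tesla architecture -> sm_13 builds
    "GF110 (GTX 580)":  "2.0",   # Fermi -> sm_20
    "GK104 (GTX 680)":  "3.0",   # Kepler -> sm_30
    "GK110 (GTX 780)":  "3.5",   # Kepler -> sm_35
}

def sm_target(gpu):
    """Return the sm_XX build target for a GPU, e.g. '1.3' -> 'sm_13'."""
    cc = COMPUTE_CAPABILITY[gpu]
    return "sm_" + cc.replace(".", "")

print(sm_target("GT200b (GTX 295)"))  # -> sm_13
```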
Similar Threads

| Thread | Thread Starter | Forum | Replies | Last Post |
|--------|----------------|-------|---------|-----------|
| Don't DC/LL them with CudaLucas | LaurV | Data | 131 | 2017-05-02 18:41 |
| CUDALucas / cuFFT Performance on CUDA 7 / 7.5 / 8 | Brain | GPU Computing | 13 | 2016-02-19 15:53 |
| CUDALucas: which binary to use? | Karl M Johnson | GPU Computing | 15 | 2015-10-13 04:44 |
| settings for cudaLucas | fairsky | GPU Computing | 11 | 2013-11-03 02:08 |
| Trying to run CUDALucas on Windows 8 CP | Rodrigo | GPU Computing | 12 | 2012-03-07 23:20 |