#321

Jul 2009
Tokyo
1001100010₂ Posts
Thank you mdettweiler, this is good news.
#323

Jul 2003
So Cal
2,663 Posts
Since it's limited to only power-of-2 FFTs, doing double checks around 35-36.5M is the most efficient. Just be sure it doesn't switch over to the 4M FFT.
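A rough way to see why that exponent range pairs with the 2M FFT: each FFT word can only safely carry so many bits of the residue before round-off errors creep in. The ~18 bits-per-word cutoff below is an illustrative assumption (real crossover points depend on the program and the hardware), but it sketches how an exponent maps to the smallest power-of-2 FFT length:

```python
# Sketch: smallest power-of-2 FFT length (in words) for an LL exponent.
# The 18 bits-per-word safety limit is an assumed round number for
# illustration; actual limits vary by implementation and GPU.
BITS_PER_WORD_LIMIT = 18.0

def fft_length(p):
    """Return the smallest power-of-2 FFT length whose bits/word is safe."""
    n = 1
    while p / n > BITS_PER_WORD_LIMIT:
        n *= 2
    return n

print(fft_length(35_000_000) // 1024, "K FFT")  # -> 2048 K FFT
print(fft_length(40_000_000) // 1024, "K FFT")  # -> 4096 K FFT (the jump to avoid)
```

Under this assumed limit, exponents around 35-36.5M sit comfortably in the 2048K (2M) FFT, while pushing toward 40M forces the doubling to 4M that the post warns about.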
#324

A Sunny Moo
Aug 2007
USA
2·47·67 Posts
Quote:
Meanwhile, now that I've got MacLucasFFTW set up on this GPU and have confirmed that it's working right, I am available to help test a version of MacLucasFFTW modified to perform LLR tests. Can any of the CUDA gurus out there take a guess at what exactly would be involved in making such a modification? (I tried re-hardcoding the u0 value manually for a specific LLR test and feeding MacLucasFFTW the number's exponent, but it didn't work--the number is a known prime and it came up composite. I suppose this isn't exactly surprising, since I'm surely oversimplifying the matter by a long shot.)

Alternatively, as Ken_g6 suggested a number of posts back, it might be easier to just make a new program from scratch based on the FFTW-CUDA library that performs Fermat PRP tests. Again, I admit I'm entirely clueless as to how much work would be involved in this. But if it could be done, the result would be even more useful than a CUDA LLR program, since it could be used for any k*b^n+c (as opposed to an LLR test, which only works for k*2^n-1).
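For reference, the Fermat PRP test proposed here is conceptually simple; the hard part on a GPU is performing the huge modular squarings via FFT multiplication. A minimal Python sketch using plain bignum arithmetic, just to show the math being computed:

```python
# Fermat PRP test for N = k*b^n + c: N is a base-3 probable prime
# when 3^(N-1) ≡ 1 (mod N). A GPU implementation would replace
# Python's built-in pow() with FFT-based modular squaring.
def fermat_prp(k, b, n, c, base=3):
    N = k * b**n + c
    return pow(base, N - 1, N) == 1

print(fermat_prp(3, 2, 7, -1))  # 3*2^7 - 1 = 383 is prime   -> True
print(fermat_prp(3, 2, 8, -1))  # 3*2^8 - 1 = 767 = 13*59    -> False
```

This is exactly why the quoted suggestion is attractive: unlike LLR, the same loop of modular squarings works for any k*b^n+c, with no per-number u0 to derive.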
#325

Jun 2010
2³×3×11 Posts
Request:
http://www.mersenneforum.org/showpos...&postcount=177
http://www.mersenneforum.org/showpos...&postcount=208
http://www.mersenneforum.org/showpos...&postcount=274
http://www.mersenneforum.org/showpos...&postcount=324
#326

A Sunny Moo
Aug 2007
USA
14232₈ Posts
#327

Mar 2010
43 Posts
The development of GPU clients for LLR is a terrible idea. It's like the Prisoner's Dilemma:
http://en.wikipedia.org/wiki/Prisoner%27s_dilemma

Let's say there are two groups of people, those with good GPUs and those without: call them Group A and Group B. At first, there's no GPU LLR client. Group A has 2500 primes on the top 5000 list, and Group B has 2500.

One day, a GPU LLR client is released. Group A seizes the opportunity to grab a lead in the top 5000 list, and they put all of their GPUs to work. Group B sees that their primes are quickly beginning to get wiped off the top 5000 list, so they buy GPUs and run them to prevent this from happening.

So now we're back to square one, and both groups each have 2500 primes on the top 5000 list, like before. But they are now worse off. Members of Group B each had to spend hundreds of dollars on good GPUs, and the power consumption of both groups has more than tripled. None of the crunchers are happy after seeing their electric bills go up, and they'll have to live with that each month until they retire from prime-finding DC projects.

If that ever happens, the person we'll have to blame for that mess will be msft.

Last fiddled with by Historian on 2010-09-23 at 06:43
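The scenario above is the standard Prisoner's Dilemma payoff structure. A toy sketch with made-up numbers (the costs and payoffs here are illustrative assumptions, not data from the post) shows why buying GPUs is the dominant choice even though both groups end up worse off:

```python
# Toy Prisoner's Dilemma payoffs for the GPU arms race: payoff is
# top-5000 list share minus hardware/power cost (all numbers assumed).
# Keys are (Group A's choice, Group B's choice); values (A payoff, B payoff).
payoffs = {
    ("gpu", "gpu"): (20, 20),  # both buy GPUs: same 50/50 split, big bills
    ("gpu", "cpu"): (80, 15),  # A's GPUs wipe B's primes off the list
    ("cpu", "gpu"): (15, 80),
    ("cpu", "cpu"): (50, 50),  # status quo: nobody pays extra
}

# "gpu" strictly dominates for Group A whatever Group B does...
assert payoffs[("gpu", "cpu")][0] > payoffs[("cpu", "cpu")][0]  # 80 > 50
assert payoffs[("gpu", "gpu")][0] > payoffs[("cpu", "gpu")][0]  # 20 > 15

# ...yet mutual defection leaves both worse off than the status quo.
assert payoffs[("gpu", "gpu")][0] < payoffs[("cpu", "cpu")][0]  # 20 < 50
print("dilemma holds")
```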
#328

Jul 2003
So Cal
2663₁₀ Posts
#329

Mar 2010
2B₁₆ Posts
Quote:
On the other hand, a possible transition from CPUs to GPUs would be very abrupt, with huge sudden jumps in power consumption.

Last fiddled with by Historian on 2010-09-23 at 07:18
#330

Mar 2010
2B₁₆ Posts
Similar Threads

| Thread | Thread Starter | Forum | Replies | Last Post |
| Don't DC/LL them with CudaLucas | LaurV | Data | 131 | 2017-05-02 18:41 |
| CUDALucas / cuFFT Performance on CUDA 7 / 7.5 / 8 | Brain | GPU Computing | 13 | 2016-02-19 15:53 |
| CUDALucas: which binary to use? | Karl M Johnson | GPU Computing | 15 | 2015-10-13 04:44 |
| settings for cudaLucas | fairsky | GPU Computing | 11 | 2013-11-03 02:08 |
| Trying to run CUDALucas on Windows 8 CP | Rodrigo | GPU Computing | 12 | 2012-03-07 23:20 |