#210
A Sunny Moo
Aug 2007
USA (GMT-5)
141518 Posts
Quote:
LLR's home page is http://jpenne.free.fr/index2.html; from there, you can download the source code for LLR 3.8.1 (CPU, gwnum-based), and from Jean's development page you can get the not-yet-complete FFTW version of the code. (The files to download from there are llrpsrc.zip and llrpisrc.zip; they both seem to be based on FFTW, but I'm not sure which is the more complete version or what the differences are between them.) I'm not sure just how close the FFTW version of LLR is to being ready for actual use, so it may not yet be suitable for a direct conversion to CUFFTW.

What I was thinking was that the easiest route would be to take the existing MacLucasFFTW CUDA application and apply the LL->LLR algorithm modifications (relatively minor as they are) to that. Of course, you're more familiar with your code and the algorithms than I am, so I may not be fully understanding the extent of the modifications here. I do know, though, that Jean's LLR was based directly on George Woltman's Prime95 LL testing program, so it definitely is possible to take an existing LL program and convert it to LLR.

Thanks,
Max
Last fiddled with by mdettweiler on 2010-07-27 at 03:41
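(For reference, here is what the "relatively minor" LL->LLR modification amounts to algorithmically. This is a minimal pure-Python sketch of the Lucas-Lehmer-Riesel test for N = k*2^n - 1; the starting-value search via Jacobi symbols is the generic textbook method, not necessarily how Jean's gwnum/FFTW code structures it, and real implementations do the squarings with weighted FFTs rather than big-integer arithmetic.)

```python
def jacobi(a, n):
    """Jacobi symbol (a/n) for odd positive n, via the standard binary algorithm."""
    assert n > 0 and n % 2 == 1
    a %= n
    result = 1
    while a:
        while a % 2 == 0:
            a //= 2
            if n % 8 in (3, 5):      # (2/n) = -1 when n = 3 or 5 mod 8
                result = -result
        a, n = n, a                  # quadratic reciprocity
        if a % 4 == 3 and n % 4 == 3:
            result = -result
        a %= n
    return result if n == 1 else 0

def lucas_v(k, p, mod):
    """V_k(p) mod `mod` for the Lucas sequence V_0=2, V_1=p, V_{i+1}=p*V_i - V_{i-1}."""
    v, w = 2, p % mod                # the pair (V_m, V_{m+1}), starting at m = 0
    for bit in bin(k)[2:]:           # binary ladder, most significant bit first
        if bit == '1':               # (V_m, V_{m+1}) -> (V_{2m+1}, V_{2m+2})
            v, w = (v * w - p) % mod, (w * w - 2) % mod
        else:                        # (V_m, V_{m+1}) -> (V_{2m}, V_{2m+1})
            v, w = (v * v - 2) % mod, (v * w - p) % mod
    return v

def llr_is_prime(k, n):
    """Lucas-Lehmer-Riesel test: N = k*2^n - 1 is prime iff u_{n-2} == 0."""
    assert k % 2 == 1 and k < 2 ** n and n >= 2
    N = k * 2 ** n - 1
    # Find a parameter P with (P-2 | N) = 1 and (P+2 | N) = -1.
    p = 3
    while not (jacobi(p - 2, N) == 1 and jacobi(p + 2, N) == -1):
        p += 1
    u = lucas_v(k, p, N)             # starting value u_0 = V_k(P) mod N
    for _ in range(n - 2):           # the squaring loop: u <- u^2 - 2 mod N
        u = (u * u - 2) % N
    return u == 0
```

Note that the squaring loop is identical to the Mersenne LL loop (k=1, u_0=4); only the starting-value computation differs, which is why converting an existing LL program to LLR is plausible.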
#211
May 2010
499 Posts
Quote:
If a mid-range GPU can outperform a whole bunch of high-end quad cores by a large margin, it's likely that those without good GPUs would lose interest and quit. It would be nearly impossible to get onto the top-5000 list without a good GPU, and available LLR ranges may have ridiculously long testing times (since the low-n ranges would all have been completed by GPUs). If that were to happen, it would potentially drive away many contributors, leading to a sharp drop in output if the small number of people who have a GPU farm lose interest. The way things are now, most of the major projects have enough participants that a project wouldn't be significantly affected if some people leave.

A similar example can be seen in the Folding@home project. In 2008, PlayStation 3s were, for the first time, able to contribute to that project. IIRC, Folding@home reached 3 (native) PetaFLOPS in summer 2008, 4 PetaFLOPS in fall 2008, and 5 PetaFLOPS in early 2009. The popularity of PlayStation 3s went down shortly after, and today Folding@home is back below 3 PetaFLOPS. It is too early to tell how much further that project's total processing power will decline.

Be careful what you wish for...
#212
May 2010
1111100112 Posts
Just to point out one more thing:
Operation Billion Digits has been around for several years, and was making steady progress until GPUs started contributing a couple of weeks ago. While there is quite a large jump in progress now, it has come at a price: it may no longer be possible for slower machines to contribute in any meaningful way.

From: http://www.mersenneforum.org/showpos...&postcount=402
Quote:

http://www.mersenneforum.org/showpos...66&postcount=2
Quote:
#213
A Sunny Moo
Aug 2007
USA (GMT-5)
3·2,083 Posts
@Oddball: yeah, good points. What we'll probably do if we can get a GPU LLR application at NPLB is request that people only use it for certain ranges, sort of like what OBD is doing. The ranges I'm thinking it would be ideal for are the 11th Drive (relatively small tests near the bottom of the top-5000 list--that threshold is being moved up rather quickly by PrimeGrid's hordes of computers anyway, so adding some GPUs at NPLB would only help us keep up better), and, on the other end of the spectrum, our k=300-400 mini-drive, which covers n>1M tests searching for megabit primes. Primes should be sufficiently few and far between in that search that GPUs speeding up their discovery shouldn't have a large impact on the top-5000. Additionally, we have tons of sub-top-5000 search space that needs to be filled in for the purpose of completeness, and GPUs would be great at plowing through that stuff (which is rather unattractive to many participants due to its lack of particularly tangible returns).

Now if PrimeGrid got wind of the GPU LLR app and started utilizing it via BOINC on their huge Proth efforts, *then* we might have a problem. Due to their already immense firepower they are the primary driving force behind the upward motion of the top-5000 threshold, so adding tons of GPUs to that would make everything go completely haywire.

That said, as much as we'd like for older CPUs to still be able to contribute meaningfully (after all, these projects are meant to be fun), we also don't want to lose sight of our overall goal of extending the contiguously-searched blocks of k and n for Riesel primes as far as possible. That is, surely, the main reason why this is all worth doing: the more of the search space we cover, the more data is available to researchers who can hopefully, eventually, find some clues as to why prime numbers are where they are. So we don't want to hold back progress (the end) for the express purpose of making it easier for anybody to get their very own top-5000 prime even with modest hardware (which is definitely a good thing to have, but nonetheless is only a means to an end). I think as long as we keep GPUs primarily limited to the search regions where they can be most useful with the least adverse effects on the dynamics of prime searching (such as I described above), we should be able to maximize their overall net contribution to the prime search world.

Last fiddled with by mdettweiler on 2010-07-27 at 05:27
#214
Jul 2009
Tokyo
11428 Posts
Thank you, mdettweiler.
I read the source for 10 minutes every day before going to bed; it puts me to sleep fast.
#215
Banned
"Luigi"
Aug 2002
Team Italia
10010110100112 Posts
Quote:
Luigi
#216
"Mark"
Apr 2003
Between here and the
22×7×227 Posts
What about people burning coal to keep their old PIIs and PIIIs running on projects? It is their choice, but using these old computers requires an inordinate amount of power for what they are capable of. I wouldn't be sad to see many of those old computers get recycled.

Another unintended consequence of people switching work over to GPUs is that it could ultimately hurt Intel's dominance. Why would someone pay hundreds of dollars for a new computer when plopping in a new graphics card gives them a lot more bang for the buck?
#217
May 2010
49910 Posts
Quote:
See http://ark.intel.com/Product.aspx?id=27555 for the Pentium III's power consumption; high-end graphics cards, by contrast, consume several hundred watts.
#218
Jun 2003
10011110111112 Posts
Quote:
2. It is not just the chip's power consumption, but the whole system's power consumption. And in that respect, the ratios would be much smaller.
3. But 1 and 2 are not even the real points. If the computation that a P3 performs in a year can be accomplished by a graphics card in a day, which would you use? "For what they are capable of" is the key. And newer technology will beat the crap out of older technology, in that respect.
#219
May 2010
499 Posts
That argument only applies to projects with a fixed end date (Seventeen or Bust, for example). For open-ended projects like GIMPS, someone with a Pentium III would consume 1 unit of coal each day. But if that person were to get a GPU instead, his consumption would end up being something like 10 units of coal per day.
#220
Jun 2003
117378 Posts
Quote:
Per-day consumption is an absurd measure of efficiency for distributed computing. Also, the person could just run the GPU for 1/10th of a day, achieve the same per-day power consumption as the P3, but do much more computation. Would that be better?

EDIT: Or better yet, replace 10 P3s in the project with one GPU, and we're ahead in throughput without increasing power consumption. Win-win!

Last fiddled with by axn on 2010-07-27 at 19:33
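(To put rough numbers on that argument: the wattages and test times below are made-up illustrative figures, not measurements, but they show why energy per completed test, rather than energy per day, is the meaningful metric.)

```python
def kwh(power_watts, hours):
    """Energy in kilowatt-hours drawn by a device at power_watts for `hours` hours."""
    return power_watts * hours / 1000.0

# Assumed figures: a P3-era system drawing 100 W that needs a year per test,
# vs. a 300 W GPU system that finishes an equivalent test in a day.
p3_per_test = kwh(100, 365 * 24)   # energy to complete one test on the P3
gpu_per_test = kwh(300, 24)        # energy to complete one test on the GPU

# Per day the P3 looks "greener" (2.4 kWh/day vs 7.2 kWh/day), but per unit
# of completed work the GPU is over 100x more energy-efficient here.
print(p3_per_test, gpu_per_test, p3_per_test / gpu_per_test)
```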
Similar Threads

| Thread | Thread Starter | Forum | Replies | Last Post |
| Don't DC/LL them with CudaLucas | LaurV | Data | 131 | 2017-05-02 18:41 |
| CUDALucas / cuFFT Performance on CUDA 7 / 7.5 / 8 | Brain | GPU Computing | 13 | 2016-02-19 15:53 |
| CUDALucas: which binary to use? | Karl M Johnson | GPU Computing | 15 | 2015-10-13 04:44 |
| settings for cudaLucas | fairsky | GPU Computing | 11 | 2013-11-03 02:08 |
| Trying to run CUDALucas on Windows 8 CP | Rodrigo | GPU Computing | 12 | 2012-03-07 23:20 |