The first result is in from Gary's GTX 460:
[url=http://www.mersenne.org/report_exponent/?exp_lo=25652651&exp_hi=25652651&B1=Get+status]M25652651[/url] Amazing--a test that would have taken upward of 10 days on one core of a fast CPU took only a little over 2 days! :w00t:
Thank you mdettweiler,
this is good news :lol:
And another one:
[url=http://www.mersenne.org/report_exponent/?exp_lo=24560839&exp_hi=24560839&B1=Get+status]M24560839[/url]
Since it's limited to only power-of-2 FFTs, doing double checks around 35-36.5M is the most efficient. Just be sure it doesn't switch over to the 4M FFT.
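A rough sketch of the arithmetic behind that recommendation. The ~17.5 bits-per-word ceiling below is an assumed illustrative figure, not the program's real limit (that comes from its round-off behaviour), but it shows the shape of the trade-off:

```python
# Illustrative sketch only: the ~17.5 bits-per-word ceiling is an
# assumption for illustration; the actual FFT-length limits in
# MacLucasFFTW come from round-off checks, not this constant.
BITS_PER_WORD_LIMIT = 17.5

def smallest_pow2_fft(p, limit=BITS_PER_WORD_LIMIT):
    """Smallest power-of-2 FFT length keeping p/N bits per word under limit."""
    n = 1
    while p / n > limit:
        n *= 2
    return n

for p in (25652651, 35000000, 36500000, 37000000):
    n = smallest_pow2_fft(p)
    print(f"{p}: {n // 1024}K FFT, {p / n:.2f} bits/word")
```

Under this assumption, a 36.5M exponent nearly fills the 2048K FFT at about 17.4 bits/word, while 37M spills over to the 4096K FFT at under 9 bits/word, doubling the work for little extra range. That is the sweet spot, and the reason to watch for the switch to the 4M FFT.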
[QUOTE=frmky;231042]Since it's limited to only power-of-2 FFTs, doing double checks around 35-36.5M is the most efficient. Just be sure it doesn't switch over to the 4M FFT.[/QUOTE]
I'll keep that in mind--thanks. Meanwhile, now that I've got MacLucasFFTW set up on this GPU and have confirmed that it's working right, I am available to help test a version of MacLucasFFTW modified to perform LLR tests. Can any of the CUDA gurus out there take a guess at what would be involved in making such a modification? (I tried re-hardcoding the u[sub]0[/sub] value manually for a specific LLR test and feeding MacLucasFFTW the number's exponent, but it didn't work: the number is a known prime and it came up composite. That isn't exactly surprising, since I'm surely oversimplifying the matter by a long shot.) Alternatively, as Ken_g6 suggested a number of posts back, it might be easier to write a new program from scratch based on the FFTW-CUDA library that performs Fermat PRP tests. Again, I admit I'm entirely clueless about how much work that would involve. But if it could be done, the result would be even more useful than a CUDA LLR program, since it could be used for any k*b^n+c (as opposed to an LLR test, which only works for k*2^n-1).
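For reference, the LLR algorithm itself is small when written with plain bignum arithmetic instead of FFTs. The sketch below (Python, illustrative only; the starting value follows Rödseth's Jacobi-symbol criterion) also shows why re-hardcoding u[sub]0[/sub] alone in a Mersenne LL code cannot work: every squaring must be reduced mod k*2^n-1 rather than mod 2^p-1, which changes the weighted-transform arithmetic throughout, not just the seed.

```python
# Illustrative sketch of the LLR test for N = k*2^n - 1 (k odd, k < 2^n),
# using plain Python bignums -- no FFTs, so it is only practical for
# tiny numbers. The starting-value rule is Rodseth's criterion.

def jacobi(a, n):
    """Jacobi symbol (a/n) for odd positive n."""
    a %= n
    result = 1
    while a:
        while a % 2 == 0:
            a //= 2
            if n % 8 in (3, 5):
                result = -result
        a, n = n, a
        if a % 4 == 3 and n % 4 == 3:
            result = -result
        a %= n
    return result if n == 1 else 0

def find_v1(N):
    """Smallest v >= 3 with (v-2 / N) = 1 and (v+2 / N) = -1 (Rodseth)."""
    v = 3
    while not (jacobi(v - 2, N) == 1 and jacobi(v + 2, N) == -1):
        v += 1
    return v

def lucas_v(k, v, N):
    """V_k(v) mod N via the binary Lucas chain:
    V_{2m} = V_m^2 - 2,  V_{2m+1} = V_m*V_{m+1} - v  (Q = 1)."""
    x, y = v, (v * v - 2) % N          # V_1, V_2
    for bit in bin(k)[3:]:             # skip the leading 1-bit of k
        if bit == '1':
            x, y = (x * y - v) % N, (y * y - 2) % N
        else:
            x, y = (x * x - 2) % N, (x * y - v) % N
    return x

def llr_is_prime(k, n):
    """Riesel/LLR test: N = k*2^n - 1 is prime iff s_{n-2} == 0,
    where s_0 = V_k(v1) mod N and s_{i+1} = s_i^2 - 2 mod N."""
    N = k * (1 << n) - 1
    s = lucas_v(k, find_v1(N), N)      # u_0 depends on k, unlike LL's fixed 4
    for _ in range(n - 2):
        s = (s * s - 2) % N            # the modulus is k*2^n - 1, NOT 2^p - 1
    return s == 0
```

For example, llr_is_prime(3, 7) correctly reports 3*2^7-1 = 383 as prime, while llr_is_prime(5, 6) rejects 5*2^6-1 = 319 = 11*29. The iteration is the same squaring loop as Lucas-Lehmer; what a GPU port has to change is the seed computation and, crucially, the modular reduction inside the FFT-based squaring.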
[QUOTE=MooMoo2;226932]Enough with the exaggerations on both sides (pro GPU and anti GPU).[/QUOTE]
I'm neutral on this issue, but will the pro-GPU side learn to show some patience? The issue has been beaten to death already. Yes, we know that some of you want GPUs for k*2^n+/-1 numbers, so quit repeating it every few weeks.
Request: [URL]http://www.mersenneforum.org/showpost.php?p=218207&postcount=177[/URL]
[quote]Your efforts in developing this LL application are greatly appreciated, and even more so if you can help in porting it to LLR![/quote]
Repetition 1: [URL]http://www.mersenneforum.org/showpost.php?p=222992&postcount=208[/URL]
[quote]would you by chance be willing to port this application to the LLR algorithm[/quote]
Repetition 2: [URL]http://www.mersenneforum.org/showpost.php?p=226683&postcount=274[/URL]
[quote]Any progress on this?[/quote]
Repetition 3: [URL]http://www.mersenneforum.org/showpost.php?p=231044&postcount=324[/URL]
[quote]I am available to help test a version of MacLucasFFTW modified to perform LLR tests. Can any of the CUDA gurus out there take a guess at what exactly would be involved in making such a modification?[/quote]
Bugging doesn't pay off, but patience does. Learn to get some of it.
[QUOTE=The Carnivore;231056]I'm neutral on this issue, but will the pro-GPU side learn to show some patience? The issue has been beaten to death already. Yes, we know that some of you want GPUs for k*2^n+/-1 numbers, so quit repeating it every few weeks.
Request: [URL]http://www.mersenneforum.org/showpost.php?p=218207&postcount=177[/URL]
Repetition 1: [URL]http://www.mersenneforum.org/showpost.php?p=222992&postcount=208[/URL]
Repetition 2: [URL]http://www.mersenneforum.org/showpost.php?p=226683&postcount=274[/URL]
Repetition 3: [URL]http://www.mersenneforum.org/showpost.php?p=231044&postcount=324[/URL]
Bugging doesn't pay off, but patience does. Learn to get some of it.[/QUOTE]
Hey, keep it cool, man... my most recent post was mainly to let people know that I now actually have a GPU with which to help test this stuff. That does change the situation a bit, so it seemed to warrant a new post.
The development of GPU clients for LLR is a terrible idea. It's like the Prisoner's Dilemma:
[URL="http://en.wikipedia.org/wiki/Prisoner%27s_dilemma"]http://en.wikipedia.org/wiki/Prisoner%27s_dilemma[/URL]
Say there are two groups of people: those with good GPUs and those without. Call them Group A and Group B. At first there is no GPU LLR client; Group A has 2500 primes on the top-5000 list, and Group B has the other 2500.
One day, a GPU LLR client is released. Group A seizes the opportunity to grab a lead on the top-5000 list and puts all of its GPUs to work. Group B sees its primes quickly being wiped off the list, so its members buy GPUs to prevent that from happening.
So now we're back to square one: both groups again have 2500 primes each on the top-5000 list, like before. But they are now worse off. Members of Group B each had to spend hundreds of dollars on good GPUs, and the power consumption of both groups has more than tripled. None of the crunchers are happy after seeing their electric bills go up, and they'll have to live with that each month until they retire from prime-finding DC projects.
If that ever happens, the person we'll have to blame for that mess will be msft.
[QUOTE=Historian;231062]The development of GPU clients for LLR is a terrible idea. It's like the Prisoner's Dilemma:[/QUOTE]
And this is why GIMPS should be restricted to 90 MHz Pentiums only. P90 years forever!
[QUOTE=frmky;231063]And this is why GIMPS should be restricted to 90 MHz Pentiums only. P90 years forever![/QUOTE]
The transition from Pentiums to Core i7s was gradual, without huge sudden jumps in power consumption. A transition from CPUs to GPUs, on the other hand, would be abrupt, with a huge sudden jump in power consumption.
[QUOTE=Historian;231062]The development of GPU clients for LLR is a terrible idea. It's like the Prisoner's Dilemma:
[URL]http://en.wikipedia.org/wiki/Prisoner%27s_dilemma[/URL] [/QUOTE]
The Prisoner's Dilemma analogy actually works better than I originally thought. From the occasionally reliable Wikipedia:
[quote]The prisoner's dilemma applies to the decision whether or not to use performance enhancing drugs in athletics. Given that the drugs have an approximately equal impact on each athlete, it is to all athletes' advantage that no athlete take the drugs (because of the side effects). However, if any one athlete takes the drugs, they will gain an advantage unless all the other athletes do the same. In that case, the advantage of taking the drugs is removed, but the disadvantages (side effects) remain.[/quote]
Now replace "athletics" with "prime searching projects", "athlete" with "cruncher", "drugs" with "GPUs", and "side effects" with "higher costs":
[quote]The prisoner's dilemma applies to the decision whether or not to use GPUs in prime searching projects. Given that GPUs have an approximately equal impact on each cruncher, it is to all crunchers' advantage that no cruncher buys the GPUs (because of the higher costs). However, if any one cruncher buys the GPUs, they will gain an advantage unless all the other crunchers do the same. In that case, the advantage of buying the GPUs is removed, but the disadvantages (higher costs) remain.[/quote]
That matches almost exactly what I was describing in an earlier post.
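The argument can even be put in payoff-matrix form. A toy sketch, where every number is a made-up illustration (payoff = share of top-5000 primes minus an assumed hardware/power cost):

```python
# Toy payoff matrix for the GPU-adoption argument. All numbers are
# illustrative assumptions, not measurements.
# Strategies: "cpu" (abstain) or "gpu" (buy and run GPUs).

COST = {"cpu": 0, "gpu": 3}           # assumed cost of buying/powering GPUs
SHARE = {("cpu", "cpu"): 5,           # equal split of the list, no arms race
         ("gpu", "cpu"): 9,           # GPU side wipes the other off the list
         ("cpu", "gpu"): 1,
         ("gpu", "gpu"): 5}           # equal split again, but at GPU cost

def payoff(mine, theirs):
    """A group's payoff: list share minus its own running cost."""
    return SHARE[(mine, theirs)] - COST[mine]

# "gpu" dominates: it pays more whatever the other group does...
assert payoff("gpu", "cpu") > payoff("cpu", "cpu")
assert payoff("gpu", "gpu") > payoff("cpu", "gpu")
# ...yet mutual adoption leaves both groups worse off than mutual restraint:
assert payoff("gpu", "gpu") < payoff("cpu", "cpu")
```

Under these assumed numbers, buying GPUs is each group's dominant strategy, yet the (gpu, gpu) outcome pays both groups less than (cpu, cpu): exactly the dilemma structure described above.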