#1
Dec 2009
Peine, Germany
331 Posts
Quote:
I run both CUDALucas and mfaktc and am quite happy with that. We live in a free world, and it is commonly agreed that every GIMPS user can do what he wants. But we don't need more TF! CPU TF also gets squeezed out. With regard to CUDALucas, I read that it spends most of its time in the CUDA libs, so there's not much hope of speeding it up. The only idea I have is to restrict GHz-days for TF. Better ideas? (Starcraft 2 uses a so-called bonus pool, which runs empty if you play too often. Maybe we could "bonus" LLs? This would be useful and motivating for "small" GIMPS users.)
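One possible reading of that bonus-pool suggestion, sketched in Python: bonus credit accrues steadily per user, and each returned LL result spends part of the pool as extra GHz-days, while TF results earn no bonus. All class names, rates, and caps below are made-up illustrations, not anything the server actually implements.

Code:
class BonusPool:
    """Starcraft-2-style bonus pool: fills over time, is spent on LL results."""

    def __init__(self, accrual_per_day=2.0, cap=100.0):
        self.accrual_per_day = accrual_per_day  # bonus GHz-days accrued per day (assumed)
        self.cap = cap                          # pool stops growing at this size (assumed)
        self.pool = 0.0

    def tick(self, days):
        # The pool fills while you are not returning results, like SC2's bonus pool.
        self.pool = min(self.cap, self.pool + days * self.accrual_per_day)

    def credit_ll_result(self, base_credit, spend_fraction=0.25):
        # An LL result cashes in part of the pool as extra credit; TF results would not.
        bonus = self.pool * spend_fraction
        self.pool -= bonus
        return base_credit + bonus

pool = BonusPool()
pool.tick(30)                                   # a month of accrual -> pool = 60.0
print(pool.credit_ll_result(base_credit=60.0))  # 60 base + 15 bonus = 75.0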

#2
Basketry That Evening!
"Bunslow the Bold"
Jun 2011
40<A<43 -89<O<-88
3×29×83 Posts
I do remember reading somewhere on this forum about a mathematician who said he was implementing an irrational-base DFFT for GPUs, demonstrating a 30-50% improvement over CUDALucas. He also said he'd have something ready by summer 2011, but nobody's heard from him since spring. Others know better than I do about this.

I can't say anything with regard to CUDALucas, but I don't see what's wrong with doing the TF. We eliminate exponents more than twice as quickly as we could LL-test them, even using CUDALucas (as opposed to CPUs). Of course the survivors will still need to be tested anyway, but the CPUs aren't bad at that, and they have to do something, don't they?

#3
Sep 2008
Kansas
59·67 Posts
Quote:

#4
Dec 2010
Monticello
1795₁₀ Posts
I remember that! What I'd say of CUDALucas versus mfaktc versus mfakto is:

1) You can't run CUDALucas on OpenCL... not yet; porting it is an open project.
2) You can't run CUDALucas on some of the lower-end GPU cards. I don't think this matters in the grand scheme.
3) TF on GPUs is so much faster that TF no longer makes sense on CPUs. And crediting TF as if it were being done on CPUs also doesn't make sense... but it is what we have at the moment.

The proper fix for the "credit inflation" on TF work would be to re-scale it so that credit for an LL-D test from a GPU is the same as credit for TF work done in the same GPU time, with a little compensation for the necessary CPU involvement -- because the CPU core feeding mfaktc is a CPU that is not doing LL or LL-D work. That is, equal work with a GPU should get equal credit, regardless of the work type. If TF credit is reduced, so be it....

Of course, there are those of us who think that actually finding a factor of a Mersenne number is a stronger, more emotionally satisfying proof of compositeness than a pair of LL tests... which is where the current system of TF credit is leading many people right now. How strong this pull remains once we've advanced by half a dozen or so bit levels remains to be seen. But I can tell you that at 100M digits, an LL test is going to take a year... so it's worth a couple of days of TF first.

If you don't like it, go polish up CUDALucas, or write an OpenCLucas, so these tools become available and easy to use. I simply have a commitment to mfaktc to keep first.
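A minimal sketch of that re-scaling idea, assuming you can estimate how many GHz-days per day a given GPU earns at each work type; the throughput numbers and the small CPU-feed compensation below are placeholder assumptions, not measured values or an official formula.

Code:
# Hypothetical "equal GPU time, equal credit" rescaling of TF credit.
GPU_LL_GHZDAYS_PER_DAY = 60.0    # assumed: credit/day a GPU earns running CUDALucas (LL-D)
GPU_TF_GHZDAYS_PER_DAY = 400.0   # assumed: credit/day the same GPU earns running mfaktc
CPU_FEED_COMPENSATION  = 1.05    # assumed: small bump for the CPU core feeding mfaktc

def rescaled_tf_credit(raw_tf_credit):
    # Scale raw TF credit so a GPU-day of TF pays the same as a GPU-day of LL-D.
    factor = GPU_LL_GHZDAYS_PER_DAY / GPU_TF_GHZDAYS_PER_DAY
    return raw_tf_credit * factor * CPU_FEED_COMPENSATION

print(rescaled_tf_credit(100.0))   # 100 raw TF GHz-days -> 15.75 under these assumptions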

#5
Romulan Interpreter
"name field"
Jun 2011
Thailand
41×251 Posts
Seconding (almost) everything you said. Just to point out that this thread came as a split from the arguing that started in the main thread with gpu272, from post 184 onwards; there is more arguing about the subject over there. And yes, I believe that any user should be able to take whatever assignment he wants, and whatever he considers fun or worthwhile for him. No one can dictate to me what I do with my computer and my money. Up to now I have no complaints, otherwise I wouldn't be here anymore. Factoring is fun, and in the months since I became a member here I have remembered a lot of forgotten things and math, from the time when I was only studying to pass the exams and doing everything possible to forget it all afterwards. And I have also learned a lot of NEW things in these months, from all of you, guys.
Last fiddled with by LaurV on 2011-11-24 at 04:00 |

#6
Basketry That Evening!
"Bunslow the Bold"
Jun 2011
40<A<43 -89<O<-88
7221₁₀ Posts
My two cents: the current credit system can be linearly related to an operations count, since it's based on the capabilities of whichever reference processor George used to derive it. That is to say, LL and TF on CPUs get roughly the same number of FLOPS per GHz-day, which is emphatically not true for the same work on a GPU. Therefore I would not advocate an entirely new credit system, only one that counts operations instead of cycles on (insert processor here).
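As a rough illustration of counting operations instead of reference-processor cycles, the sketch below estimates the raw operation counts of the two work types directly. The constants (8 ops per FFT point per log2 level, 64 ops per trial-division candidate, the example FFT length) are assumptions chosen for illustration, not George's actual credit formulas.

Code:
import math

def ll_op_estimate(p, fft_len):
    # One LL test of M(p) is roughly p squarings, each costing a few
    # n*log2(n) floating-point operations for an FFT of length n.
    ops_per_squaring = 8 * fft_len * math.log2(fft_len)   # constant factor assumed
    return p * ops_per_squaring

def tf_op_estimate(p, bit_from, bit_to, ops_per_candidate=64):
    # Trial factoring M(p) from 2^bit_from to 2^bit_to: candidate factors have
    # the form 2*k*p + 1, so their raw density is about 1/(2p) before sieving.
    candidates = (2**bit_to - 2**bit_from) / (2 * p)
    return candidates * ops_per_candidate                  # cost per candidate assumed

p = 55_000_000
print(f"LL of M({p}): ~{ll_op_estimate(p, 3 * 2**20):.2e} ops")
print(f"TF 71->72 bits: ~{tf_op_estimate(p, 71, 72):.2e} ops")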

#7
Mar 2003
Melbourne
5·103 Posts
In my mind, there are two different motivation methodologies - carrot and stick. There have been a lot of stick suggestions lately. (I'm not pointing at any one individual - it seems pretty common.)

Say you have two methods - call them A and B. If you want to increase A relative to B, the better way is to increase the motivation for A rather than decrease the motivation for B. If one were to insist on decreasing the motivation for B, you risk outcome C - decreasing the motivation to be part of the project at all.

If you have two separate ratings for TF by CPU and TF by GPU, you run a fraud risk - people reclassifying GPU results as CPU results, and making mistakes in the process. And the whole argument for re-rating TF-by-GPU results could be moot if next-generation cards have LL GHz-days/day rates similar to TF. Won't that be fun :)

-- Craig

#8
"Lucan"
Dec 2006
England
2·3·13·83 Posts
I feel fairly confident that I am loosely translating George here:

Oliver gets GPUs to TF 100x faster than CPUs, so do the TF on GPUs rather than CPUs, and take it a few more bits deeper. That should be "end of story" ATM.

David

Rant: Who the **** is sitting on 6326 55M TF assignments?

Last fiddled with by davieddy on 2011-11-24 at 12:21

#9
Dec 2010
Monticello
11100000011₂ Posts
Silly Davieddy.....
This is a large, distributed project, and most of our workers aren't actually paying any attention to this thread... so their CPUs are merrily doing TF.

Right now we have a nearly inevitable distortion in the project, assuming GHz-days are your only motivation: it's a lot easier to earn a GHz-day by TF on my GPU than by any other method. This was not the case on CPUs, so maximising GHz-days also maximised project progress (LL tests completed, or factors found that eliminate LL tests). Not that good GPUs aren't decently hot at LL tests... they're just not quite as hot as they are as TF machines. When I finally get access to my high-end GPU again next week, I'll set it to do more LL-Ds in addition to the TF it does.

Even super-hot TF can only eliminate so many LL tests, around 10%. The exponential wall means that returns diminish rapidly with increasing bit level.
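To put numbers on that exponential wall, here is a small sketch using the usual GIMPS rule of thumb that a factor of M(p) lies between 2^b and 2^(b+1) with probability roughly 1/b; the heuristic and the example bit range (72 to 78) are assumptions for illustration.

Code:
# Diminishing returns per TF bit level, using the ~1/b rule of thumb.
def chance_factor_in_level(b):
    # Heuristic: probability that a factor lies between 2^b and 2^(b+1).
    return 1.0 / b

total = 0.0
for b in range(72, 78):     # example: pushing an exponent from 72 to 78 bits
    chance = chance_factor_in_level(b)
    total += chance
    print(f"{b}->{b+1} bits: ~{chance:.1%} chance of a factor, at ~2x the work of the previous level")

print(f"All six levels combined: ~{total:.1%} of LL tests eliminated")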

#10
Nov 2011
Quebec, Canada
32 Posts
Quote:
I personally left GIMPS a year ago because there was no way of using my newly acquired CUDA cards, while using only my CPU required 2 1/2 to 3 months to complete A SINGLE LL test... A few weeks ago I was very happy to find new apps that can crunch for GIMPS using my two CUDA cards. Now I'm a little frustrated to see that only TF is worth the time invested. I started an LL with CUDALucas two weeks ago, and it seems it will take another month to finish... A complete waste of power... YES, it's quicker than doing the same on a CPU, but... I'm sure the software will improve with time, but for now, a re-balancing of the GHz-days is the minimum needed to help every GIMPS donor CONSIDER doing LL tests, which, as I said before, are the heart of GIMPS. So, if Prime95 reads this little complaint, I hope something will be done to calm my conscience...

#11
Basketry That Evening!
"Bunslow the Bold"
Jun 2011
40<A<43 -89<O<-88
16065₈ Posts
Keep in mind that those of us running mfakt* are doing so with very little impact on what our LL throughput was before. I have four cores; nominally all four do LL. Currently, one of them feeds mfaktc, which (usually) finds a factor every couple of days; that eliminates exponents much more quickly than using that core for LL would. I realize that we can't factor everything, but I'm using my GPU at relatively little cost to my LL throughput. mfakt* hurts CUDALucas throughput, yes, but I would argue that even if we all ran CUDALucas instead of mfakt*, the total increase in LL throughput would be very, very slight; I'd wager our impact is less than 1% overall. Clearly mfakt* has a much bigger impact on TF and on eliminating exponents.
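A back-of-envelope version of that comparison, with both rates assumed for illustration (a factor roughly every two GPU-days, and about 45 days per CUDALucas LL test, in line with what post #10 reports):

Code:
FACTOR_EVERY_N_DAYS = 2.0    # assumed: mfaktc finds a factor about every two GPU-days
LL_DAYS_PER_TEST    = 45.0   # assumed: one CUDALucas first-time LL test at 2011 speeds

# A found factor retires its exponent outright; an LL result still needs a
# matching double-check before the exponent is considered cleared.
exponents_retired_per_day_tf = 1.0 / FACTOR_EVERY_N_DAYS
exponents_cleared_per_day_ll = 1.0 / (2.0 * LL_DAYS_PER_TEST)

print(f"TF: ~{exponents_retired_per_day_tf:.2f} exponents retired per day")
print(f"LL: ~{exponents_cleared_per_day_ll:.3f} exponents cleared per day")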