2011-11-29, 22:12  #1 
If I May
"Chris Halsall"
Sep 2002
Barbados
5×23×83 Posts 
How should we compare GPUs to CPUs?
With regards to the Overall System Status report, a current example of which is:
Code:
                 Factors   Work Saved      Average GHz/GPU Days per...      Total GHz/GPU Days...
Work Type        Found     P1     LL/DC    Factor       Work Saved          Factoring      Saved
DC TF            43        0      43       1.964        24.110              84.493         1,036.735
LL TF            148       87     296      4.055        181.016             600.269        26,790.486
P1               35               70       57.800       176.849             2,023.006      6,189.743
Total            226       87     409                                       2,707.769      34,016.965
A follow-up question: how do we make the data easy to read and understand?

Last fiddled with by chalsall on 2011-11-30 at 03:07 
2011-11-30, 01:24  #2 
Basketry That Evening!
"Bunslow the Bold"
Jun 2011
40<A<43 -89<O<-88
7221_{10} Posts 
I'll just make a suggestion: perhaps divide P1 credit by 2.53 so that we can compare GPUDays to CPUDays directly?
Also, what does "Average GHz/GPU Days per Work Saved" mean? And in the last column, "27,368.183" — is that GHzDays of LL tests saved, or CPU- or GPU-Days of LL tests saved?

Last fiddled with by chalsall on 2011-11-30 at 03:12 Reason: Edited (slightly) for this new thread's context. 
2011-11-30, 01:52  #3  
If I May
"Chris Halsall"
Sep 2002
Barbados
5×23×83 Posts 
Quote:
Answer to the second question: the last column is the total amount of LL (and DC, if appropriate) GHzDays saved, plus the amount of P1 GHzDays if (and only if) it actually was saved (read: a factor was found before the P1 had been done). Perhaps it would help if I color-coded the individual cells to show which are GPU Days and which are GHzDays?

Last fiddled with by chalsall on 2011-11-30 at 03:11 Reason: Edited (slightly) for this new thread's context. 

2011-11-30, 02:16  #4  
Basketry That Evening!
"Bunslow the Bold"
Jun 2011
40<A<43 -89<O<-88
1C35_{16} Posts 
Quote:
In other words, to compare 3 GPUDays to 50 GHzDays, I make the conversion anyway, so why not put the conversion into the chart? Hopefully that makes sense... Quote:
Yes please, color coding would make it much nicer; that, or put the units into each cell and have simpler titles up top.

Last fiddled with by chalsall on 2011-11-30 at 03:13 Reason: Edited (slightly) for this new thread's context. 

2011-11-30, 02:48  #5 
Basketry That Evening!
"Bunslow the Bold"
Jun 2011
40<A<43 -89<O<-88
7221_{10} Posts 
Maybe something like this:
Code:
<html>
<h1>Overall System Progress</h1>
<p>
<table class="factdepth" width="100%">
  <tr><th>Work Type</th><th>Factors Found</th><th>P1 Tests Saved</th><th>LL/DC Tests Saved</th>
      <th>Average Work Done per Factor</th><th>Average Work Saved per Factor</th>
      <th>Total Factoring Work Done</th><th>Total Work Saved</th></tr>
  <tr><td nowrap class="fdr"><b>DC TF</b></td><td>44</td><td>0</td><td>44</td>
      <td>2.093 GPUDays</td><td>24.145 GHzDays</td>
      <td class="fdr">92.093 GPUDays</td><td class="fds">1,062.381 GHzDays</td></tr>
  <tr><td nowrap class="fdr"><b>LL TF</b></td><td>151</td><td>90</td><td>302</td>
      <td>4.101 GPUDays</td><td>181.246 GHzDays</td>
      <td class="fdr">619.307 GPUDays</td><td class="fds">27,368.183 GHzDays</td></tr>
  <tr><td nowrap class="fdr"><b>P1</b></td><td>35</td><td> </td><td>70</td>
      <td>58.842/2.6 CPUDays</td><td>176.849 GHzDays</td>
      <td class="fdr">2,059.472/2.6 CPUDays</td><td class="fds">6,189.743 GHzDays</td></tr>
  <tr class="fdr"><th>Total</th><td>230</td><td>90</td><td>416</td>
      <td> </td><td> </td>
      <td>92*100+619*100+2059 GHzDays</td><td class="fds">34,620.309 GHzDays</td></tr>
</table>
<p>Work completed listed in GPUDays and CPUDays. Work saved listed in GHzDays.
<br>1 GPUDay = 100 GHzDays
<br>1 CPUDay = 2.6 GHzDays</p>
</html>
Last fiddled with by Dubslow on 2011-11-30 at 02:48 
2011-11-30, 03:28  #6  
If I May
"Chris Halsall"
Sep 2002
Barbados
5×23×83 Posts 
Quote:
One problem I see with this solution is that there are still several very old CPUs participating in GIMPS. Also, I would like to try to find a formula which we can all agree on to get a "GPU" metric which can fairly be compared to (CPU) GHzDays. The reason is that this is what PrimeNet uses to award credit, and (thanks to James) is what my system calculates for work saved. As an example, earlier today this factor was found: Code:
+----------+------------------------+----------+---------+------------+------------+------------+
| Exponent | Factor                 | BitLevel | GHzDays | GHzDaysLL  | GHzDaysDC  | GHzDaysP1  |
+----------+------------------------+----------+---------+------------+------------+------------+
| 51986681 | 1572812258497661927719 | 70.41383 | 4.59978 | 99.593002  | 99.593002  | 3.6537394  |
+----------+------------------------+----------+---------+------------+------------+------------+
So I guess my fundamental question is: do you (and others) think that dividing the calculated GHzDays by 100 to get "GPUDays" is reasonable when the work is done by a GPU, so that it can then be compared to work done by CPUs? 
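The conversion being asked about can be sketched in a few lines. This is only an illustration of the proposal, not an official PrimeNet formula; the 100 GHzDays/day figure is the thread's working assumption for an "average" GPU.

```python
# Sketch of the proposed conversion: divide PrimeNet GHzDays credit by
# ~100 (the assumed throughput of an 'average' GPU, per this thread)
# to express GPU work as "GPUDays". The constant is under discussion.

GHZDAYS_PER_GPU_DAY = 100.0  # assumed average GPU throughput, GHzDays/day

def ghz_days_to_gpu_days(ghz_days: float) -> float:
    """Express a GHzDays credit figure as 'GPUDays' of work."""
    return ghz_days / GHZDAYS_PER_GPU_DAY

# Using the TF credit from the factor above (4.59978 GHzDays):
print(ghz_days_to_gpu_days(4.59978))  # ~0.046 GPUDays
```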

2011-11-30, 03:51  #7 
Basketry That Evening!
"Bunslow the Bold"
Jun 2011
40<A<43 -89<O<-88
3×29×83 Posts 
The dividing by 100 came from the fact that the 'average' GPU gets around 100 GHzDays of credit per day; hence why I suggested dividing by the 'average' CPU throughput. The other suggestion was GHzDays/100*3, the 'normalized credit', to compare them.

2011-11-30, 04:05  #8 
Romulan Interpreter
Jun 2011
Thailand
2·3·5·313 Posts 
We could make a poll-like thread. For me the formula would fit reasonably. With the current GPU/CPU load, I can do about 60 mfaktc bitlevels per day on DC-front, for a total of a little more than 100 GHzDays in 24 hours, and I need about 24 hours to DC one exponent of the same size, for which GIMPS is giving me 27-29 GHzDays of credit.

2011-11-30, 04:20  #9 
Basketry That Evening!
"Bunslow the Bold"
Jun 2011
40<A<43 -89<O<-88
3·29·83 Posts 
Which formula? Though the poll is a good idea.
Last fiddled with by Dubslow on 2011-11-30 at 04:21 
2011-11-30, 04:28  #10 
Romulan Interpreter
Jun 2011
Thailand
2×3×5×313 Posts 
Your formula, like dividing by 100 and multiplying by 2.6, or some values around that. The poll is more intended for some older GPUs, for which I have no idea how the numbers combine. And it can not really be a poll, which is why I said "poll-like", because checking boxes is not enough. People must comment, write down numbers. This current thread is OK; in fact we do not need a new one.
I think that every "GPU to 72" user, or every guy who has a GPU, should post three rows like:

1. I can do xxxxx GHzDays per day (per 24 hours, xx hours, whatever) of TF using mfaktc/mfakto/else/others.
2. I can complete one (two, many, how many?) DC-front assignments (that is 25-29M exponents, else?) in XXX hours on my GPU using CudaLucas/else/others.
3. I can complete one (two, many, how many?) LL-front assignments (that is 45-60M, else?) in XXX hours on my GPU using CudaLucas/else/others.

Optional line:

4. I have GPU xxxx running on yyyy (OS name, 32/64 bits), CPU xxxx (relevant for mfaktX, which also uses CPU power, therefore cutting into the work the CPU can do: P95, P-1, etc.).

Then we can see how they compare**. It could also be a good "reference" benchmark thread for people trying to set up a new GPU. When I did that I always wondered "are my numbers (ms/iteration) good enough for my hardware? Should I use different settings in the ini files? Can I improve my output if I adjust this and that?" etc. It would be very useful to have a "benchmark" thread. One could know at once that he is doing something wrong if he sees his numbers are half of what other people with comparable hardware get.

edit **: compare (1) against (2), and then (1) against (3). For CudaLucas, (2) and (3) are quite disproportional. One LL-front test at 50M should theoretically take at most 8 times as long as one DC test, even if you use "school multiplication". That is because you would need double the number of iterations, and each iteration has double the size, involving at most 4 times the (school-grade) multiplications. Under no circumstances should an LL test take more than 8 times a DC test. However, I have heard people saying that a DC test needs 30 hours and an LL test needs 300 hours on some older GPUs. Obviously, that kind of GPU has some bottleneck somewhere when used for bigger numbers. On the other hand, LL tests should take about 5 times as long as DC tests with the optimized FFT. 
That is not always the case; I have heard people talking about 3-4 times longer for LL compared to DC (like 5-6-7 days for an LL, but two days for a DC). Obviously here their GPU has some "waiting/idle" periods for small numbers; it is not kept busy enough, and they should play with the program's settings.

Last fiddled with by LaurV on 2011-11-30 at 04:57 
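The upper-bound argument above can be checked with a little arithmetic. This is only an illustration of the scaling claim, using textbook cost models (n² for schoolbook multiplication, n·log n for FFT multiplication) and an assumed 50M starting size; it is not a benchmark of any real GPU.

```python
# Illustration of the LL-vs-DC time-ratio argument: doubling the
# exponent doubles the iteration count, and each iteration multiplies
# numbers of twice the size.  Schoolbook multiplication costs ~n^2,
# so per-iteration work grows at most 4x, giving 2 * 4 = 8x overall.
import math

def schoolbook_ratio(size_factor: float) -> float:
    # iterations scale linearly; per-iteration cost scales as n^2
    return size_factor * size_factor**2

def fft_ratio(size_factor: float, n: float) -> float:
    # with FFT multiplication, per-iteration cost scales as n*log(n)
    new_n = size_factor * n
    return size_factor * (new_n * math.log(new_n)) / (n * math.log(n))

print(schoolbook_ratio(2.0))           # 8.0 -- the hard upper bound
print(round(fft_ratio(2.0, 50e6), 1))  # ~4.2 -- near the ~5x quoted
```

A GPU reporting a 10x (or 150x) LL/DC ratio is therefore well outside what either cost model allows, which supports the bottleneck/idle-time diagnosis above.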
2011-11-30, 05:33  #11  
If I May
"Chris Halsall"
Sep 2002
Barbados
5·23·83 Posts 
Quote:
I have updated the page with the "GPUDays = GHzDays / 100 * 3" formula where appropriate, and added asterisks to indicate which fields are GPUDays. We can tweak the values for "100" and "3" once we get more feedback. I have also modified the table headers a bit to be (hopefully) clearer. (And, it's now 0130 Barbados Time. Bed calls... Thank goodness tomorrow is a holiday (Independence Day) here in Bim....) 
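The normalization just adopted can be sketched as follows. This is a hypothetical rendering of the thread's "GHzDays / 100 * 3" formula; both constants are explicitly provisional and may be tweaked with feedback, as noted above.

```python
# Sketch of the provisional "normalized credit" formula from this thread:
# GPU GHzDays credit divided by ~100 (average GPU GHzDays per day),
# then multiplied by ~3 (average CPU GHzDays per day), so GPU work
# can be compared against CPU GHzDays.  Both constants are tunable.

AVG_GPU_GHZDAYS_PER_DAY = 100.0  # provisional, per the thread
AVG_CPU_GHZDAYS_PER_DAY = 3.0    # provisional, per the thread

def normalized_credit(gpu_ghz_days: float) -> float:
    """Express GPU GHzDays credit as CPU-comparable GHzDays."""
    return gpu_ghz_days / AVG_GPU_GHZDAYS_PER_DAY * AVG_CPU_GHZDAYS_PER_DAY

# One day of average GPU work (100 GHzDays of credit) normalizes to
# one day of average CPU work (3 GHzDays):
print(normalized_credit(100.0))  # 3.0
```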
