How should we compare GPUs to CPUs?
Regarding the [URL="http://gpu.mersenne.info/reports/overall/"]Overall System Status[/URL] report, a current example of which is:
[CODE]                Factors    Work Saved      Average GHz/GPU Days per...    Total GHz/GPU Days...
Work Type       Found      P-1    LL/DC    Factor      Work Saved         Factoring      Saved
DC TF           43         0      43       1.964       24.110             84.493         1,036.735
LL TF           148        87     296      4.055       181.016            600.269        26,790.486
P-1             35                70       57.800      176.849            2,023.006      6,189.743
Total           226        87     409                                     2,707.769      34,016.965[/CODE]
The question is: how should we compare the work of GPUs to that of CPUs so the comparisons are "fair"? A follow-up question: how do we make the data easy to read and understand? |
I'll just make a suggestion: perhaps divide P-1 credit by 2.5-3 so that we can compare GPU-Days to CPU-Days directly?
Also, what does "Average GHz/GPU Days per Work Saved" mean? And in the last column, "27,368.183" -- is that GHz-Days of LL tests saved, or CPU- or GPU-Days of LL tests saved? |
[QUOTE=Dubslow;280454]I'll just make suggestion: Perhaps divide P-1 credit by 2.5-3 so that we can compare GPU-Days to CPU-Days directly?[/QUOTE]
I saw that suggestion before, but didn't really understand why. Could you explain the rationale in some more detail?

[QUOTE=Dubslow;280454]Also, what does "Average GHz/GPU Days per Work Saved" mean? And in the last column, "27,368.183" -- is that GHz-Days of LL tests saved, or CPU or GPU-Days of LL tests saved?[/QUOTE]

Answer to the first question: the last column divided by the number of factors found.

Answer to the second question: the last column is the total amount of LL (and DC, if appropriate) [U]GHzDays[/U] saved, plus the amount of P-1 [U]GHzDays[/U] if (and only if) it actually was saved (read: a factor was found before P-1 had been done).

Perhaps it would help if I color coded the individual cells to show which were GPU-Days and which were GHz-Days? |
[QUOTE=chalsall;280456]
I saw that suggestion before, but didn't really understand why. Could you explain the rationale in some more detail?[/quote]

We can compare GHz-Days, which makes a certain amount of sense. Or we could compare GPU-Days to CPU-Days, which is what I was suggesting. It doesn't make sense to me to compare GPU-Days with GHz-Days: saying this took 3 GPU-Days per factor where that took 50 GHz-Days per factor isn't a good comparison. To make it, I mentally work out how many CPU-Days that 50 GHz-Days is anyway (by dividing by 2.5-3). The other possible option is to take the GHz-Days for TF, divide by 100, and then multiply by 2.5-3 to get a "normalized GHz-Days"; I don't like that as much, though. In other words, to compare 3 GPU-Days to 50 GHz-Days I make the conversion anyway, so why not put the conversion into the chart? Hopefully that makes sense...

[QUOTE=chalsall;280456]Answer to the first question: the last column divided by the number of factors found.[/quote]

Ah, okay. Perhaps a title like "Work Saved per Factor", or simplify the current title to "Average GHz-Days per Work Unit Saved". (See below about titles.)

[QUOTE=chalsall;280456]Perhaps it would help if I color coded the individual cells to show which were GPU days and which were GHz?[/QUOTE]

Yes please; color coding would make it much nicer. That, or put the units into each cell and have simpler titles up top. |
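To make the proposed conversion concrete, here is a minimal sketch (purely illustrative; the 2.6 GHz "average CPU clock" and the function name are my assumptions, not anything PrimeNet defines):

```python
# Hypothetical sketch of the conversion discussed above: translate
# PrimeNet GHz-Days into approximate "wall-clock" CPU-Days so they can
# be compared against GPU-Days directly.
AVG_CPU_GHZ = 2.6  # assumed average CPU clock speed, in GHz (tunable)

def ghz_days_to_cpu_days(ghz_days):
    """Convert PrimeNet GHz-Days of credit into approximate CPU-Days."""
    return ghz_days / AVG_CPU_GHZ

# Example from the post: "50 GHz-Days per factor" works out to
# roughly 19.2 CPU-Days, which can then be set against 3 GPU-Days.
cpu_days = ghz_days_to_cpu_days(50)
```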
Maybe something like this:
[code]<html>
<h1>Overall System Progress</h1>
<table class="factdepth" width="100%">
<tr><th>Work Type</th><th>Factors Found</th><th>P-1 Tests Saved</th><th>LL/DC Tests Saved</th><th>Average Work Done per Factor</th><th>Average Work Saved per Factor</th><th>Total Factoring Work Done</th><th>Total Work Saved</th></tr>
<tr><td nowrap class="fdr"><b>DC TF</b></td><td>44</td><td>0</td><td>44</td><td>2.093 GPU-Days</td><td>24.145 GHz-Days</td><td class="fdr">92.093 GPU-Days</td><td class="fds">1,062.381 GHz-Days</td></tr>
<tr><td nowrap class="fdr"><b>LL TF</b></td><td>151</td><td>90</td><td>302</td><td>4.101 GPU-Days</td><td>181.246 GHz-Days</td><td class="fdr">619.307 GPU-Days</td><td class="fds">27,368.183 GHz-Days</td></tr>
<tr><td nowrap class="fdr"><b>P-1</b></td><td>35</td><td>&nbsp;</td><td>70</td><td>58.842/2.6 CPU-Days</td><td>176.849 GHz-Days</td><td class="fdr">2,059.472/2.6 CPU-Days</td><td class="fds">6,189.743 GHz-Days</td></tr>
<tr class="fdr"><th>Total</th><td>230</td><td>90</td><td>416</td><td>&nbsp;</td><td>&nbsp;</td><td>92*100+619*100+2059 GHz-Days</td><td>34,620.309 GHz-Days</td></tr>
</table>
<p>Work completed listed in GPU-Days and CPU-Days. Work saved listed in GHz-Days.<br>
1 GPU-Day = 100 GHz-Days<br>
1 CPU-Day = 2.6 GHz-Days</p>
</html>[/code] (I'm not really sure where all the borders and cell colors came from, but this is just an example; I modified it from what your page showed.) |
[QUOTE=Dubslow;280457]We can compare GHz-Days, which makes a certain amount of sense. Or we could compare GPU-Days to CPU-Days, which is what I was suggesting. It doesn't make sense to me to compare GPU-Days with GHz-Days; saying this took 3 GPU-Days per factor where that was 50 GHz-Days per factor isn't a good comparison: to make it, I mentally figure out how many CPU-Days that 50GHzDays is anyways (by dividing that by 2.5-3). The other possible option is to take GHz-Days for TF, divide by 100 and then multiply by 2.5-3 to get a 'normalized GHz-Days'; I don't like it as much though.[/QUOTE]
Ah, I think I understand... You're trying to get to "wall-clock" days for each category, and the 2.5-3.0 multiplier/divider (as appropriate) reflects today's modern CPUs running at that many GHz? Is this correct? One problem I see with this solution is that there are still several very old CPUs participating in GIMPS.

Also, I would like to try to find a formula which we can all agree on to get a "GPU" metric which can fairly be compared to (CPU) GHzDays. The reason is that this is what PrimeNet uses to award credit, and (thanks to James) is what my system calculates for work saved. As an example, earlier today this factor was found:

[CODE]+----------+------------------------+----------+---------+-----------+-----------+-----------+
| Exponent | Factor                 | BitLevel | GHzDays | GHzDaysLL | GHzDaysDC | GHzDaysP1 |
+----------+------------------------+----------+---------+-----------+-----------+-----------+
| 51986681 | 1572812258497661927719 | 70.41383 | 4.59978 | 99.593002 | 99.593002 | 3.6537394 |
+----------+------------------------+----------+---------+-----------+-----------+-----------+[/CODE]

All of the "GHzDays" fields were calculated as PrimeNet would have calculated them (thanks to [URL="http://mersenne-aries.sili.net/credit.php"]James for providing the needed code[/URL]). The first one is the amount of GHzDays credit awarded for the find; the latter three are the amounts of CPU work saved by the find.

So I guess my fundamental question is: do you (and others) think that dividing the calculated GHzDays by 100 to get "GPUDays" is reasonable when the work is done by a GPU, so that it can then be compared to work done by CPUs? |
The dividing by 100 came from the fact that the 'average' GPU earns around 100 GHz-Days of credit per day; hence my suggestion to divide by the 'average' CPU throughput. The other suggestion was GHz-Days/100*3 as the 'normalized credit' to compare them.
|
We could make a poll-like thread. For me the formula fits reasonably well. With the current GPU/CPU load, I can do about 60 mfaktc bit levels per day on the DC front, for a total of a little more than 100 GHz-Days in 24 hours, and I need about 24 hours to DC one exponent of the same size, for which GIMPS gives me 27-29 GHz-Days of credit.
|
Which formula? Though the poll is a good idea.
|
Your formula: dividing by 100 and multiplying by 2.6, or values thereabouts. The poll is more intended for older GPUs, for which I have no idea how the numbers combine. And it can't really be a poll, which is why I said "poll-like": checking boxes is not enough. People must comment and write down numbers. This current thread is OK; in fact we do not need a new one.
I think that every gpu272 user, or anyone who has a GPU, should post three lines like:

1. I can do xxxxx GHz-Days per day (per 24 hours, xx hours, whatever) of TF using mfaktc/mfakto/others.
2. I can complete one (two, many, how many?) DC-front assignment (that is, 25-29M exponents) in XXX hours on my GPU using CudaLucas/others.
3. I can complete one (two, many, how many?) LL-front assignment (that is, 45-60M exponents) in XXX hours on my GPU using CudaLucas/others.

Optional line:

4. I have GPU xxxx running on yyyy (OS name, 32/64 bits), CPU xxxx (relevant for mfaktX, which also uses CPU power, therefore cutting into the work the CPU can do: P95 P-1, etc.).

Then we can see how they compare**. It could also be a good "reference" benchmark thread for people trying to set up a new GPU. When I did that, I always wondered: "Are my numbers (ms/iteration) good enough for my hardware? Should I use different settings in the ini files? Can I improve my output if I adjust this and that?" It would be very useful to have a "benchmark" thread. One would know at once that he is doing something wrong if he sees his numbers are half of what other people with comparable hardware get.

edit **: Compare (1) against (2), and then (1) against (3). For CudaLucas, (2) and (3) are quite disproportionate. One LL-front test at 50M should theoretically take at most 8 times longer than one DC test, even if you use "schoolbook" multiplication: you need double the number of iterations, and each iteration has double the size, involving at most 4 times as many (schoolbook) multiplications. Under no circumstances should an LL test take more than 8 times as long as a DC test. However, I have heard people saying that a DC test needs 30 hours and an LL test needs 300 hours on some older GPUs. Obviously, that kind of GPU has a bottleneck somewhere when used for bigger numbers.

On the other hand, LL tests should take about 5 times longer than DC tests with optimized FFTs. That is not always the case: I have heard people talking about 3-4 times longer for an LL compared to a DC (like 5-6-7 days for an LL, but two days for a DC). Obviously their GPUs have some "waiting/idle" periods for small numbers; they are not busy enough, and those users should play with the programs' settings. |
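The back-of-the-envelope bound above can be checked with a short sketch (the exponents and the cost models are my illustrative assumptions, not measurements from any real GPU):

```python
import math

# Rough sanity check of the LL-vs-DC timing-ratio argument.
# Assumptions (mine, for illustration): iteration count scales linearly
# with the exponent; per-iteration cost is n**2 for schoolbook
# multiplication and n*log(n) for FFT-based multiplication; and the
# LL-front exponent is twice the DC-front exponent.
def ll_over_dc_ratio(cost_per_iter, dc_exp=27_000_000):
    ll_exp = 2 * dc_exp
    dc_work = dc_exp * cost_per_iter(dc_exp)   # iterations * cost per iteration
    ll_work = ll_exp * cost_per_iter(ll_exp)
    return ll_work / dc_work

schoolbook = ll_over_dc_ratio(lambda n: n * n)        # exactly 8x, the hard ceiling
fft = ll_over_dc_ratio(lambda n: n * math.log(n))     # only a little over 4x
```

So a 10x LL/DC ratio (30 hours vs 300 hours) cannot be explained by the arithmetic alone; it points at a hardware or configuration bottleneck, as the post argues.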
[QUOTE=Dubslow;280477]The dividing by 100 came from the fact that the 'average' GPU gets around 100 GHz-days credit per day; hence why I suggested divide by the 'avergage' cpu throughput. The other suggestion was GHz-Days/100*3 the 'normalized credit' to compare them[/QUOTE]
I can be slow sometimes... I understand your point now. I have updated the page with the "GPUDays = GHzDays / 100 * 3" formula where appropriate, and added asterisks to indicate which fields are GPUDays. We can tweak the values "100" and "3" once we get more feedback. I have also modified the table headers a bit to be (hopefully) clearer. (And it's now 01:30 Barbados time. Bed calls... Thank goodness tomorrow is a holiday (Independence Day) here in Bim....) |
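For reference, the conversion now applied on the page is a one-liner; a minimal sketch (the constant values are the tunables discussed above, and the function name is mine):

```python
# Sketch of the "GPUDays = GHzDays / 100 * 3" formula described above.
# Both constants are explicitly provisional in the discussion.
GHZ_DAYS_PER_GPU_DAY = 100  # assumed credit an average GPU earns per day
AVG_CPU_GHZ = 3             # assumed average CPU clock speed, in GHz

def gpu_days(ghz_days):
    """Convert GHzDays credit earned on a GPU into comparable 'GPUDays'."""
    return ghz_days / GHZ_DAYS_PER_GPU_DAY * AVG_CPU_GHZ
```

With these values, one average GPU-day of TF credit (100 GHzDays) maps to 3 "GPUDays", i.e. roughly what a 3 GHz CPU would accumulate in a day.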
Powered by vBulletin® Version 3.8.11
Copyright ©2000 - 2021, Jelsoft Enterprises Ltd.