mersenneforum.org How should we compare GPUs to CPUs?

2011-11-29, 22:12   #1
chalsall
If I May

"Chris Halsall"
Sep 2002
Barbados

3⁵×43 Posts

How should we compare GPUs to CPUs?

With regards to the Overall System Status report, a current example of which is:

Code:
                           Work Saved    Average GHz/GPU Days per...    Total GHz/GPU Days...
Work Type   Factors Found   P-1   LL/DC   Factor     Work Saved          Factoring     Saved
DC TF             43          0     43      1.964       24.110              84.493    1,036.735
LL TF            148         87    296      4.055      181.016             600.269   26,790.486
P-1               35                70     57.800      176.849           2,023.006    6,189.743
Total            226         87    409                                   2,707.769   34,016.965

The question is, how should we compare the work of GPUs to that of CPUs so the comparisons are "fair"? A follow-up question: how do we make the data easy to read and understand?

Last fiddled with by chalsall on 2011-11-30 at 03:07
2011-11-30, 01:24   #2
Dubslow
Basketry That Evening!

"Bunslow the Bold"
Jun 2011
40<A<43 -89<O<-88

I'll just make a suggestion: perhaps divide P-1 credit by 2.5-3 so that we can compare GPU-Days to CPU-Days directly?

Also, what does "Average GHz/GPU Days per Work Saved" mean? And in the last column, "27,368.183" -- is that GHz-Days of LL tests saved, or CPU or GPU-Days of LL tests saved?
2011-11-30, 01:52   #3
chalsall
If I May

"Chris Halsall"
Sep 2002

28D1₁₆ Posts

Quote:
 Originally Posted by Dubslow I'll just make a suggestion: Perhaps divide P-1 credit by 2.5-3 so that we can compare GPU-Days to CPU-Days directly?
I saw that suggestion before, but didn't really understand why. Could you explain the rationale in some more detail?

Quote:
 Originally Posted by Dubslow Also, what does "Average GHz/GPU Days per Work Saved" mean? And in the last column, "27,368.183" -- is that GHz-Days of LL tests saved, or CPU or GPU-Days of LL tests saved?
Answer to first question: last column divided by number of factors found.

Answer to second question: last column is the total amount of LL (and DC if appropriate) GHzDays time saved, plus the amount of P-1 GHzDays if (and only if) it actually was saved (read: a factor was found before P-1 had been done).

Perhaps it would help if I color coded the individual cells to show which were GPU-Days and which were GHz-Days?

Last fiddled with by chalsall on 2011-11-30 at 03:11 Reason: Edited (slightly) for this new thread's context.

2011-11-30, 02:16   #4
Dubslow

"Bunslow the Bold"
Jun 2011
40<A<43 -89<O<-88

1110000110101₂ Posts

Quote:
 Originally Posted by chalsall I saw that suggestion before, but didn't really understand why. Could you explain the rational in some more detail?
We can compare GHz-Days, which makes a certain amount of sense. Or we could compare GPU-Days to CPU-Days, which is what I was suggesting. It doesn't make sense to me to compare GPU-Days with GHz-Days; saying this took 3 GPU-Days per factor where that took 50 GHz-Days per factor isn't a good comparison: to make it, I mentally figure out how many CPU-Days that 50 GHz-Days is anyway (by dividing by 2.5-3). The other possible option is to take the GHz-Days for TF, divide by 100, and then multiply by 2.5-3 to get a 'normalized GHz-Days'; I don't like that as much though.

In other words, to compare 3 GPU-Days to 50 GHz-Days, I make the conversion anyways, so why not put the conversion into the chart?

Hopefully that makes sense...
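The mental conversion described above can be sketched in a few lines. The constants below (2.6 GHz-days per CPU-day, 100 GHz-days per GPU-day) are the rough thread-consensus averages, not official PrimeNet values:

```python
CPU_GHZDAYS_PER_DAY = 2.6    # rough credit an 'average' CPU earns per wall-clock day
GPU_GHZDAYS_PER_DAY = 100.0  # rough credit an 'average' GPU earns per wall-clock day

def cpu_days(ghz_days: float) -> float:
    """Wall-clock days an average CPU would need to earn this much credit."""
    return ghz_days / CPU_GHZDAYS_PER_DAY

def gpu_days(ghz_days: float) -> float:
    """Wall-clock days an average GPU would need to earn this much credit."""
    return ghz_days / GPU_GHZDAYS_PER_DAY

# Comparing '3 GPU-Days per factor' against '50 GHz-Days per factor':
print(f"50 GHz-Days is about {cpu_days(50):.1f} CPU-Days")
```

With both sides in wall-clock days, "3 GPU-Days per factor" versus roughly 19 CPU-Days per factor is a direct comparison.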
Quote:
 Originally Posted by chalsall Answer to first question: last column divided by number of factors found.
Ah, okay. Perhaps a title like "Work saved per factor" or simplify the current title to "Average GHz-Days per Work unit saved". (See below about titles)
Quote:
 Originally Posted by chalsall Perhaps it would help if I color coded the individual cells to show which were GPU days and which were GHz?
Yes please, a color code would make it much nicer; that, or put the units into each cell and have simpler titles up top.

Last fiddled with by chalsall on 2011-11-30 at 03:13 Reason: Edited (slightly) for this new thread's context.

2011-11-30, 02:48   #5
Dubslow
Basketry That Evening!

"Bunslow the Bold"
Jun 2011
40<A<43 -89<O<-88

Overall System Progress

Code:
Work Type   Factors   P-1 Tests   LL/DC Tests   Average Work          Average Work        Total Factoring               Total Work
            Found     Saved       Saved         Done per Factor       Saved per Factor    Work Done                     Saved
DC TF          44        0            44        2.093 GPU-Days         24.145 GHz-Days       92.093 GPU-Days             1,062.381 GHz-Days
LL TF         151       90           302        4.101 GPU-Days        181.246 GHz-Days      619.307 GPU-Days            27,368.183 GHz-Days
P-1            35                     70        58.842/2.6 CPU-Days   176.849 GHz-Days    2,059.472/2.6 CPU-Days         6,189.743 GHz-Days
Total         230       90           416                                                  92*100+619*100+2059 GHz-Days  34,620.309 GHz-Days

Work completed listed in GPU-Days and CPU-Days. Work saved listed in GHz-Days.
1 GPU-Day = 100 GHz-Days
1 CPU-Day = 2.6 GHz-Days
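The unevaluated Total cell above (92*100+619*100+2059 GHz-Days) can be worked out from the footnote conversions; a quick sketch using the row values from the mock table:

```python
GHZDAYS_PER_GPU_DAY = 100.0  # from the footnote: 1 GPU-Day = 100 GHz-Days
GHZDAYS_PER_CPU_DAY = 2.6    # from the footnote: 1 CPU-Day = 2.6 GHz-Days

# Total factoring work done, all converted back to GHz-Days:
dc_tf = 92.093 * GHZDAYS_PER_GPU_DAY   # DC TF, done on GPUs
ll_tf = 619.307 * GHZDAYS_PER_GPU_DAY  # LL TF, done on GPUs
p1 = (2059.472 / GHZDAYS_PER_CPU_DAY) * GHZDAYS_PER_CPU_DAY  # P-1, done on CPUs

total = dc_tf + ll_tf + p1
print(f"Total factoring work done: {total:,.3f} GHz-Days")
```

The GPU rows dominate: converting GPU-Days back at 100 GHz-Days each gives roughly 73,200 GHz-Days of total factoring work done.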

 (I'm not really sure where all the borders and cell colors came from, but then this is just an example. I did modify this from what your page showed.) Last fiddled with by Dubslow on 2011-11-30 at 02:48
2011-11-30, 03:28   #6
chalsall
If I May

"Chris Halsall"
Sep 2002

3⁵×43 Posts

Quote:
 Originally Posted by Dubslow We can compare GHz-Days, which makes a certain amount of sense. Or we could compare GPU-Days to CPU-Days, which is what I was suggesting. It doesn't make sense to me to compare GPU-Days with GHz-Days; saying this took 3 GPU-Days per factor where that took 50 GHz-Days per factor isn't a good comparison: to make it, I mentally figure out how many CPU-Days that 50 GHz-Days is anyway (by dividing by 2.5-3). The other possible option is to take the GHz-Days for TF, divide by 100, and then multiply by 2.5-3 to get a 'normalized GHz-Days'; I don't like that as much though.
Ah, I think I understand... You're trying to get to "wall-clock" Days for each category, and the 2.5-3.0 multiplier / divider (as appropriate) is a reflection of today's modern CPUs being at that many GHz? Is this correct?

One problem I see with this solution is that there are still several very old CPUs participating in GIMPS. Also, I would like to try to find a formula which we can all agree on to get a "GPU" metric which can fairly be compared to (CPU) GHzDays. The reason is that this is what PrimeNet uses to award credit, and (thanks to James) is what my system calculates for work saved.

As an example, earlier today this factor was found:

Code:
+----------+------------------------+----------+---------+-----------+-----------+-----------+
| Exponent | Factor                 | BitLevel | GHzDays | GHzDaysLL | GHzDaysDC | GHzDaysP1 |
+----------+------------------------+----------+---------+-----------+-----------+-----------+
| 51986681 | 1572812258497661927719 | 70.41383 | 4.59978 | 99.593002 | 99.593002 | 3.6537394 |
+----------+------------------------+----------+---------+-----------+-----------+-----------+
Of the "GHzDays" fields were calculated as PrimeNet would have calculated them (thanks to James for providing the needed code). The first one is the amount of GHzDays credit awarded for the find, the latter three is the amount of CPU work saved by the find.

So I guess my fundamental question is: do you (and others) think that dividing the calculated GHzDays by 100 to get "GPUDays" is reasonable when the work is done by a GPU, so that it can then be compared to work done by CPUs?
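Applying the proposed divide-by-100 conversion to the factor record above (a sketch; the P-1 term counts as saved only because the factor turned up before P-1 was run):

```python
ghz_days_credit = 4.59978  # TF credit PrimeNet awards for the find
ll_saved = 99.593002       # LL test no longer needed
dc_saved = 99.593002       # DC test no longer needed
p1_saved = 3.6537394       # P-1 skipped, since the factor came first

gpu_days_credit = ghz_days_credit / 100  # proposed 'GPUDays = GHzDays / 100'
cpu_work_saved = ll_saved + dc_saved + p1_saved

print(f"{gpu_days_credit:.3f} GPU-Days of TF effort")
print(f"{cpu_work_saved:.1f} GHz-Days of CPU work saved")
```

So under this proposal, roughly 0.046 GPU-Days of trial-factoring effort saved about 203 GHz-Days of CPU work for this one exponent.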

2011-11-30, 03:51   #7
Dubslow
Basketry That Evening!

"Bunslow the Bold"
Jun 2011
40<A<43 -89<O<-88
2011-11-30, 04:05   #8
LaurV
Romulan Interpreter

"name field"
Jun 2011
Thailand

2·17·293 Posts

We could make a poll-like thread. For me the formula would fit reasonably. With the current GPU/CPU load, I can do about 60 mfaktc bitlevels per day on the DC front, for a total of a little more than 100 GHz-days in 24 hours, and I need about 24 hours to DC one exponent of the same size, for which GIMPS gives me 27-29 GHz-days of credit.
2011-11-30, 04:20   #9
Dubslow
Basketry That Evening!

"Bunslow the Bold"
Jun 2011
40<A<43 -89<O<-88
2011-11-30, 04:28   #10
LaurV
Romulan Interpreter

"name field"
Jun 2011
Thailand

2·17·293 Posts

Your formula. Like dividing by 100 and multiplying by 2.6, or some values around that. The poll is more intended for older GPUs, for which I have no idea how the numbers combine. And it can not really be a poll (that is why I said "poll-like"), because checking boxes is not enough: people must comment and write down numbers. This current thread is OK; in fact we do not need a new one.

I think that every gpu272 user, or every guy who has a GPU, should post three rows like:

1. I can do xxxxx GHz-days per day (per 24 hours, xx hours, whatever) of TF using mfaktc/mfakto/else/others.
2. I can complete one (two, many, how many?) DC-front assignments (that is 25-29M expos, else?) in XXX hours on my GPU using CudaLucas/else/others.
3. I can complete one (two, many, how many?) LL-front assignments (that is 45-60M, else?) in XXX hours on my GPU using CudaLucas/else/others.

Optional line:

4. I have GPU xxxx running on yyyy (OS name, 32/64 bits), CPU xxxx (relevant for mfaktX, which also uses CPU power, therefore cutting into the work the CPU can do: P95 P-1, etc.).

Then we can see how they compare**. It could also be a good "reference" benchmark thread for people trying to set up a new GPU. When I did that, I always wondered: "Are my numbers (ms/iteration) good enough for my hardware? Should I use different settings in the ini files? Can I improve my output if I adjust this and that?" It would be very useful to have a "benchmark" thread. One could know at once that he is doing something wrong if he sees his numbers are half of what other people with comparable hardware get.

edit **: compare (1) against (2), and then (1) against (3). For CudaLucas, (2) and (3) are quite disproportionate. One LL-front test at 50M should theoretically take at most 8 times as long as one DC test, even if you use "school multiplication": you need double the number of iterations, and each iteration has double the size, involving at most 4 times the (school-grade) multiplication work. Under no circumstances should an LL test take more than 8 times as long as a DC test. However, I have heard people saying that a DC test needs 30 hours and an LL test needs 300 hours on some older GPUs; obviously that kind of GPU has a bottleneck somewhere when used for bigger numbers. On the other hand, with an optimized FFT, LL tests should take about 5 times as long as DC tests. That is not always the case either: I have heard people talking about 3-4 times longer for LL compared to DC (like 5-6-7 days for an LL, but two days for a DC). Obviously there their GPU has some "waiting/idle" periods on the smaller numbers; it is not busy enough, and they should play with the program's settings.

Last fiddled with by LaurV on 2011-11-30 at 04:57
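LaurV's bounds above can be sanity-checked with a little arithmetic. Assuming a 50M LL-front exponent against a 25M DC-front exponent (so the operand size exactly doubles), a sketch:

```python
from math import log2

dc_exp = 25_000_000  # assumed DC-front exponent
ll_exp = 50_000_000  # assumed LL-front exponent, double the size

iters = ll_exp / dc_exp  # an LL test needs twice as many squarings

# Schoolbook multiplication costs ~n^2 per squaring, so doubling the
# operand quadruples each iteration: 2 iterations x 4 cost = 8x total.
schoolbook_ratio = iters * (ll_exp / dc_exp) ** 2

# FFT multiplication costs ~n*log(n) per squaring, giving a much
# smaller per-iteration penalty for the larger operand.
fft_ratio = iters * (ll_exp * log2(ll_exp)) / (dc_exp * log2(dc_exp))

print(f"schoolbook upper bound: {schoolbook_ratio:.0f}x")
print(f"FFT-based estimate:     {fft_ratio:.1f}x")
```

This gives the 8x schoolbook ceiling and an FFT-based estimate a little over 4x, consistent with the ~5x expected in practice; anything far above 8x (like the 30-hour DC vs 300-hour LL reports) points at a hardware bottleneck rather than the arithmetic itself.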
2011-11-30, 05:33   #11
chalsall
If I May

"Chris Halsall"
Sep 2002

3⁵×43 Posts

Quote:
 Originally Posted by Dubslow The dividing by 100 came from the fact that the 'average' GPU gets around 100 GHz-days of credit per day; hence why I suggested dividing by the 'average' CPU throughput. The other suggestion was GHz-Days/100*3, the 'normalized credit', to compare them.
I can be slow sometimes... I understand your point now.

I have updated the page with the "GPUDays = GHzDays / 100 * 3" formula where appropriate, and added asterisks to indicate which fields are GPUDays.

We can tweak the values for "100" and "3" once we get more feedback.
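The update described above can be captured as a tiny helper; the 100 and 3 below are exactly the two tweakable constants, not fixed PrimeNet values:

```python
GHZDAYS_PER_GPU_DAY = 100.0  # tweakable: credit an 'average' GPU earns per day
CPU_GHZ = 3.0                # tweakable: clock speed of an 'average' modern CPU

def normalized_gpu_days(ghz_days: float) -> float:
    """The 'GPUDays = GHzDays / 100 * 3' conversion used on the report page."""
    return ghz_days / GHZDAYS_PER_GPU_DAY * CPU_GHZ

# One wall-clock day of average GPU credit, rescaled:
print(normalized_gpu_days(100.0))
```

So 100 GHz-Days of GPU credit (one GPU-day) maps to 3.0 in the rescaled units, and changing either constant later only means editing one line.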

I have also modified the table headers a bit to be (hopefully) clearer.

(And, it's now 0130 Barbados Time. Bed calls... Thank goodness tomorrow is a holiday (Independence Day) here in Bim....)

