#1
If I May
"Chris Halsall"
Sep 2002
Barbados
3⁵×43 Posts
With regards to the Overall System Status report, a current example of which is:
Code:
           Factors   Work Saved    Average GHz/GPU Days per...   Total GHz/GPU Days...
Work Type  Found     P-1   LL/DC   Factor Work   Saved           Factoring    Saved
DC TF      43        0     43      1.964         24.110          84.493       1,036.735
LL TF      148       87    296     4.055         181.016         600.269      26,790.486
P-1        35              70      57.800        176.849         2,023.006    6,189.743
Total      226       87    409                                   2,707.769    34,016.965

A follow-up question: how do we make the data easy to read and understand?
Last fiddled with by chalsall on 2011-11-30 at 03:07
#2
Basketry That Evening!
"Bunslow the Bold"
Jun 2011
40<A<43 -89<O<-88
3·29·83 Posts
I'll just make a suggestion: perhaps divide the P-1 credit by 2.5-3 so that we can compare GPU-Days to CPU-Days directly?
Also, what does "Average GHz/GPU Days per Work Saved" mean? And in the last column, "27,368.183" -- is that GHz-Days of LL tests saved, or CPU- or GPU-Days of LL tests saved?
Last fiddled with by chalsall on 2011-11-30 at 03:12 Reason: Edited (slightly) for this new thread's context.
#3
If I May
"Chris Halsall"
Sep 2002
Barbados
28D1₁₆ Posts
Quote:
Quote:
Answer to the second question: the last column is the total amount of LL (and DC, if appropriate) GHz-Days saved, plus the P-1 GHz-Days if (and only if) that work actually was saved (read: a factor was found before the P-1 had been done).
Perhaps it would help if I color-coded the individual cells to show which were GPU-Days and which were GHz-Days?
Last fiddled with by chalsall on 2011-11-30 at 03:11 Reason: Edited (slightly) for this new thread's context.
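[Editor's note] The rule just described can be sketched in a few lines. This is my own illustration, not code from the reporting system; the function and parameter names are hypothetical.

```python
def total_work_saved(ll_saved, dc_saved, p1_saved, p1_already_done):
    """Total GHz-Days saved by a found factor: the LL (and DC) test credit,
    plus the P-1 credit only if the factor arrived before P-1 had been run."""
    total = ll_saved + dc_saved
    if not p1_already_done:
        # The P-1 work was genuinely avoided, so it counts as saved.
        total += p1_saved
    return total

# Example with the credits of the factor shown later in this thread,
# assuming no P-1 had been attempted yet:
print(round(total_work_saved(99.593002, 99.593002, 3.6537394,
                             p1_already_done=False), 4))  # -> 202.8397
```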
#4
Basketry That Evening!
"Bunslow the Bold"
Jun 2011
40<A<43 -89<O<-88
1110000110101₂ Posts
Quote:
In other words, to compare 3 GPU-Days to 50 GHz-Days, I make the conversion anyway, so why not put the conversion into the chart? Hopefully that makes sense...
Quote:
Yes please; color-coding would make it much nicer. That, or put the units into each cell and have simpler titles up top.
Last fiddled with by chalsall on 2011-11-30 at 03:13 Reason: Edited (slightly) for this new thread's context.
#5
Basketry That Evening!
"Bunslow the Bold"
Jun 2011
40<A<43 -89<O<-88
3×29×83 Posts
Maybe something like this:
Code:
<html>
<h1>Overall System Progress</h1>
<p>
<table class="factdepth" width="100%">
  <tr><th>Work Type</th><th>Factors Found</th><th>P-1 Tests Saved</th><th>LL/DC Tests Saved</th><th>Average Work Done per Factor</th><th>Average Work Saved per Factor</th><th>Total Factoring Work Done</th><th>Total Work Saved</th></tr>
  <tr><td nowrap class="fdr"><b>DC TF</b></td><td>44</td><td>0</td><td>44</td><td>2.093 GPU-Days</td><td>24.145 GHz-Days</td><td class="fdr">92.093 GPU-Days</td><td class="fds">1,062.381 GHz-Days</td></tr>
  <tr><td nowrap class="fdr"><b>LL TF</b></td><td>151</td><td>90</td><td>302</td><td>4.101 GPU-Days</td><td>181.246 GHz-Days</td><td class="fdr">619.307 GPU-Days</td><td class="fds">27,368.183 GHz-Days</td></tr>
  <tr><td nowrap class="fdr"><b>P-1</b></td><td>35</td><td> </td><td>70</td><td>58.842/2.6 CPU-Days</td><td>176.849 GHz-Days</td><td class="fdr">2,059.472/2.6 CPU-Days</td><td class="fds">6,189.743 GHz-Days</td></tr>
  <tr class="fdr"><th>Total</th><td>230</td><td>90</td><td>416</td><td> </td><td> </td><td>92*100+619*100+2059 GHz-Days</td><td>34,620.309 GHz-Days</td></tr>
</table>
<p>Work completed listed in GPU-Days and CPU-Days. Work saved listed in GHz-Days.
<br>1 GPU-Day = 100 GHz-Days
<br>1 CPU-Day = 2.6 GHz-Days</p>
</html>
Last fiddled with by Dubslow on 2011-11-30 at 02:48
#6
If I May
"Chris Halsall"
Sep 2002
Barbados
3⁵×43 Posts
Quote:
One problem I see with this solution is that there are still several very old CPUs participating in GIMPS. Also, I would like to try to find a formula which we can all agree on, to get a "GPU" metric which can fairly be compared to (CPU) GHz-Days. The reason is that this is what PrimeNet uses to award credit, and (thanks to James) it is what my system calculates for work saved. As an example, earlier today this factor was found:
Code:
+----------+------------------------+----------+---------+-----------+-----------+-----------+
| Exponent | Factor                 | BitLevel | GHzDays | GHzDaysLL | GHzDaysDC | GHzDaysP1 |
+----------+------------------------+----------+---------+-----------+-----------+-----------+
| 51986681 | 1572812258497661927719 | 70.41383 | 4.59978 | 99.593002 | 99.593002 | 3.6537394 |
+----------+------------------------+----------+---------+-----------+-----------+-----------+
So I guess my fundamental question is: do you (and others) think that dividing the calculated GHzDays by 100 to get "GPUDays" is reasonable, when the work is done by a GPU, so that it can then be compared to work done by CPUs?
#7
Basketry That Evening!
"Bunslow the Bold"
Jun 2011
40<A<43 -89<O<-88
3·29·83 Posts
The dividing by 100 came from the fact that the 'average' GPU gets around 100 GHz-Days of credit per day; hence my suggestion to also divide by the 'average' CPU throughput. The other suggestion was to use GHz-Days/100*3 as the 'normalized credit' for comparing them.
#8
Romulan Interpreter
"name field"
Jun 2011
Thailand
2·17·293 Posts
We could make a poll-like thread. For me the formula fits reasonably well. With the current GPU/CPU load, I can do about 60 mfaktc bit levels per day on the DC front, for a total of a little more than 100 GHz-Days in 24 hours, and I need about 24 hours to DC one exponent of the same size, for which GIMPS gives me 27-29 GHz-Days of credit.
#9
Basketry That Evening!
"Bunslow the Bold"
Jun 2011
40<A<43 -89<O<-88
3·29·83 Posts
Which formula? Though the poll is a good idea.
Last fiddled with by Dubslow on 2011-11-30 at 04:21
#10
Romulan Interpreter
"name field"
Jun 2011
Thailand
2·17·293 Posts
Your formula, like dividing by 100 and multiplying by 2.6, or some values around those. The poll is more intended for some older GPUs, for which I have no idea how the numbers combine. And it cannot really be a poll (that is why I said "poll-like"), because checking boxes is not enough; people must comment and write down numbers. This current thread is OK; in fact, we do not need a new one.
I think that every gpu272 user, or every guy who has a GPU, should post three rows like:
1. I can do xxxxx GHz-Days per day (per 24 hours, xx hours, whatever) of TF using mfaktc/mfakto/others.
2. I can complete one (two, many, how many?) DC-front assignment (that is, 25-29M exponents) in XXX hours on my GPU using CudaLucas/others.
3. I can complete one (two, many, how many?) LL-front assignment (that is, 45-60M exponents) in XXX hours on my GPU using CudaLucas/others.
Optional line:
4. I have GPU xxxx running on yyyy (OS name, 32/64 bits), and CPU xxxx (relevant for mfaktX, which also uses CPU power, therefore cutting into the work the CPU can do: P95, P-1, etc.).
Then we can see how they compare**. It could also be a good "reference" benchmark thread for people trying to set up a new GPU. When I did that, I always wondered: "Are my numbers (ms/iteration) good enough for my hardware? Should I use different settings in the ini files? Can I improve my output if I adjust this and that?" It would be very useful to have a "benchmark" thread. One could know at once that he is doing something wrong if he sees his numbers are half of what other people with comparable hardware get.
edit **: compare (1) against (2), and then (1) against (3). For CudaLucas, (2) and (3) are quite disproportionate. One LL-front test at 50M should theoretically take at most 8 times longer than one DC test, even if you use "school multiplication". That is because you need double the number of iterations, and each iteration has double the size, involving at most 4 times as many (school-grade) multiplications. Under no circumstances should an LL test take more than 8 times as long as a DC test. However, I have heard people saying that a DC test needs 30 hours while an LL test needs 300 hours on some older GPUs. Obviously, that kind of GPU has a bottleneck somewhere when used for bigger numbers.
On the other side, LL tests should take about 5 times longer than DC tests with an optimized FFT. That is not always the case; I have heard people talking about 3-4 times longer for an LL compared to a DC (like 5-6-7 days for an LL, but two days for a DC). Obviously their GPUs have some "waiting/idle" periods for small numbers; they are not kept busy enough, and those users should play with the programs' settings.
Last fiddled with by LaurV on 2011-11-30 at 04:57
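[Editor's note] The scaling argument above can be sanity-checked with a quick back-of-the-envelope calculation. This is my own sketch; the exponent sizes (25M for a DC-front test, 50M for an LL-front test) are illustrative, and the cost models are the usual asymptotic ones (iterations scale with the exponent; each squaring costs n² for schoolbook multiplication, or roughly n·log n with an FFT).

```python
import math

def ll_cost_ratio_schoolbook(n1, n2):
    # Doubling the exponent doubles the iteration count and quadruples the
    # cost of each schoolbook multiplication: 2 * 4 = 8.
    return (n2 / n1) * (n2 / n1) ** 2

def ll_cost_ratio_fft(n1, n2):
    # With FFT multiplication each squaring costs ~n*log(n), so doubling the
    # exponent roughly quadruples the total work (slightly more, from log n).
    return (n2 / n1) * (n2 * math.log(n2)) / (n1 * math.log(n1))

print(ll_cost_ratio_schoolbook(25e6, 50e6))      # 8.0, the worst case above
print(round(ll_cost_ratio_fft(25e6, 50e6), 1))   # 4.2, near the ~5x above
```

Anything far outside the roughly 4x-8x band (like the 10x reported for some older GPUs) points to a hardware or configuration bottleneck rather than the arithmetic itself.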
#11
If I May
"Chris Halsall"
Sep 2002
Barbados
3⁵×43 Posts
Quote:
I have updated the page with the "GPUDays = GHzDays / 100 * 3" formula where appropriate, and added asterisks to indicate which fields are GPUDays. We can tweak the values "100" and "3" once we get more feedback. I have also modified the table headers a bit to be (hopefully) clearer.
(And it's now 01:30 Barbados time. Bed calls... Thank goodness tomorrow is a holiday (Independence Day) here in Bim....)
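[Editor's note] For concreteness, the conversions the thread settles on might be sketched like this. This is a hedged illustration: the constants 100 and 3 are the thread's rough per-day averages, not official PrimeNet values, and the function names are mine.

```python
GHZ_DAYS_PER_GPU_DAY = 100.0  # thread estimate: an average GPU earns ~100 GHz-Days/day
GHZ_DAYS_PER_CPU_DAY = 3.0    # thread estimate: an average CPU earns ~2.6-3 GHz-Days/day

def gpu_days(ghz_days):
    """PrimeNet GHz-Days credit expressed as days of work on an average GPU."""
    return ghz_days / GHZ_DAYS_PER_GPU_DAY

def normalized_credit(ghz_days):
    """The 'GHzDays / 100 * 3' normalization: days an average CPU would need
    to earn the same credit the GPU earned."""
    return ghz_days / GHZ_DAYS_PER_GPU_DAY * GHZ_DAYS_PER_CPU_DAY

# Example: the 99.593002 GHz-Days of LL credit from the factor shown earlier.
print(round(gpu_days(99.593002), 3))           # -> 0.996 (about one GPU-Day)
print(round(normalized_credit(99.593002), 3))  # -> 2.988 (about three CPU-Days)
```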