mersenneforum.org > Great Internet Mersenne Prime Search > PrimeNet > GPU to 72
Old 2011-11-30, 05:42   #12
LaurV
Romulan Interpreter
 
Jun 2011
Thailand

Happy Independence Day! And have a good sleep! We still need you! :P
Old 2011-11-30, 06:08   #13
Dubslow
Basketry That Evening!
 
"Bunslow the Bold"
Jun 2011
40<A<43 -89<O<-88

Quote:
Originally Posted by chalsall View Post
I can be slow sometimes... I understand your point now.

I have updated the page with the "GPUDays = GHzDays / 100 * 3" formula where appropriate, and added asterisks to indicate which fields are GPUDays.
Well... that's not GPU-days, it's a 'normalized GHz-Days'. GPU-days would be just 1/100, without the ×3. I personally think that 1/100 on GPU credit and then /3 on CPU credit would be slightly more useful, because it gives an idea of how much time has been put in. (Of course the only difference is a factor of 3, but for ease of interpretation I think the GPU-day/CPU-day method is clearer.)
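The distinction between the two conversions can be sketched in a few lines. This is only an illustration; the 100 and 3 are the thread's working guesses for average GPU and CPU throughput, and the function names are mine:

```python
# Working assumptions from the thread, not measured constants:
GPU_GHZDAYS_PER_DAY = 100.0  # assumed output of an "average" GPU per day
CPU_GHZDAYS_PER_DAY = 3.0    # assumed output of an "average" CPU per day

def gpu_days(credit):
    """Plain GPU-days: raw GHz-Days credit divided by 100 (no *3 factor)."""
    return credit / GPU_GHZDAYS_PER_DAY

def cpu_days(credit):
    """Plain CPU-days: raw GHz-Days credit divided by 3."""
    return credit / CPU_GHZDAYS_PER_DAY

def normalized_ghz_days(credit):
    """chalsall's 'GPUDays = GHzDays / 100 * 3': GPU credit rescaled
    into CPU-equivalent GHz-Days."""
    return credit / GPU_GHZDAYS_PER_DAY * CPU_GHZDAYS_PER_DAY
```

The two conversions differ only by the factor of 3, but the GPU-day/CPU-day pair reads directly as machine-time.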
Quote:
Originally Posted by chalsall View Post
We can tweak the values for "100" and "3" once we get more feedback.

The easiest way to get a half-decent number is to take the GHz-Days reported in the last, say, x days, and divide that by (number of cores × x). It will be hard to get a good core count, though...

(It seems to me that it needs to be higher than 3, because I'm pretty sure P-1 is more useful than the current report indicates...)

(Also, if the not-GHz-Days entries are marked, then perhaps the header should just say GHz-Days...)
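The core-day estimate described above is just credit over core-days; a minimal sketch (function name and sample figures are illustrative):

```python
def avg_ghzdays_per_core_day(credit_ghz_days, num_cores, window_days):
    """Dubslow's estimate: GHz-Days reported over the last `window_days`
    days, divided by (number of cores * window_days). Only as good as the
    core count, and skewed when results are reported in batches."""
    return credit_ghz_days / (num_cores * window_days)
```

For example, a 4-core machine reporting 366.4 GHz-Days over 17.5 days averages roughly 5.23 GHz-Days per core-day.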

Last fiddled with by Dubslow on 2011-11-30 at 06:16
Old 2011-11-30, 06:32   #14
LaurV
Romulan Interpreter
 
Jun 2011
Thailand

Quote:
Originally Posted by Dubslow View Post
The easiest way to get a half-decent number is to take the GHz-Days reported in the last, say, x days, and divide that by (number of cores × x). It will be hard to get a good core count, though...
I don't know how relevant that would be. People report results in "bunches", "barrels", whatever. Look at xyzzy: when he submits his reports, the whole PrimeNet output (teraflops/day) doubles for a day.
One can report in a single day the assignments he completed over a whole month.

Last fiddled with by LaurV on 2011-11-30 at 06:33
Old 2011-11-30, 06:44   #15
Dubslow
Basketry That Evening!
 
"Bunslow the Bold"
Jun 2011
40<A<43 -89<O<-88

The longer the period, the better the averaging; then again, the less accurate the result, because GIMPS is always gaining and losing machines...
Old 2011-11-30, 07:30   #16
Dubslow
Basketry That Evening!
 
"Bunslow the Bold"
Jun 2011
40<A<43 -89<O<-88

Quote:
Originally Posted by chalsall View Post
(And, it's now 0130 Barbados Time. Bed calls... Thank goodness tomorrow is a holiday (Independence Day) here in Bim....)
Just figured out I have an exam, tomorrow/today, Wednesday...
Old 2011-11-30, 15:00   #17
diamonddave
 
Feb 2004

To be completely honest:

Why should we scale back work produced by GPUs?

1) On a site dedicated to GPUs, why do we feel the need to lower the REAL contribution of workers?

2) Let's compare apples with apples. Please consider having an option (a checkbox, enabled by default) to show the PrimeNet GHz-Days value. Since GPU to 72 is sort of a subset of PrimeNet, we should at least keep the same numbers as PrimeNet by default.

What would be the possible side effects of scaling TF contribution?

1) Less incentive to actually acquire a GPU, since work done with one is no longer valued as highly as work done by a CPU?

2) People with old CPUs just quit doing TF, because even though their PCs aren't suitable for DC or LL anymore, they now have even less incentive to do TF since their contribution is viewed as worthless.

If there's to be any scaling, there's only one metric that should be kept in mind: potential work saved. If doing a bit level saves 1/72 of an LL test, I would expect to get at least 1/72 of the CPU-days required to do the LL test.
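The "potential work saved" metric can be written down directly. The 1/72 comes from the standard heuristic that a factor lies in a given bit level with probability roughly 1/bit-level; this is a sketch of the expectation, not the site's actual credit formula:

```python
def expected_ll_ghzdays_saved(ll_cost_ghzdays, bit_level):
    """Expected LL work saved by trial factoring one bit level:
    probability ~1/bit_level of finding a factor there, times the cost
    of the LL test that a factor would make unnecessary."""
    return ll_cost_ghzdays / bit_level
```

By this yardstick, credit for taking an exponent to 72 bits should be at least 1/72 of the LL cost.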
Old 2011-11-30, 18:21   #18
chalsall
If I May
 
"Chris Halsall"
Sep 2002
Barbados

Quote:
Originally Posted by diamonddave View Post
To be completely honest:

Why should we scale back work produced by GPUs?

1) On a site dedicated to GPUs, why do we feel the need to lower the REAL contribution of workers?
We're not. Note that on the "Workers Progress" page (and your Completed Assignments page) the "non-Normalized" GHzDays value remains (and shall always).

Quote:
Originally Posted by diamonddave View Post
2) Let's compare apples with apples. Please consider having an option (a checkbox, enabled by default) to show the PrimeNet GHz-Days value. Since GPU to 72 is sort of a subset of PrimeNet, we should at least keep the same numbers as PrimeNet by default.
Comparing apples with apples is exactly the issue we're dealing with here. It comes out of the fact that we're wishing to compare the amount of work done by GPUs vs. the amount of work saved for CPUs. And it is to be used only in this one single report.

Without this normalization, someone would look at the Overall Status and see (for example) that the GPUs spent 63,708 GHzDays to save 28,089 GHzDays of work. Doesn't make sense.

Quote:
Originally Posted by diamonddave View Post
What would be the possible side effects of scaling TF contribution?

1) Less incentive to actually acquire a GPU, since work done with one is no longer valued as highly as work done by a CPU?
I would argue the opposite. Someone who didn't understand what was going on could erroneously believe that GPUs are not as effective as they empirically are.

Quote:
Originally Posted by diamonddave View Post
2) People with old CPUs just quit doing TF, because even though their PCs aren't suitable for DC or LL anymore, they now have even less incentive to do TF since their contribution is viewed as worthless.
I think we're already there. And to be perfectly honest, unless the CPUs are being used for some other function (including a space heater), I don't think this is a bad thing.

Quote:
Originally Posted by diamonddave View Post
If there's to be any scaling, there's only one metric that should be kept in mind: potential work saved. If doing a bit level saves 1/72 of an LL test, I would expect to get at least 1/72 of the CPU-days required to do the LL test.
But as I said above, PrimeNet has a long-established GHzDays credit metric which I plan to continue using for consistency.

Again, this "normalization" coefficient (currently 3 / 100) for GPUs is only to be used when comparing to the amount of CPU GHzDays saved.

What I am trying to determine is what is a reasonable value for the coefficient. As in, does the average GPU really produce 100 GHzDays of work a day? And does the average CPU really produce 3 GHzDays a day? My hunch is the 100 should be higher, and the 3 lower.

Last fiddled with by chalsall on 2011-11-30 at 18:21
Old 2011-11-30, 19:07   #19
diamonddave
 
Feb 2004

Quote:
Originally Posted by chalsall View Post
Comparing apples with apples is exactly the issue we're dealing with here. It comes out of the fact that we're wishing to compare the amount of work done by GPUs vs. the amount of work saved for CPUs. And it is to be used only in this one single report.

Without this normalization, someone would look at the Overall Status and see (for example) that the GPUs spent 63,708 GHzDays to save 28,089 GHzDays of work. Doesn't make sense.
Instead of having one set of numbers that everyone understands (PrimeNet GHz-Days), we are introducing a new one. This will be a never-ending quest to adjust the scaling as better GPUs and CPUs come to market in the years to come. Everyone knows what a MIPS, a gram or a meter is. Why are we trying to introduce new units like the Library of Congress or the Volkswagen Bug? Introducing new metrics only confuses people.

As we now see, one can't compare TF GHz-Days to LL GHz-Days. One is highly parallel and scales well on GPU architectures; the other, not quite.

Quote:
Originally Posted by chalsall View Post
But as I said above, PrimeNet has a long established GHzDays credit metric which I plan to continue using for consistency.
Again, I would appeal for a checkbox to show the PrimeNet metric by default.

Quote:
Originally Posted by chalsall View Post
Again, this "normalization" coefficient (currently 3 / 100) for GPUs is only to be used when comparing to the amount of CPU GHzDays saved.

What I am trying to determine is what is a reasonable value for the coefficient. As in, does the average GPU really produce 100 GHzDays of work a day? And does the average CPU really produce 3 GHzDays a day? My hunch is the 100 should be higher, and the 3 lower.
I understand what you are trying to do and I see where your numbers come from, but they are off...

If we are to use a 100 GHz-Days/day GPU as the base, we need to compare it with a similar PC. No one will have a GTX 560 in a Core 2 PC. Otherwise we need to dust off a GTX 220 and see its performance... And then maybe we will just end up dividing by 1!

Let's take my PC as an example.

I have a 4-core system (i5-2600K) that can do roughly four 49M exponents in 17.5 days, for a total credit of 366.4 GHz-Days. So a contribution of 20.9 GHz-Days per day.

Now this doesn't take into account that my GPU could also be contributing to LL.
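The arithmetic in the post checks out (a quick verification using the figures as given):

```python
# Four ~49M LL tests in 17.5 days, 366.4 GHz-Days total credit.
total_credit = 366.4    # GHz-Days credited for the four tests
elapsed_days = 17.5
num_cores = 4

system_rate = total_credit / elapsed_days   # whole-system GHz-Days per day
per_core_rate = system_rate / num_cores     # per-core GHz-Days per day
```

The system rate comes out at ~20.9 GHz-Days/day, i.e. about 5.23 per core-day.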
Old 2011-11-30, 19:53   #20
bcp19
 
Oct 2011

Quote:
Originally Posted by chalsall View Post
What I am trying to determine is what is a reasonable value for the coefficient. As in, does the average GPU really produce 100 GHzDays of work a day? And does the average CPU really produce 3 GHzDays a day? My hunch is the 100 should be higher, and the 3 lower.
The big questions become: a) what is considered an 'average' GPU? b) what is considered an 'average' CPU?

I have 3 GPUs and 5 'CPUs':
AMD Turion 64 X2 @ 1.8 GHz running ~7M P-1's = ~1.38 GHzD/Day
Core 2 Quad Q8200 @ 2.33GHz running 2xLL ~54M = ~5.16 GHzD/Day
Intel Core i7 Q 740 @ 1.73GHz running 4xDC ~25M = ~9.29 GHzD/Day
Intel Core i5-2400 @ 3.10GHz running 2xDC ~25M = ~10.01 GHzD/Day
Intel Core i5-2500K @ 3.30GHz running 2xLL ~45M = ~7.98 GHzD/Day

GTS 450 running DC TF = ~93.04 GHzD/Day
GTX 560 running DC TF = ~176.43 GHzD/Day
GTX 560Ti running DC TF = ~194.81 GHzD/Day

12 cores doing 33.82 GHzD/Day = 2.8183 GHzD/Day avg
10 cores doing 32.44 GHzD/Day = 3.244 GHzD/Day (took out the AMD as it is nearly 'obsolete')
3 GPU doing 464.28 GHzD/Day = 154.76 GHzD/Day avg.
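These averages reproduce directly from the per-machine figures, using the core counts as stated above (a check, nothing more):

```python
# Per-machine GHzD/Day rates as reported in the post.
cpu_rates = [1.38, 5.16, 9.29, 10.01, 7.98]   # Turion, Q8200, i7 Q740, i5-2400, i5-2500K
gpu_rates = [93.04, 176.43, 194.81]           # GTS 450, GTX 560, GTX 560 Ti

cpu_total = sum(cpu_rates)                        # 33.82 GHzD/Day
cpu_avg = cpu_total / 12                          # per core, as stated
cpu_avg_no_amd = (cpu_total - cpu_rates[0]) / 10  # dropping the near-obsolete Turion
gpu_avg = sum(gpu_rates) / len(gpu_rates)         # per GPU
```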

Quote:
Originally Posted by diamonddave
If we are to use a 100 GHz-Days/day GPU as the base, we need to compare it with a similar PC. No one will have a GTX 560 in a Core 2 PC. Otherwise we need to dust off a GTX 220 and see its performance... And then maybe we will just end up dividing by 1!
The 560 Ti was in a Core2 Quad, and the 560 was in a Core2 Duo, before being put into the 2400 and 2500. The Quad ended up with the 450, which for the most part *is* a 100 GHz-Days/day GPU.
Old 2011-11-30, 20:01   #21
diamonddave
 
Feb 2004

Quote:
Originally Posted by bcp19 View Post
Intel Core i5-2400 @ 3.10GHz running 2xDC ~25M = ~10.01 GHzD/Day
Intel Core i5-2500K @ 3.30GHz running 2xLL ~45M = ~7.98 GHzD/Day
Since we are trying to determine a benchmark, I think we should assume all 4 cores would be assigned to LL. So roughly a doubling in performance.

Quote:
Originally Posted by Prime95 View Post
// In Primenet v4 we used a 90 MHz Pentium CPU as the benchmark machine
// for calculating CPU credit. The official unit of measure became the
// P-90 CPU year. In 2007, not many people own a plain Pentium CPU, so we
// adopted a new benchmark machine - a single core of a 2.4 GHz Core 2 Duo.
// Our official unit of measure became the C2GHD (Core 2 GHz Day). That is,
// the amount of work produced by the single core of a hypothetical
// 1 GHz Core 2 Duo machine. A 2.4 GHz should be able to produce 4.8 C2GHD
// per day.
//
// To compare P-90 CPU years to C2GHDs, we need to factor in both the
// the raw speed improvements of modern chips and the architectural
// improvements of modern chips. Examining prime95 version 24.14 benchmarks
// for 640K to 2048K FFTs from a P100, PII-400, P4-2000, and a C2D-2400
// and compensating for speed differences, we get the following architectural
// multipliers:
//
// One core of a C2D = 1.68 P4.
// A P4 = 3.44 PIIs
// A PII = 1.12 Pentium
//
// Thus, a P-90 CPU year = 365 days * 1 C2GHD *
// (90MHz / 1000MHz) / 1.68 / 3.44 / 1.12
// = 5.075 C2GHDs
So the basis for GHz-Days was a Core 2 (Duo) introduced sometime in mid-2006... To expect that a mid-2010 card (GTS 450) would come with that system is a bit of a stretch! :-)
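The conversion in the quoted Prime95 comment can be replayed to confirm the 5.075 figure:

```python
# P-90 CPU year expressed in Core 2 GHz-Days (C2GHD), per the quoted
# Prime95 source comment.
DAYS_PER_YEAR = 365
P90_GHZ = 0.090      # the 90 MHz Pentium benchmark machine, in GHz
C2_PER_P4 = 1.68     # one Core 2 core = 1.68 P4s
P4_PER_PII = 3.44    # one P4 = 3.44 PIIs
PII_PER_P = 1.12     # one PII = 1.12 Pentiums

p90_year_in_c2ghd = (DAYS_PER_YEAR * P90_GHZ
                     / C2_PER_P4 / P4_PER_PII / PII_PER_P)
```

Working through the chain of architectural multipliers gives ~5.075 C2GHDs, matching the comment.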

Last fiddled with by diamonddave on 2011-11-30 at 20:16
Old 2011-11-30, 20:21   #22
bcp19
 
Oct 2011

Quote:
Originally Posted by diamonddave View Post
Now this doesn't take into account that my GPU could also be contributing to LL.
To me, this is quite a waste, in that my 560 can do an LL in ~1/2 the time my 2400 can, so it would just be getting 10-11 GHzD/Day, for a total of 30 GHzD/Day on the machine with '5' LLs running. By that example, the 450 would probably be slower doing LL than the 2400.

Quote:
Originally Posted by diamonddave View Post
Since we are trying to determine a benchmark, I think we should assume all 4 cores would be assigned to LL. So roughly a doubling in performance.
Why use all 4 cores as the benchmark? While it is true you can run 4 cores of LL and CUDALucas at the same time, you can only run 2 cores of LL alongside 2 mfaktc/o instances. You should therefore be comparing the 'loss' of 2 CPU LLs (debatable whether to include the possible CUDALucas here) to the 'gain' of 2 mfaktc/o TFs.

Quote:
Originally Posted by diamonddave View Post
So the basis for Ghz/Day was a Core 2 (Dual) introduced sometime in mid 2006... To expect that a mid 2010 (GTX 450) card would come with that system is a bit of a stretch! :-)
Never said it came with it; YOU said "No one will have a GTX-560 in a Core 2 PC", which I did, making your statement wrong.

Last fiddled with by bcp19 on 2011-11-30 at 20:24