mersenneforum.org > Great Internet Mersenne Prime Search > PrimeNet > GPU to 72
Old 2011-11-30, 20:35   #23
diamonddave
 
 
Feb 2004

25×5 Posts

Quote:
Originally Posted by bcp19 View Post
Why use all 4 cores as the benchmark? While it is true you can run 4 cores LL and CUDALucas at the same time, you can only run 2 cores LL and 2 mfaktc/o at the same time. You should therefore be comparing the 'loss' of 2 CPU LL's (debatable as to whether to include the possible CUDALucas here) to the 'gain' of 2 mfaktc/o TF's.
And if you have 2 cards in your system, then can we safely say that an i5-2500K dedicated to doing LL tests would produce 0 GHz-days/day?

They are trying to figure out what an average system would produce if it was doing LL tests, and then compare said base system with a base GPU.

Quote:
Originally Posted by bcp19 View Post
Never said it came with it, YOU said "No one will have a GTX-560 in a Core 2 PC" which I did, making your statement wrong.
My statement (more a figure of speech) was simple... Today's average system comes with an average GPU. Let's not compare a 5-year-old CPU with a 9-month-old GPU, since on average people don't put a flaming-hot GPU in a crappy machine.

Last fiddled with by diamonddave on 2011-11-30 at 20:35
Old 2011-11-30, 21:05   #24
ET_
Banned
 
 
"Luigi"
Aug 2002
Team Italia

12CF16 Posts

I'm feeling somehow out of place here...

I put an "old" GTX 275 (less than one year of age) on my i5-750, and I'm using it to speed up both TF and LL-D work while three other cores run mprime and the fourth runs gmp-ecm.

I'm in no hurry to reload my system before it finishes its manual assignments, as there will always be some task running.

I don't bother choosing the best possible mix of assignments to squeeze out the last picobit.

Heck, I don't even overclock, though it would be easy!

And I don't care p90 years, PII-400 years, GHz/day or whatever.

I'm here for fun, and to give a nanohelp to this project. All this disquisition about speed comparison seems to me like adolescent rants on "who has it bigger".

And now that I said it, I feel a bit like Davieddy and Cheesehead.

Luigi
Old 2011-11-30, 21:11   #25
diamonddave
 
 
Feb 2004

25·5 Posts

Quote:
Originally Posted by ET_ View Post
And I don't care p90 years, PII-400 years, GHz/day or whatever.
Let's go old school and bring back p90 years.
Old 2011-11-30, 23:19   #26
bcp19
 
 
Oct 2011

2A716 Posts

Quote:
Originally Posted by diamonddave View Post
And if you have 2 cards in your system, then can we safely say that an i5-2500K dedicated to doing LL tests would produce 0 GHz-days/day?
Obviously, if you are going to compare 4 cores to 4 cores, you would want to know what all 4 could do (part of why I said lose 2 LL to gain 2 TF).

If you have a GT 520 and it is maxed out by 1 core, I would think comparing it to a 4 core system would be a bit unfair.

Quote:
They are trying to figure out what an average system would produce if it was doing LL tests, and then compare said base system with a base GPU.

My statement (more a figure of speech) was simple... Today's average system comes with an average GPU. Let's not compare a 5-year-old CPU with a 9-month-old GPU, since on average people don't put a flaming-hot GPU in a crappy machine.
The problem, though, is that today's "average" systems do not come with "average" GPUs. I bought the 2400 mainly because of price vs. performance. On the website under CPU benchmarks, the 2400 had some of the best numbers, so after checking prices I decided to get one. Now, I personally do not view the 2400 as an "average" system, but it came with a GT 520 in it, which is probably very close to an "average" GPU. I upgraded it to the 560 Ti, which I consider above average (a GTX 580/590 to me would fall under flaming hot).

So, let's just say the 2400 is "average": would you consider an i7 920 to be high end? How about an i7 950? Or an i7 990X? Maybe even your i7 2600K? If you look at http://mersenne-aries.sili.net/throu...2288&mhz4=3500 you will see from the benchmarking that, with a ~41,280,000 exponent, the 2400 would complete 5.54 tests/core/day, the 920 - 4.24, the 950 - 4.26, and the 990X - 5.27. The 2600 gets 6.15 (which you can look up there). Since the 990X and 2600K are the only ones that can outperform the 2400 in combined throughput (the 990X owing to its 6 cores), the $140/$800 savings makes the 2400 the best choice (almost doubling the price for a mere 11% gain torpedoes the 2600K). For me, that makes the 2400 an above-average to high-end CPU, and the GT 520 that came with it below average.
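The throughput-per-dollar argument above can be sketched numerically. This is a minimal Python sketch using the tests/core/day figures quoted in this post; the dollar prices are illustrative assumptions (rough late-2011 street prices), not figures from the thread:

```python
# Combined LL throughput and throughput per dollar for the CPUs named
# above. Per-core tests/day figures come from the post; prices are
# assumed for illustration only.
cpus = {
    #            cores, tests/core/day, assumed price ($)
    "i5-2400":  (4, 5.54, 190),
    "i7-920":   (4, 4.24, 280),
    "i7-950":   (4, 4.26, 300),
    "i7-990X":  (6, 5.27, 990),
    "i7-2600K": (4, 6.15, 330),
}

for name, (cores, per_core, price) in cpus.items():
    total = cores * per_core                    # combined tests/day
    per_kdollar = total / price * 1000          # tests/day per $1000
    print(f"{name:9s} {total:6.2f} tests/day  {per_kdollar:7.2f} per k$")

base = 4 * 5.54                 # i5-2400 combined throughput
gain = 4 * 6.15 / base - 1      # 2600K's relative gain over the 2400
print(f"2600K gain over 2400: {gain:.0%}")   # ~11%, as stated above
```

This reproduces the post's conclusion: only the 990X (by core count) and the 2600K beat the 2400's combined throughput, and the 2600K's gain is about 11%.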
Old 2011-11-30, 23:36   #27
diamonddave
 
 
Feb 2004

25×5 Posts

Quote:
Originally Posted by bcp19 View Post
Obviously, if you are going to compare 4 cores to 4 cores, you would want to know what all 4 could do (part of why I said lose 2 LL to gain 2 TF). <snip>
Semantics. Let's just say an average system is the best price/performance for both CPU and GPU (excluding anything that costs more than a small car). Then I would agree the i5-2500K paired with the current GTX 560 Ti would be pretty close to average.
Old 2011-11-30, 23:38   #28
KyleAskine
 
 
Oct 2011
Maryland

2×5×29 Posts

Quote:
Originally Posted by bcp19 View Post
So, let's just say the 2400 is "average": would you consider an i7 920 to be high end? <snip>
Just to play devil's advocate: the value in a 2500K or 2600K over a 2400 is the unlocked multiplier. They are all relatively the same at stock; bump each core up a GHz, though, and they will perform better.
Old 2011-12-01, 00:48   #29
Uncwilly
6809 > 6502
 
 
"""""""""""""""""""
Aug 2003
101×103 Posts

23·1,223 Posts

Quote:
Originally Posted by ET_ View Post
Heck, I don't even overclock, though it would be easy!

And I don't care p90 years, PII-400 years, GHz/day or whatever.

I'm here for fun, and to give a nanohelp to this project. All this disquisition about speed comparison seems to me like adolescent rants on....
Hear, hear!
Old 2011-12-01, 03:02   #30
Dubslow
Basketry That Evening!
 
 
"Bunslow the Bold"
Jun 2011
40<A<43 -89<O<-88

3×29×83 Posts

It occurs to me that if we got this right then we could compare the relative value of TF vs. P-1 as far as GIMPS as a whole goes.
Old 2011-12-01, 03:03   #31
S34960zz
 
Feb 2011

22×13 Posts

Quote:
Originally Posted by Dubslow View Post
... the fact that the 'average' GPU gets around 100 GHz-days credit per day; ...
As noted by others since the quoted post, GPUs vary significantly in throughput capability. A Quadro FX 2800M (CUDA compute capability 1.1) fed by an i7-840QM at 1.86 GHz yields approx. 25-27 GHz-days per 24 hours.

I'd suggest sticking with GHz-day as the units for comparison.
Old 2011-12-01, 03:36   #32
Christenson
 
 
Dec 2010
Monticello

5·359 Posts

Let me weigh in a bit here.....
1) GHz-Days is a relatively good measure of CPU effort.
2) GPUs aren't directly comparable to CPUs....for lots of reasons
a) 10-100x less effort on TF, at the expense of tying up a CPU core to feed the GPU
b) Reasonably fast LL tests... I have an exponent assigned tonight at 28.8M, running in an instance of CUDALucas on a GTX 480, and I expect it will be done in 50 hours, though I doubt I will get back to the machine until a day or two after that. The card is also running an instance of mfaktc.
c) no possibility of doing P-1 just yet...no code! (And Chalsall has already given us GPUto72 for Xmas, so that means Santa was being *really* nice!)

So a GHz-day alone has serious problems as a measure. We could, I suppose, measure the effect on the GIMPS project, that is, do everything in terms of, say, 25M-range LL tests saved... but when there aren't any 75-bit exponents left to TF (Chalsall to thank for pointing us that way!), those factors are going to get rather expensive to find with TF, as measured in wall-clock time.

I'd say the best way to keep things straight is to remember that TF GHz-days and LL GHz-days simply aren't interchangeable, any more than GPUs and CPUs are interchangeable.
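The "LL tests saved" yardstick mentioned above can be sketched as a back-of-the-envelope calculation. This is a heuristic sketch, not project data: it assumes the standard approximation that a Mersenne number has a factor between 2^b and 2^(b+1) with probability roughly 1/b, and that each factor found saves about two LL tests (the first test plus its double-check).

```python
def expected_tests_saved(bit_from: int, bit_to: int) -> float:
    """Expected LL tests saved by trial factoring one exponent from
    2^bit_from up to 2^bit_to.  Heuristic: the chance of a factor in
    bit level b is ~1/b, and each factor found saves ~2 LL tests."""
    return sum(2.0 / b for b in range(bit_from, bit_to))

# Taking one exponent from 71 to 72 bits saves ~0.028 LL tests on
# average, so each extra bit level is only worthwhile while it stays
# cheap relative to an LL test in wall-clock terms.
print(expected_tests_saved(71, 72))
```

This makes Christenson's point concrete: the expected payoff per bit level shrinks as the bit depth rises, while the TF cost per level doubles, so TF GHz-days and LL GHz-days buy the project different things.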
Old 2011-12-01, 04:05   #33
kladner
 
 
"Kieren"
Jul 2011
In My Own Galaxy!

27AE16 Posts

Quote:
Originally Posted by Uncwilly View Post
Hear, hear! <snip>
I second the sentiment, and its antecedent: "I'm here for fun, and to give a nanohelp to this project." (The beer-drinking smilies are cute, but one instance on a page is enough. Hence the <snip>;)

Last fiddled with by kladner on 2011-12-01 at 04:06 Reason: replaced a period with a semicolon