mersenneforum.org  

Old 2009-04-24, 03:58   #188
cheesehead
 
 
"Richard B. Woods"
Aug 2002
Wisconsin USA

7692₁₀ Posts

Quote:
Originally Posted by Kevin View Post
Ultimately, throughput is maximized if every system does the task it's (relatively) the best at compared to other machines.
Not necessarily.

Sometimes the analysis of such maximization neglects important factors, which could render such analysis misleading.

How about this factor: the relative satisfactions various participants derive from performing different types of assignments, which could determine whether some participants drop out or stay in.

If every one of 10,000 systems is doing what it's relatively best at, does that necessarily result in more GIMPS throughput than a situation in which 9,000 systems are doing what they're relatively best at and 1,000 systems are doing something slightly less efficient than what they're best at? Perhaps not, if the inefficiencies for the 1,000 are minor.

Suppose (a) P-1 is a bottleneck that limits the rate of LL-only assignments available, (b) my system is more efficient at TF than at P-1, and (c) some users requesting LL-only assignments will quit GIMPS rather than be satisfied with non-LL assignments. Then when I choose to perform P-1 rather than TF, for which my system is more suited when that's considered only in isolation, I may improve overall GIMPS throughput by reducing the number of LL-only dropouts.

When someone drops out, that cancels out the effect of a whole lot of differences in relative efficiency of P-1 vs. TF.
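To make that concrete, here is a toy throughput model in Python. All the numbers (dropout rate, efficiency loss, machine counts) are invented for illustration; the point is only that a small retention effect can outweigh a small per-machine efficiency loss.

```python
# Toy model (hypothetical numbers): compare assigning every machine its
# locally optimal task versus shifting some machines to P-1 to keep
# LL assignments flowing and retain participants.

def total_throughput(n_machines, per_machine_rate, dropout_fraction):
    """Aggregate work rate after some participants quit."""
    active = n_machines * (1.0 - dropout_fraction)
    return active * per_machine_rate

# Case A: all 10,000 machines run their locally best task (rate 1.0),
# but the P-1 bottleneck drives 5% of LL-only users to quit.
case_a = total_throughput(10_000, 1.00, 0.05)

# Case B: 1,000 machines accept a 3% efficiency loss to do P-1,
# easing the bottleneck so nobody quits.
case_b = (total_throughput(9_000, 1.00, 0.0)
          + total_throughput(1_000, 0.97, 0.0))

assert case_b > case_a  # retention outweighs the small efficiency loss
```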

Quote:
If a modern computer has a ratio of 1:1 between time for a P-1 test and time for a TF test, and my computer has a ratio of 10:1, I'd be greatly under-utilizing my system by doing P-1 testing.
Correct, when considering only the sort of hypothetical factor (b) above.

Quote:
However, since there is a shortage of P-1 testing,
... thus adding a factor of type (a)

Quote:
I'm willing to "under-utilize" my system to a reasonable degree if it puts CPU time where it's needed more.
... thus indirectly adding a factor of type (c), because "needed more" means retaining more participants.

- -

Going back to your first statement,

Quote:
No, they're related.
How? You haven't explained.

As I pointed out, a PIII will be just as efficient at P-1 when lots of other systems are doing P-1 as when no other systems are doing P-1. So, how do you claim that P-1 efficiency is related to shortage of P-1 participants?

If someone else joins the P-1 effort, how does that raise or lower your PIII's efficiency of performing P-1? If someone else drops out of the P-1 effort, how does that raise or lower your PIII's efficiency of performing P-1?

Last fiddled with by cheesehead on 2009-04-24 at 04:06
Old 2009-04-24, 06:25   #189
Kevin
 
 
Aug 2002
Ann Arbor, MI

433 Posts

As I did explain in my last post, a lack of people doing P-1 doesn't change my CPU's actual efficiency; it just changes how much inefficiency I'm willing to tolerate. If everybody were doing P-1 testing, there'd be no reason for me to do P-1 testing if my computer were better suited for something else. If nobody were doing P-1 testing, then I should be doing P-1 no matter how poorly my computer was suited for it. Now realize that there's a non-trivial middle ground (for people with no real preference either way), and I'd appreciate it if the discussion stayed focused on figuring out whether my hardware situation should decisively push me in one direction or the other.

As for everything else you've said, if you really feel like trying to quantify the expected number of people who (under a hypothetical change of assignment rules) would quit GIMPS because the cache of P-1-tested exponents ran out a minute earlier because my PIII wasn't doing P-1, knock yourself out. In the meantime, hopefully I can get some useful information about how efficient my computer is at P-1 versus TF (though at this point, I'm just trying out P-1 myself to figure that out).
Old 2009-04-24, 07:57   #190
garo
 
 
Aug 2002
Termonfeckin, IE

2²×691 Posts

Do a benchmark and compare your times for TF at the bit levels you are doing, and for LL iterations at the FFT size current P-1 assignments use, against any Core 2 benchmarks. You'll get an idea of the relative speeds. You can find tons of Core 2 benchmarks in the benchmark thread and on the benchmark page.
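The comparison garo describes can be reduced to one ratio of ratios. A sketch in Python, with made-up timings standing in for real numbers from Prime95's benchmark output:

```python
# Compare a machine's relative P-1 (FFT-based) vs TF speed against a
# reference machine. All timings here are hypothetical placeholders;
# substitute real per-iteration times from the benchmark thread.

def relative_suitability(my_fft_ms, my_tf_ms, ref_fft_ms, ref_tf_ms):
    """My machine's FFT:TF time ratio divided by the reference's.
    > 1 means my machine is comparatively better at TF; < 1, at P-1/LL."""
    return (my_fft_ms / my_tf_ms) / (ref_fft_ms / ref_tf_ms)

# Hypothetical PIII timings vs a hypothetical Core 2 reference:
ratio = relative_suitability(my_fft_ms=120.0, my_tf_ms=10.0,
                             ref_fft_ms=30.0, ref_tf_ms=6.0)
print(f"relative FFT:TF ratio = {ratio:.2f}")  # > 1 suggests leaning toward TF
```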
Old 2009-04-24, 15:57   #191
petrw1
1976 Toyota Corona years forever!
 
 
"Wayne"
Nov 2006
Saskatchewan, Canada

2²·3·17·23 Posts

All I've ever done to determine where MY PC performs best is to have the PC perform an assignment or two of each type I am interested in and compute the Points-Per-Day for each. Unless there is a BIG discrepancy I let my conscience guide me between my personal goals and the needs of the project.
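That "try each type and compute Points-Per-Day" approach amounts to a few lines of arithmetic. A sketch, with invented point values and timings (not GIMPS's actual scoring):

```python
# Rank work types by points-per-day measured from a trial assignment
# or two of each. All point values and durations are hypothetical.

def points_per_day(points_awarded, days_taken):
    return points_awarded / days_taken

trials = {
    "TF (<64 bits)": points_per_day(points_awarded=2.0, days_taken=0.5),
    "P-1":           points_per_day(points_awarded=12.0, days_taken=4.0),
    "DC":            points_per_day(points_awarded=30.0, days_taken=12.0),
}

best = max(trials, key=trials.get)
print(best, trials[best])  # the rest is conscience vs. project need
```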

Through this, I have learned that a PIII and a Duron (mine, anyway) are twice as efficient doing TF up to 64 bits (LMH) as they are over 64 bits ... something to do with the 32-bit architecture.

I find that P-1 needs a 'good' amount of 'good' memory to be efficient. Define 'good'? The standard IT answer: it depends ... but I would suggest at least 500MB, with most of it available to GIMPS. And if your machine is new enough to hold 500MB, it is probably more-or-less fast enough too.

For LL and DC I find all my hardware performs pretty much as expected. That said, I keep large exponents away from small hardware simply because neither I nor the project wants to wait MANY months for a result. Even if the day comes when the biggest project need is 100M-digit LL tests, I won't give one to my 2.4 GHz PIV because I may not live to see it finish the test.

Quote:
The bottom line is still as Cheesehead suggests: Where does the project as a whole need the most help? Right now it is P-1.
Then I apply a little common sense. For example, my Duron does not even have enough RAM to do P-1 so I relegate it to TF ... and since it is so relegated I choose to have it do TF-LMH since it gets me twice the points.

My Quad certainly can do P-1 well but going beyond a couple cores doing P-1 really taxes the memory, especially in Stage 2.

Quote:
But maybe the REAL bottom, bottom line is: It's your hardware and your choice.
Old 2009-04-24, 20:24   #192
Kevin
 
 
Aug 2002
Ann Arbor, MI

433 Posts

I'm essentially doing the "try it out and see for yourself" method, the only problem is the apparent unreliability in estimating how long a P-1 test takes. Right now the estimated time to completion is May 15th, but I know that most likely it won't finish until June. I'm probably just going to wait to see how this test turns out, and either keep doing P-1 or switch to TF-LMH.
Old 2009-04-24, 22:38   #193
petrw1
1976 Toyota Corona years forever!
 
 
"Wayne"
Nov 2006
Saskatchewan, Canada

2²×3×17×23 Posts

Quote:
Originally Posted by Kevin View Post
I'm essentially doing the "try it out and see for yourself" method, the only problem is the apparent unreliability in estimating how long a P-1 test takes. Right now the estimated time to completion is May 15th, but I know that most likely it won't finish until June. I'm probably just going to wait to see how this test turns out, and either keep doing P-1 or switch to TF-LMH.
I find the Client and Server both underestimate P-1 for me by at least 20-50%, so I can't really rely on them, though they do get better the more P-1s you do. This doesn't help you if you want to estimate accurately for just one.

There seems to be a general consensus that Stage 2 takes about twice as long as Stage 1 for P-1. So a rough estimate of the total elapsed time is to triple your Stage 1 estimate. If Stage 1 takes 1 day to reach 10%, it will take 10 days to complete; Stage 2 will then take about 20 days, for a total of 30.
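That rule of thumb, written out as a couple of lines of Python (a sketch; the 2:1 stage ratio is just this thread's consensus figure, not anything guaranteed by the software):

```python
# Rough P-1 wall-clock estimate from early Stage 1 progress, using the
# thread's rule of thumb that Stage 2 takes about twice as long as Stage 1.

def estimate_total_days(days_elapsed, stage1_fraction_done):
    """Project Stage 1 length from progress so far, then triple it."""
    stage1_days = days_elapsed / stage1_fraction_done
    stage2_days = 2.0 * stage1_days   # rule of thumb from the thread
    return stage1_days + stage2_days

# The example from the post: 1 day for 10% of Stage 1.
print(estimate_total_days(1.0, 0.10))  # 30.0
```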
Old 2009-04-24, 22:46   #194
garo
 
 
Aug 2002
Termonfeckin, IE

ACC₁₆ Posts

Quote:
Originally Posted by Kevin View Post
I'm essentially doing the "try it out and see for yourself" method, the only problem is the apparent unreliability in estimating how long a P-1 test takes. Right now the estimated time to completion is May 15th, but I know that most likely it won't finish until June. I'm probably just going to wait to see how this test turns out, and either keep doing P-1 or switch to TF-LMH.
You don't need to do that. Just check the ratios of the benchmarks I mentioned.
Old 2009-04-24, 23:17   #195
cheesehead
 
 
"Richard B. Woods"
Aug 2002
Wisconsin USA

7692₁₀ Posts

Quote:
Originally Posted by Kevin View Post
the only problem is the apparent unreliability in estimating how long a P-1 test takes.
On my Pentium D 2.66 GHz, prime95 typically estimates a stage 1 & 2 P-1 in the 50xxxxxx range at a bit over 4 days, but it actually takes over 7 days. This underestimation is consistent, so I just routinely double the estimate, which includes a bit of fudge for interference by pesky higher-priority tasks (and my current resident infection, which may be Conficker despite Conficker scanners' failure to see it), and am rarely surprised.
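The "routinely double" correction generalizes to tracking the actual-to-estimated ratio over past assignments and applying its mean to the next estimate. A sketch, with hypothetical numbers:

```python
# Calibrate the client's P-1 time estimates against actual run times.
# History pairs below are invented; record your own as tests complete.

def calibration_factor(history):
    """history: list of (estimated_days, actual_days) pairs.
    Returns the mean actual/estimated ratio."""
    ratios = [actual / est for est, actual in history]
    return sum(ratios) / len(ratios)

past = [(4.2, 7.5), (4.0, 7.2), (4.5, 8.1)]   # hypothetical records
factor = calibration_factor(past)              # close to cheesehead's 2x fudge? ~1.8 here
next_estimate = 4.3 * factor                   # corrected projection in days
print(f"corrected estimate: {next_estimate:.1f} days")
```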

Last fiddled with by cheesehead on 2009-04-24 at 23:23
Old 2009-04-26, 04:55   #196
petrw1
1976 Toyota Corona years forever!
 
 
"Wayne"
Nov 2006
Saskatchewan, Canada

2²·3·17·23 Posts

Quote:
Originally Posted by cheesehead View Post
This underestimation is consistent, so I just routinely double, which includes a bit of fudge for interference ...
More than one person is experiencing this significant underestimation of P-1 run times

AND

More than one person mentioned a while back that doing a Test / Status ...
OR sending new Completion dates when there are more than 10 or so P-1 assignments takes a LONG time.

Any chance something can be done to address these?
Old 2009-04-26, 06:55   #197
cheesehead
 
 
"Richard B. Woods"
Aug 2002
Wisconsin USA

1111000001100₂ Posts

Quote:
Originally Posted by petrw1 View Post
More than one person mentioned a while back that doing a Test / Status ...

< snip >

takes a LONG time.
Several seconds longer than I recall from earlier, and I've easily noticed the difference, but do you really think this one rises to the level of needing a fix?

Quote:
Any chance something can be done to address these?
Sure -- just

download the source code,

study it,

write up a fix,

test the fix,

contemplate that your fix will be going out to tens of thousands of systems all over the world eventually,

balance that with the thought that GIMPS software isn't all that vital to survival of the human species,

have someone else review your work and independently test it,

and send it to George after it's tested okay.

I'm sure he'd thank you!

Last fiddled with by cheesehead on 2009-04-26 at 07:00
Old 2009-04-26, 08:59   #198
Mr. P-1
 
 
Jun 2003

7×167 Posts

Quote:
Originally Posted by petrw1 View Post
More than one person mentioned a while back that doing a Test / Status ...
OR sending new Completion dates when there are more than 10 or so P-1 assignments takes a LONG time.

Any chance something can be done to address these?
I suspect the reason for this is that it takes quite a long time to calculate optimal P-1 bounds.
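The shape of that calculation, roughly, is a search over candidate (B1, B2) pairs for the pair maximizing expected time saved. A deliberately simplified sketch: `prob_factor` here is a crude placeholder (the real estimate involves smooth-number probabilities, not this log formula), and all costs are in arbitrary units:

```python
# Simplified sketch of optimal P-1 bound selection: grid-search (B1, B2)
# for maximum expected saving. prob_factor() is a stand-in for the real
# probability model; costs are arbitrary units, not actual timings.
import math

def prob_factor(b1, b2):
    # Placeholder: rises slowly with the bounds (NOT the real model).
    return 0.01 * math.log(b1) + 0.005 * math.log(b2)

def pm1_cost(b1, b2):
    return b1 * 1.0 + (b2 - b1) * 0.25   # stage 2 cheaper per unit

def expected_saving(b1, b2, ll_cost):
    # Chance of finding a factor times the LL work avoided, minus P-1 cost.
    return prob_factor(b1, b2) * ll_cost - pm1_cost(b1, b2)

def best_bounds(ll_cost, b1_grid, b2_mults):
    # Evaluating many candidate pairs per exponent is plausibly what
    # makes recomputing dates for a queue of P-1 assignments slow.
    candidates = ((b1, b1 * m) for b1 in b1_grid for m in b2_mults)
    return max(candidates, key=lambda bb: expected_saving(*bb, ll_cost))

b1, b2 = best_bounds(ll_cost=5_000_000,
                     b1_grid=range(10_000, 200_000, 10_000),
                     b2_mults=(10, 20, 30, 40))
print(f"B1={b1}, B2={b2}")
```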





Permission is granted to copy, distribute and/or modify this document under the terms of the GNU Free Documentation License, Version 1.2 or any later version published by the Free Software Foundation.
A copy of the license is included in the FAQ.