mersenneforum.org > Great Internet Mersenne Prime Search > PrimeNet > GPU to 72
2012-01-06, 06:51   #12
cheesehead ("Richard B. Woods")

The folks wanting to do stage 2 P-1 on exponents that have had only stage 1 P-1 first need to do a cost/benefit analysis (not just WAG comments).

Suppose an exponent has had P-1 done with B1=B2=500000, and you want to extend this to B1=500000, B2=15000000 (B2 = 30*B1). Figure out how much chance you have of finding a factor (A) with B1=B2=500000 and (B) with B1=500000, B2=15000000. Realize that the benefit of extending the P-1 will NOT be (B), the chance of finding a factor with B1=500000, B2=15000000, but (B)-(A), the marginal change in probability between (A) and (B).

So, when extending the P-1 to B1=500000, B2=15000000, you'll incur the total cost of both stage 1 and stage 2, but get only the marginal benefit of the stage 2 alone, because you know in advance that your stage 1 will not find a factor (else the factor would already have been found by the user who did the stage 1-only P-1, and you wouldn't be doing this extension!).

Do the math. Is that P-1 extension really as efficient in L-L reduction as doing some other type of work would be?
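The marginal argument above can be written down as a tiny calculation. This is only a hedged sketch, not anything from prime95: the function name is mine, and in practice you'd read the probabilities and costs off James Heinrich's mersenne.ca P-1 calculator. The example figures are the ones quoted in the worked example in the next post (50M exponent, TF to 68 bits).

```python
def extension_efficiency(p_stage1, p_extended, cost_total):
    """Factor chance gained per GHz-day when extending a stage 1-only
    P-1 run.  The benefit is only the marginal probability
    p_extended - p_stage1, because the earlier stage 1 is already
    known not to have found a factor, yet the extension re-pays the
    full stage 1 + stage 2 cost."""
    return (p_extended - p_stage1) / cost_total

# B1=B2=700000 gave a 2.925% chance; B1=700000, B2=21000000 gives
# 6.396% at a total cost of 4.797 GHz-days.
print(extension_efficiency(2.925, 6.396, 4.797))  # ~0.724 % per GHz-day
```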

- - -

Maximizing GIMPS progress efficiency is one possible goal, but not the only one. I just want everyone who is not doing the work that maximizes GIMPS progress to be sure they aren't kidding themselves and have made an informed choice. If you don't care about efficiency and want to do P-1 extension just for the fun of it, fine!

2012-01-06, 07:18   #13
cheesehead

Now that I've found my bookmark for the Mersenne-aries calculator, here's an example:

exponent = 50M, TF to 68 bits

B1=B2=700000
chance of factor = 2.925%, cost = 1.859 GHz-day
(chance of factor = 2.925%)/(cost = 1.859 GHz-day) = 1.573% per GHz-day efficiency of factor-finding

B1=700000,B2=21000000 (B2=30*B1)
chance of factor = 6.396%, cost = 4.797 GHz-day

marginal increase in factor chance = 3.471% at cost of 4.797 GHz-day
(marginal increase in factor chance = 3.471%)/(cost of 4.797 GHz-day) = 0.724% per GHz-day efficiency of factor-finding

So, the extension is less than half as efficient in factor-finding as the original stage 1 run had been.

BTW, efficiency if the stage 2 had been done originally =
(chance of factor = 6.396%)/(cost = 4.797 GHz-day) = 1.333% per GHz-day efficiency of factor-finding

The B2 extension after a stage 1-only run is only slightly more than half as efficient in factor-finding as it would have been to do the stage 2 originally.
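For anyone who wants to check the arithmetic, the figures above reduce to three divisions. The numbers are exactly the calculator outputs quoted above; nothing new is assumed:

```python
# Figures quoted above for a 50M exponent, TF to 68 bits.
p1_chance, p1_cost = 2.925, 1.859        # %, GHz-days at B1=B2=700000
full_chance, full_cost = 6.396, 4.797    # %, GHz-days at B1=700000, B2=21000000

stage1_eff    = p1_chance / p1_cost        # ~1.573 % per GHz-day
marginal      = full_chance - p1_chance    # 3.471 %
extension_eff = marginal / full_cost       # ~0.724 % per GHz-day
combined_eff  = full_chance / full_cost    # ~1.333 % per GHz-day

# extension_eff / stage1_eff   ~ 0.46: less than half as efficient.
# extension_eff / combined_eff ~ 0.54: "slightly more than half".
print(round(stage1_eff, 3), round(extension_eff, 3), round(combined_eff, 3))
```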

2012-01-06, 23:18   #14
Dubslow ("Bunslow the Bold")

My posts were not WAGs. I also used data from James' site, just like you. I came to the conclusion that if we get >3% success rate, then it is unquestionably beneficial to GIMPS. I used bcp19's data to guess that 3% is entirely doable, and then I agreed with chalsall that we need to figure out what the success rate would be. (My only caveat is that while >3% is unquestionably beneficial, it's still not as beneficial as 'fresh' P-1.)
2012-01-07, 05:11   #15
cheesehead

Quote:
Originally Posted by Dubslow
My posts were not WAGs. I also used data from James' site, just like you.
Then why didn't you tell us what before-and-after B1/B2 bounds, TF search limit, and exponents (are we to assume the same 48265319, or 48M, to which you referred earlier?) those percentages apply to? Where's the GHz-day cost?

Without those parameters, your percentages look exactly like WAGs.

Now, perhaps the parameters (which are not entirely clear -- if the first P-1 is B1=B2=750K, then what is the second, extended B2?) in your post #9 are what's associated with your 3% figure, but you need to tell us that rather than just slinging the 3% figure around, and you need to be a lot more careful in explaining what your post #9 figures mean. What, exactly, is "proper" P-1?

I see you mention a 1.9 GHz-day cost and a 3.1 GHz-day cost, but I don't see what you use the 3.1 figure for -- is that what the "3.2" is supposed to be?

2012-01-07, 05:25   #16
flashjh ("Jerry")

Quote:
Originally Posted by chalsall
It would literally take me an hour (but not tonight) to implement such a work type based on the code and database tables (and spiders) I already have implemented.

But I question if it makes (overall) sense for GIMPS to do so.

OTOH, as always, I'm just a facilitator. If people want to do this kind of work, I can assist.

And maybe we can try it, and observe the empirical results....
If we're keeping ahead of the P-1s already, then I don't mind running extra P-1 to see if we get a decent average of factors -- I can even test it out on a few hundred to see. Either way, is there any way to get Prime95 to do just stage 2?
2012-01-07, 05:30   #17
Dubslow

Quote:
Originally Posted by flashjh
If we're keeping ahead of the P-1s already, then I don't mind running extra P-1 to see if we get a decent average of factors -- I can even test it out on a few hundred to see. Either way, is there any way to get Prime95 to do just stage 2?
Unfortunately, no. (I asked this question a few months ago, and apparently it requires the entire Stage 1 save file.)

Getting back to you, cheesehead.
2012-01-07, 05:40   #18
Dubslow

Quote:
Originally Posted by cheesehead
Then why didn't you tell us what before-and-after B1/B2 bounds, TF search limit, and exponents (are we to assume the same 48265319, or 48M, to which you referred earlier?) those percentages apply to? Where's the GHz-day cost?

Without those parameters, your percentages look exactly like WAGs.
I can't take B* bounds and TF bounds and turn them into a GHz-day requirement. That's why I used James' site to get how much work *should* be put into each exponent in that range. For the exponent I mentioned, he gives 3.0788 GHz-days as the 'default' amount of P-1 to do; in my experience, assuming TF to 72 bits, Prime95 chooses somewhere around 2.4-2.5 GD of work (I just looked up a bunch of my results to estimate that). It takes around 160 GD to do two tests (slightly more for 48M, actually), so with the higher 'default' work of 3.2 GD (again, being conservative), you can run 160/3.2 = 50 P-1s. To 'break even', so to speak, we need to find one factor in those 50 runs, or a 2% success rate. We could run even more '72-bit' P-1 in that time, which would require a lower break-even success rate, accounting for the fact that each such run has less probability of finding a factor. If we were to do 500 proper P-1s on these 'half-assed' exponents and found 20 factors, that would definitely be breaking even. That's what I meant when I said 'determine the success rate' -- I can't calculate chances based on the bounds already completed, so chalsall and I were just thinking to collect empirical data (e.g. those 500 exponents).
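The break-even arithmetic in that paragraph, written out. All figures are the estimates above (160 GD per exponent for two tests, a conservative 3.2 GD per P-1 run); nothing here comes from prime95 itself:

```python
# Dubslow's break-even estimate for ~48M exponents, as described above.
ll_cost = 160.0   # GHz-days for a first-time test plus double-check
p1_cost = 3.2     # conservative 'default' GHz-days per P-1 run

runs_per_ll = ll_cost / p1_cost   # 50 P-1 runs for the cost of the two tests
break_even  = 1 / runs_per_ll     # one factor per 50 runs = a 2% success rate

print(runs_per_ll, break_even)  # 50.0 0.02
```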

You are right that this doesn't compare how useful it would be against just 'regular' P-1; I do believe regular P-1 is even more beneficial, and that unless we run out of it (which isn't going to happen anytime soon) we shouldn't do these instead. I brought this up because there are people like James and bcp19 who like to be thorough, even if that's not necessarily the best for GIMPS. I would consider mixing some of these in with some regular P-1 work, and it never hurts to have the option available for those who want it.
2012-01-07, 06:20   #19
cheesehead

Quote:
Originally Posted by Dubslow
I can't take B* bounds and TF bounds and turn them into a GHz-day requirement. That's why I used James' site to get how much work *should* be put into each exponent in that range. For the exponent I mentioned, he gives 3.0788 GHz-days as the 'default' amount of P-1 to do
The 3.0788 is not so much the default amount of P-1 to do as it is the cost of doing P-1 to the default B1/B2 bounds.

In prime95, the default B1/B2 bounds are calculated by an algorithm that computes the optimum balance among factor chance, the GHz-day cost of P-1, and the GHz-day cost of L-L. What it looks for, while varying B1 and B2, is the balance

(L-L cost) * (factor chance at a particular B1/B2) = (P-1 cost at that same B1/B2)

The output of this algorithm is a set of B1/B2 bounds. GHz-day cost is used within the algorithm, but is not the output "default amount".

James's site does offer an option for you to specify GHz-days, and then it back-calculates what B1/B2 limits will use that much time, but those are not default values.
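A toy sketch of the kind of balance described above. Everything here is schematic and hypothetical: the candidate list and the chance/cost callables are stand-ins (prime95's real versions involve P-1 probability formulas, FFT timings, and allocated memory), but it illustrates that the output is a bounds pair, with GHz-day cost appearing only inside the search:

```python
def choose_bounds(ll_cost, factor_chance, p1_cost, candidates):
    """Pick the (B1, B2) pair that best balances
       (L-L cost) * (factor chance at B1,B2)  against  (P-1 cost at B1,B2).
    factor_chance and p1_cost are callables taking (b1, b2);
    candidates is a list of (b1, b2) pairs to search over."""
    return min(candidates,
               key=lambda b: abs(ll_cost * factor_chance(*b) - p1_cost(*b)))

# Hypothetical stand-in functions, purely for illustration:
chance = lambda b1, b2: b2 / 1000.0   # pretend chance grows linearly in B2
cost   = lambda b1, b2: 2.0           # pretend every run costs 2 GHz-days
print(choose_bounds(100.0, chance, cost, [(500, 10), (500, 20), (500, 30)]))
# -> (500, 20), where 100 * 0.02 exactly matches the cost of 2.0
```

The returned value is a set of bounds; whatever GHz-day cost that pair happens to have is a by-product, not the quantity being chosen.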

Quote:
in my experience, assuming TFed to 72 bits, then Prime95 chooses somewhere around 2.4-2.5 GD of work
Actually, it chooses a B1/B2 combination. It could be that all or most of your P-1 runs happened to use 2.4-2.5 GHz-days, but someone with a different computer system with a different amount of allocated memory might have his P-1 runs use a different amount of GHz-days because a different set of B1/B2 would be optimal for his system.

Quote:
If we were to do 500 proper P-1s on these 'half-assed' exponents
What are your definitions of "proper" and 'half-assed'?

Quote:
I can't calculate chances based on the bounds already completed,
Yes, you can -- as James's page shows. In fact, if you don't do that calculation for the bounds already completed, you can't tell whether the amount of additional P-1 you propose doing is worthwhile or not.

2012-01-07, 06:27   #20
Dubslow

The reason I talk about 'default work' in GD as opposed to bounds is that James' site specifies a default set of bounds, independent of memory. I therefore assume that, given sufficient memory (and the right TF bit depth), Prime95 will choose those bounds. If that's not true, then posting default bounds seems at least a little bit misleading.

And of course I meant that it chooses bounds that amount to 2.4 GD. I figured that everyone here knows this. (And if not, nothing is really lost in this context.)

'Half-assed' means no Stage 2, or B1=B2. That's what this whole thread is about. Proper means what Prime95 would determine as proper, given at least 500-1000MB, or equivalently, the bounds on James' site (ignoring the TF bound). This implies a decent Stage 2.

The best way to end this is to just do it and see what happens.
2012-01-07, 06:29   #21
cheesehead

Quote:
Originally Posted by flashjh
Either way, is there any way to get Prime95 to do just stage 2?
Stage 2 requires having the value that stage 1 computes. So either you, or someone before you who created a savefile that you can continue from, have to do the stage 1 before the stage 2 can start.
2012-01-07, 06:44   #22
cheesehead

Quote:
Originally Posted by Dubslow
The reason I talk about 'default work' in GD as opposed to bounds is because James' site specifies a default set of bounds, independent of memory.
James's site is a simplification of the actual algorithm prime95 uses.

Calculating the chances of finding a factor with a given set of bounds does not require knowing the allocated memory. But calculating the GHz-day cost of computing with a given set of bounds does require knowing the allocated memory (for stage 2). James's page makes some assumption there.

Quote:
I therefore assume that given sufficient memory (and the right TF bit depth), Prime95 will choose those bounds.
Your assumption depends on what assumption James's page makes about allocated memory.

Quote:
If that's not true, then posting default bounds seems at least a little bit misleading.
Why? How is it misleading?

If you make unwarranted assumptions about the meaning of posted default bounds, that's not the fault of whoever posted the figures.

Quote:
And of course I meant that it chooses bounds that amount to 2.4 GD.
If "it" means James's page, okay. But that's not what the prime95 algorithm does. It chooses B1/B2 such that

(L-L cost) * (factor chance at a particular B1/B2) = (P-1 cost at that same B1/B2)

and the P-1 GHz-day cost is just whatever cost happens to apply to that set of bounds, not the other way around.

Quote:
'Half-assed' means no Stage 2, or B1=B2. That's what this whole thread is about.
That is screwed-up due to your misunderstanding.

Prime95 always chooses the optimum bounds for the particular situation of the particular user who is doing the P-1. Stage 1-only is a proper and optimum way to do P-1 on some systems with small allocated memory. It is not "half-assed". It is what was optimum for the conditions in which it was run. You need to understand that.

Quote:
Proper means what Prime95 would determine as proper, given at least 500-1000MB, or equivalently, the bounds on James' site (ignoring the TF bound). This implies a decent Stage 2.
If that's what you want, okay. But referring to stage 1-only P-1 as "half-assed" demeans the legitimate contributions of users who happen not to be able to allocate large amounts of memory to P-1.

If you want to extend P-1 stage 1-only to high B2 bounds (or higher B1!), that's fine, but you don't need to belittle the contributions of those who did the stage 1-only P-1 on systems that didn't have the generous amount of memory that yours does.
