mersenneforum.org > Great Internet Mersenne Prime Search > PrimeNet
2009-07-15, 00:17   #232
cheesehead ("Richard B. Woods")

Quote:
Originally Posted by petrw1
In order for the project to declare an exponent "adequately" P-1'd, is there a minimum required B1 or B2 limit?
There's no standard for "adequate" P-1, and no declaration that any exponent has been "adequately" P-1'd. There's only a server record of whether "P-1 has been done" is true or false.

The server has usually considered any P-1 enough to satisfy the "Has it been P-1ed?" question.

At least once, there was a cleaning-up of cases with ultra-low B1, such as [30,something], so that they were re-designated as not having had P-1 done.

Quote:
For example on the low end:
Exponent 59123023 B1/B2 = 2048 / 16384
Some folks have zipped through groups of exponents, doing P-1 to explicitly specified low limits such as those. My guess is that they never calculated whether the probability of finding a factor at such low limits justified the effort.

Also, they probably didn't realize that reporting unsuccessful runs with such low limits would, given the server's behavior, prevent someone else from being assigned to do P-1 to optimum limits calculated by prime95 or mprime, thus reducing GIMPS throughput.

Quote:
On the other extreme I see:
Exponent 33500153 B1/B2 = 11025000 / 882000000
Some folks are more curious than others, apparently. Those limits weren't choices by the prime95 optimizing algorithm.

Quote:
My assignments with 600-1200 MB RAM, with exponents in the 50M range, have B1/B2 of about 605000 / 16788750.
I presume those are limits calculated by the prime95 optimizing algorithm, rather than being limits you explicitly specified (as those above were).

Quote:
At this extreme my B1/B2 are 147 and 1024 times the small end, while the biggest are 18 and 52 times bigger than mine.
Those ratios are meaningless because the extremes were surely all explicitly specified, while yours came from the optimizing algorithm.

2009-07-15, 11:03   #233
garo

As Kevin said in his deleted post (yes, the chosen few can read deleted posts): "I wouldn't worry about other people's B1 and B2 limits and trust Prime95 to do its thing as long as you have at least 300MB available for P-1."

I once wrote a long monograph on P-1 for Seventeen or Bust but the basic principles are applicable in GIMPS too.
http://www.sslug.dk/~grove/sbfactor/choosing_bounds.html

Prime95 has a sophisticated algorithm to compute the P-1 bounds that looks at the exponent size, the number of tests saved if a factor is found, and the available memory. The last factor has the least influence as long as it is above a minimum.

Note also that once P-1 has been done to "lower than optimal" limits, it is typically not worth redoing it to optimal limits as the additional chance of finding a factor is not enough to justify redoing the P-1. However, absurdly small limits such as B1=2000,B2=20000 do make a retest justifiable.

If you are interested in redoing P-1 for exponents where you think the bounds were insufficient, I would look at the average bounds of surrounding exponents and then pick any exponent whose bounds were, say, 1/10 of that average. You can also use the excellent calculator at http://mersenne-aries.sili.net/prob.php to find the probability of finding a factor.
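As a rough sketch of that selection rule (my own illustration; `underdone` and the sample data are made up, not anything from PrimeNet), you could flag exponents whose B1 sits far below the average of their neighbours:

```python
# Hypothetical sketch of the "1/10 of surrounding average" rule.

def underdone(entries, window=10, ratio=0.1):
    """entries: list of (exponent, B1) pairs sorted by exponent.
    Return exponents whose B1 is below `ratio` times the average B1
    of up to `window` nearby entries."""
    flagged = []
    for i, (exponent, b1) in enumerate(entries):
        lo = max(0, i - window // 2)
        # neighbours in the window, excluding the entry itself
        neighbours = [b for j, (_, b) in enumerate(entries[lo:lo + window], lo) if j != i]
        if neighbours and b1 < ratio * sum(neighbours) / len(neighbours):
            flagged.append(exponent)
    return flagged

sample = [(50000017, 500000), (50000021, 480000), (50000047, 2048),
          (50000059, 510000), (50000063, 495000)]
print(underdone(sample))  # the 2048 entry stands out
```

The same comparison could then be fed through the probability calculator above to decide whether a redo is actually worthwhile.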

2009-07-15, 15:15   #234
Primeinator ("Kyle")

Quote:
Originally Posted by garo
in this thread: http://www.mersenneforum.org/showthread.php?t=12156

Just a couple of additional comments.

The really low-hanging fruit was picked by our forum admin several years ago.

GIMPS is desperately short of people to do P-1 at the leading edge of LL. A majority of exponents being LL tested are not getting project-optimal P-1 testing. Would you consider doing this more project-critical work instead? I know the P-IV is slow, but it should get a P-1 test out in 10 days.
So most people are electing to skip P-1 at the currently active testing range?

2009-07-15, 15:50   #235
Brian-E ("Brian")

Quote:
Originally Posted by garo
GIMPS is desperately short of people to do P-1 at the leading edge of LL.
Any obvious reason, then, why my machine is never given P-1 tasks when I have the preferences set to "do what makes sense"? I get TF and LL double-checking only (and used to get LL first-time tests too, until around the time the v5 server went live 8-9 months ago). It is a single-core AMD Athlon 64 3800+ machine running mprime on Linux.
I have limited mprime's memory use to 250M because the machine is only switched on when I am using it and I want the bulk of my 1 gigabyte for my own use. But if I increased the memory allowance for mprime slightly, would the server give me the much-needed P-1 work? Or is my machine unsuitable for it anyway?

2009-07-15, 16:08   #236
Mini-Geek ("Tim Sorbera")

Quote:
Originally Posted by Brian-E
Any obvious reason, then, why my machine is never given P-1 tasks when I have the preferences set to "do what makes sense"? I get TF and LL double-checking only (and used to get LL first-time tests too, until around the time the v5 server went live 8-9 months ago). It is a single-core AMD Athlon 64 3800+ machine running mprime on Linux.
I have limited mprime's memory use to 250M because the machine is only switched on when I am using it and I want the bulk of my 1 gigabyte for my own use. But if I increased the memory allowance for mprime slightly, would the server give me the much-needed P-1 work? Or is my machine unsuitable for it anyway?
I don't think P-1 is ever selected by default when everything is set to "do what makes sense". (Also, when you're only given TF and DC assignments, of course no P-1 will be done.) You can choose to be assigned P-1 from mprime's menu (assuming it has the same options as Prime95 25.11), at http://www.mersenne.org/cpus/ (for just that CPU), or at http://www.mersenne.org/worktype/ (for your whole account). Your machine might take some time to do it, but I think it's still more than suitable, especially since it can use a good amount of memory.
From Prime95's readme:
Code:
4)  Factor in the information below about minimum, reasonable, and
desirable memory amounts for some sample exponents.  If you choose a
value below the minimum, that is OK.  The program will simply skip
stage 2 of P-1 factoring.

    Exponent    Minimum        Reasonable    Desirable
    --------    -------        ----------    ---------
    20000000     40MB           80MB         120MB
    33000000     65MB          125MB         185MB
    50000000     85MB          170MB         250MB
So your amount of memory is on the highest tier for even up to 50M exponents.

2009-07-15, 16:25   #237
Brian-E ("Brian")

Yes, I originally set it to 250M (two years ago) on the basis of that file. But I read in this very recent post from garo that 300M may be required.
Also, if P-1 is never assigned when you select "do what makes sense", I must ask why. I deliberately chose that preference because I want to do whatever the project most needs. If I should really change it to a preference for P-1 then I will, but I find that strange.
Thanks for your reply.
2009-07-15, 16:56   #238
Mini-Geek ("Tim Sorbera")

That seems quite odd to me, too. Maybe I'm mistaken; I'll look it up at PrimeNet.
http://www.mersenne.org/thresholds/
Quote:
Thresholds for P-1 factoring assignments
Required memory for P-1 assignments 300 MB
I'm not 100% sure how to interpret this, but evidently the PrimeNet server wants anything being assigned P-1 to have at least 300 MB available. I'm not sure whether, for your machine, it will automatically assign P-1 if you allow 300 MB. In any case, you can choose P-1 specifically, with either 250 MB or 300 MB; I don't know if the extra 50 MB is worth the extra chance to you. Here are a few stats ("page" refers to stats from http://mersenne-aries.sili.net/prob.php, linked in the post you linked, using the B1 and B2 values chosen by Prime95):
Code:
8MB:
Prime95: 3.57%
page:
M50766601, factored to 68 bits, with B1=895,000 and B2=895,000
Probability = 4.11268%
Should take about 2.75 GHz-days

250MB:
Prime95: 5.98%
page:
M50766601, factored to 68 bits, with B1=575,000 and B2=9,343,750
Probability = 6.13057%
Should take about 3.24 GHz-days

300MB:
Prime95: 6.19%
page:
M50766601, factored to 68 bits, with B1=590,000 and B2=11,062,500
Probability = 6.36555%
Should take about 3.57 GHz-days
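One way to read those numbers is probability gained per GHz-day spent. A quick back-of-the-envelope sketch using the page's figures from the table above:

```python
# (available MB) -> (probability of a factor in %, estimated cost in GHz-days),
# copied from the mersenne-aries.sili.net figures quoted above.
settings = {8: (4.11268, 2.75), 250: (6.13057, 3.24), 300: (6.36555, 3.57)}

for mb, (prob, cost) in sorted(settings.items()):
    print(f"{mb:>3} MB: {prob / cost:.2f}% chance per GHz-day")

# Marginal value of the last 50 MB: extra probability over extra cost.
d_prob = settings[300][0] - settings[250][0]
d_cost = settings[300][1] - settings[250][1]
print(f"250 -> 300 MB marginal: {d_prob / d_cost:.2f}% per GHz-day")
```

On these figures the final 50 MB buys probability at a noticeably worse rate than the first 250 MB, which is exactly the "is it worth it to you" question.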

2009-07-15, 17:01   #239
Prime95

IMO, if you enjoy finding factors, do P-1 on small exponents. Choose B2 as roughly 20*B1, so that roughly equal time is spent in stage 1 and stage 2.
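For anyone curious what the two stages actually do: stage 1 computes 3^E mod N for E the product of all prime powers up to B1 and takes a gcd; stage 2 (the memory-hungry part) extends the reach to factors q where q - 1 has one extra prime between B1 and B2. A minimal stage-1 sketch (my own illustration, nothing like prime95's optimized implementation; `pm1_stage1` is a made-up name):

```python
from math import gcd, isqrt

def primes_up_to(n):
    """Simple sieve of Eratosthenes."""
    sieve = bytearray([1]) * (n + 1)
    sieve[0:2] = b"\x00\x00"
    for i in range(2, isqrt(n) + 1):
        if sieve[i]:
            sieve[i * i::i] = bytearray(len(sieve[i * i::i]))
    return [i for i, flag in enumerate(sieve) if flag]

def pm1_stage1(N, B1):
    """P-1 stage 1: returns a nontrivial factor of N, or None.

    It finds a factor q whenever q - 1 is B1-smooth, because then
    3^E == 1 (mod q) once E contains every prime power <= B1,
    so q divides gcd(3^E - 1, N)."""
    a = 3
    for p in primes_up_to(B1):
        pe = p
        while pe * p <= B1:  # largest power of p not exceeding B1
            pe *= p
        a = pow(a, pe, N)
    g = gcd(a - 1, N)
    return g if 1 < g < N else None
```

As a classic example, M67's factor 193707721 satisfies 193707720 = 2^3 * 3^3 * 5 * 67 * 2677, so stage 1 with B1 = 3000 should already find it. Stage 2's "one extra prime" extension is why raising B2 is so much cheaper per unit of probability than raising B1.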

I'd say that P-1 and double-checking are both short-handed. TF definitely has too many CPUs.

"Do what makes the most sense" allows me to change the server's rules for handing out assignments, which I may do someday. At present, I'm inclined to let first-time LL testers do the P-1 testing that those dedicated solely to P-1 don't get to. Yeah, the LL testers may not have enough memory to run stage 2, so we won't find quite as many factors. Another choice would be to divert "do what makes the most sense" machines with lots of memory to P-1 half-time or full-time -- and I think P-1 would still fall behind the LL testers. I'd probably define "lots of memory" as 400 or 500 MB/core.
2009-07-15, 17:42   #240
petrw1 ("Wayne")
According to some research, P-1 is NOT keeping up...

At first blush it appears that P-1 is keeping up with LL ... but it is NOT.

From April 27 to July 8 I simply counted ALL LL and P-1 attempts:

Code:
           P-1       LL
27-Apr     56,573    792,600
8-Jul      97,712    815,864
Diff.      41,139    23,264
P-1 - LL   17,875
This gives the impression that P-1 is way ahead ... HOWEVER ...

There are at least a couple of people doing P-1 on small exponents ("P1-S") who account for, by my estimate, at least 27,000 attempts. I am counting those people whose points per attempt is significantly below the expected value:

There are over 27,000 attempts from people averaging less than 0.25 points per attempt; one person alone has 20,568.

And yes, some of the LL tests are in the low-end clean-up range too (25-33M), but only a couple thousand in that same time period.

Some of these 27,000 might be P-1 to very low B1/B2, but the analysis suggests that, by far, the majority are P-1 on small exponents.

SO ... THE PROJECT STILL NEEDS MORE P-1 IN THE CURRENT FIRST TIME LL RANGE.


2009-07-15, 23:44   #241
markr ("Mark")

Quote:
Originally Posted by petrw1
There are at least a couple people doing P1-S (small) that account for in my estimation at least 27,000 attempts.
My small efforts account for < 1% of those 27,000 PM1-S in that period. (And about a dozen PM1-L; no factors for them yet, but I'll keep trying.)
2009-07-16, 03:28   #242
Primeinator ("Kyle")

Quote:
Originally Posted by Prime95
I'd say that P-1 and double-checking are both short-handed. TF definitely has too many CPUs.
Is this not desirable, though? Looking at the overall status (exponents up to one billion), it is apparent that hundreds of thousands of candidates have been factored (even those far above 50M), presumably by trial factoring, though I may be mistaken on this.

Permission is granted to copy, distribute and/or modify this document under the terms of the GNU Free Documentation License, Version 1.2 or any later version published by the Free Software Foundation.
A copy of the license is included in the FAQ.