mersenneforum.org

mersenneforum.org (https://www.mersenneforum.org/index.php)
-   PrimeNet (https://www.mersenneforum.org/forumdisplay.php?f=11)
-   -   P-1 factoring anyone? (https://www.mersenneforum.org/showthread.php?t=11101)

cheesehead 2009-07-15 00:17

[quote=petrw1;181026]In order for the project to declare an exponent to be "adequately" P1'd is there a minimum required B1 or B2 limit?[/quote]There's no standard for "adequate" P-1. There's no declaration of any exponent to be "adequately" P-1ed. There's only a server record of whether "P-1 has been done" is true or false.

The server has usually considered [I]any[/I] P-1 enough to satisfy the "Has it been P-1ed?" question.

At least once, there was a cleaning-up of cases with ultra-low B1, such as [30,something], so that they were re-designated as not having had P-1 done.

[quote]For example on the low end:
Exponent 59123023 B1/B2 = 2048 / 16384[/quote]Some folks have zipped through groups of exponents, doing P-1 to explicitly-specified low limits such as those. My guess is that they never ran any calculation to check whether the probability of finding a factor with such low limits justified the effort.

Also, they probably didn't realize that reporting unsuccessful runs with such low limits would, given the server's behavior, prevent someone else from being assigned to do P-1 to optimum limits calculated by prime95 or mprime, thus reducing GIMPS throughput.

[quote]On the other extreme I see:
Exponent 33500153 B1/B2 = 11025000 / 882000000[/quote]Some folks are more curious than others, apparently. Those limits weren't choices by the prime95 optimizing algorithm.

[quote]My assignments with 600-1200 RAM with exponents in the 50M range have B1 / B2 of about: 605000 / 16788750[/quote]I presume those are limits calculated by the prime95 optimizing algorithm, rather than being limits you explicitly specified (as those above were).

[quote]In this extreme my B1/B2 are 147 and 1024 times the small end while the biggest is 18 and 52 times bigger than my B1/B2.[/quote]Those ratios are meaningless because the extremes were surely all explicitly specified, while yours came from the optimizing algorithm.

garo 2009-07-15 11:03

As Kevin said in his deleted post - yes the chosen few can read deleted posts :smile: - "I wouldn't worry about other people's B1 and B2 limits and trust Prime95 to do its thing as long as you have at least 300MB available for P-1".

I once wrote a long monograph on P-1 for Seventeen or Bust but the basic principles are applicable in GIMPS too.
[URL="http://www.sslug.dk/%7Egrove/sbfactor/choosing_bounds.html"]http://www.sslug.dk/~grove/sbfactor/choosing_bounds.html[/URL]

Prime95 has a sophisticated algorithm to compute the P-1 bounds that looks at the exponent size, the number of tests saved if a factor is found, and the available memory. The last factor has the least influence as long as it is above a minimum.
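To make the trade-off concrete, here is a toy sketch of the kind of cost/benefit search a bound chooser performs. Every model function below is an illustrative placeholder I made up, NOT Prime95's actual code (its real estimator is far more detailed and integrates over possible factor sizes):

[code]import math

def p1_work(b1, b2):
    """Rough P-1 cost: stage 1 is ~1.44*B1 squarings; stage 2 costs
    roughly one multiplication per prime in (B1, B2]."""
    stage2 = (b2 - b1) / math.log(b2) if b2 > b1 else 0.0
    return 1.44 * b1 + stage2

def success_prob(b1, b2):
    """Placeholder success probability, increasing in both bounds."""
    return 0.02 * math.log10(b1) + 0.005 * math.log10(max(b2 / b1, 1.0))

def net_benefit(b1, b2, tests_saved, ll_work):
    """Expected LL work avoided minus the P-1 work spent."""
    return success_prob(b1, b2) * tests_saved * ll_work - p1_work(b1, b2)

def choose_bounds(tests_saved=2.0, ll_work=5e8):
    """Grid-search B1 and the B2/B1 ratio for the best expected benefit."""
    candidates = [(net_benefit(b1, b1 * r, tests_saved, ll_work), b1, b1 * r)
                  for b1 in (50_000, 100_000, 250_000, 500_000, 1_000_000)
                  for r in (1, 10, 20, 40, 60)]
    _, b1, b2 = max(candidates)
    return b1, b2[/code]

The point is only the shape of the objective: more tests saved or a bigger LL cost pushes the optimum toward larger bounds, which is why the same exponent gets different "optimal" bounds depending on how it will be used.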

Note also that once P-1 has been done to "lower than optimal" limits, it is typically not worth redoing it to optimal limits as the additional chance of finding a factor is not enough to justify redoing the P-1. However, absurdly small limits such as B1=2000,B2=20000 do make a retest justifiable.
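A back-of-envelope sketch of why B1=2000 counts as "absurdly small": stage 1 finds a factor p roughly when p-1 is B1-smooth, and to first order that chance is rho(u) ~ u^-u with u = ln(p)/ln(B1) (Dickman's function). This crude model ignores stage 2 and the special form of Mersenne factors, so treat the numbers as an illustration only:

[code]import math

def smoothness_chance(factor_bits, b1):
    # First-order Dickman approximation rho(u) ~ u^-u for the chance
    # that a number near 2^factor_bits is b1-smooth.
    u = factor_bits * math.log(2) / math.log(b1)
    return u ** -u

tiny = smoothness_chance(64, 2_000)    # the "absurdly small" bound
sane = smoothness_chance(64, 600_000)  # a typical optimizer-chosen bound[/code]

On this model the sensible bound is several hundred times more likely to catch a 64-bit factor than B1=2000, which is why redoing such runs can pay for itself while redoing "merely low" runs usually can't.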

If you are interested in redoing P-1 for exponents where you think the bounds were not sufficient, I would look at the average bounds of surrounding exponents and then pick any exponent whose bounds were say 1/10 of the average. You can also look at the excellent calculator at: [URL]http://mersenne-aries.sili.net/prob.php[/URL] to find the probability of finding a factor.

Primeinator 2009-07-15 15:15

[quote=garo;181089] in this thread: [url]http://www.mersenneforum.org/showthread.php?t=12156[/url]

Just a couple of additional comments.

The really low hanging fruit was picked by our forum admin several years ago.

GIMPS is desperately short of people to do P-1 at the leading edge of LL. A majority of exponents being LL tested are not getting project optimal P-1 testing. Would you consider doing this more project-critical work instead? I know the P-IV is slow but it should get a P-1 test out in 10 days.[/quote]

So most people are electing to skip P-1 at the currently active testing range?

Brian-E 2009-07-15 15:50

[quote=garo;181089]GIMPS is desperately short of people to do P-1 at the leading edge of LL.[/quote]
Any obvious reason then why my machine is never given P-1 tasks when I have the preferences set to "do what makes sense"? I get TF and LL-double checking only (and used to get LL first-time tests too until around the time the v5 server went live 8-9 months ago). It is a single core AMD Athlon 64 bit 3800+ machine running mprime on top of Linux.
I have limited the memory use by mprime to 250M because the machine is only switched on when I am using it and I want the bulk of my 1 gigabyte for my own use. But if I increased the memory allowance for mprime slightly, would this make the server give me the much-required P-1 work? Or is my machine unsuitable for it anyway?

Mini-Geek 2009-07-15 16:08

[quote=Brian-E;181117]Any obvious reason then why my machine is never given P-1 tasks when I have the preferences set to "do what makes sense"? I get TF and LL-double checking only (and used to get LL first-time tests too until around the time the v5 server went live 8-9 months ago). It is a single core AMD Athlon 64 bit 3800+ machine running mprime on top of Linux.
I have limited the memory use by mprime to 250M because the machine is only switched on when I am using it and I want the bulk of my 1 gigabyte for my own use. But if I increased the memory allowance for mprime slightly, would this make the server give me the much-required P-1 work? Or is my machine unsuitable for it anyway?[/quote]
I don't think P-1 is ever selected by default when everything is just set to "do what makes sense". (also, when you're only given TF and DC assignments, of course no P-1 will be done) You can choose to be assigned P-1 from mprime's menu (assuming it's got the same options as Prime95 25.11), [URL]http://www.mersenne.org/cpus/[/URL] (for just that CPU), or [URL]http://www.mersenne.org/worktype/[/URL] (for your whole account). Your machine might take some time to do it, but I think it's still more than suitable, especially since it can use a good amount of memory.
From Prime95's readme:[code]4) Factor in the information below about minimum, reasonable, and
desirable memory amounts for some sample exponents. If you choose a
value below the minimum, that is OK. The program will simply skip
stage 2 of P-1 factoring.

Exponent    Minimum   Reasonable   Desirable
--------    -------   ----------   ---------
20000000       40MB         80MB       120MB
33000000       65MB        125MB       185MB
50000000       85MB        170MB       250MB
[/code]So your amount of memory is on the highest tier for even up to 50M exponents.
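For reference, the allowance itself is the Memory line in mprime's local.txt; the value is in megabytes and mainly affects P-1 stage 2. This is a hedged example of the basic form only; check the readme for your version for the exact syntax and any time-of-day variants:

[code]Memory=300[/code]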

Brian-E 2009-07-15 16:25

Yes, I originally set it to 250M (2 years ago) on the basis of that file. But I read in [URL="http://www.mersenneforum.org/showpost.php?p=181082&postcount=233"]this very recent posting[/URL] from garo that 300M may be required.
Also if P-1 is never given if you select "do what makes sense", I must ask why. I deliberately chose that preference because I want to do whatever the project most requires. If I should really re-set that to a preference for P-1 then I will do so, but I find that strange.
Thanks for your reply. :smile:

Mini-Geek 2009-07-15 16:56

That seems quite odd to me, too. Maybe I'm mistaken; I'll look it up at PrimeNet.
[URL]http://www.mersenne.org/thresholds/[/URL]
[quote][B]Thresholds for P-1 factoring assignments[/B]
Required memory for P-1 assignments 300 MB[/quote]I'm not 100% sure how to interpret this, but obviously the PrimeNet server prefers that anything being assigned P-1 has at least 300 MB available. I'm not sure if, for you and your machine, it will automatically assign P-1 if you allow 300 MB or not. In any case, if you want you can choose P-1 specifically, either with 250 MB or 300 MB. I don't know if the extra 50 MB is worth the extra chance to you. Here are a few stats: ("page" refers to stats gathered from [URL]http://mersenne-aries.sili.net/prob.php[/URL], which was linked in the post you linked, using the B1 and B2 values from Prime95)[code]8MB:
Prime95: 3.57%
page:
M50766601, factored to 68 bits, with B1=895,000 and B2=895,000
Probability = 4.11268%
Should take about 2.75 GHz-days

250MB:
Prime95: 5.98%
page:
M50766601, factored to 68 bits, with B1=575,000 and B2=9,343,750
Probability = 6.13057%
Should take about 3.24 GHz-days

300MB:
Prime95: 6.19%
page:
M50766601, factored to 68 bits, with B1=590,000 and B2=11,062,500
Probability = 6.36555%
Should take about 3.57 GHz-days[/code]
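Plugging the 250 MB and 300 MB figures above into a quick marginal-value check (the numbers are copied straight from the table; nothing else is assumed):

[code]# Figures from the table above (percent probability, GHz-days).
prob_250, work_250 = 6.13057, 3.24
prob_300, work_300 = 6.36555, 3.57

extra_prob = prob_300 - prob_250    # ~0.235 percentage points
extra_work = work_300 - work_250    # ~0.33 GHz-days
marginal = extra_prob / extra_work  # probability gained per extra GHz-day[/code]

That works out to roughly 0.7 percentage points of factor probability per extra GHz-day, so the last 50 MB buys a small but real improvement.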

Prime95 2009-07-15 17:01

IMO, if you enjoy finding factors do P-1 on small exponents. Choose B2 as roughly 20*B1 so that it spends an equal amount of time in stage 1 and stage 2.

I'd say that P-1 and double-checking are both short-handed. TF definitely has too many CPUs.

"Do what makes the most sense" allows me to change the server's rules for handing out assignments, which I may do someday. At present, I'm inclined to let first-time LL testers do the P-1 testing that those dedicated solely to P-1 don't get to. Yeah, the LL testers may not have enough memory to run stage 2, so we won't find quite as many factors. Another choice would be to divert "do what makes the most sense" machines with lots of memory to P-1 half-time or full-time -- and I think P-1 would still fall behind the LL testers. I'd probably define "lots of memory" as 400 or 500 MB/core.

petrw1 2009-07-15 17:42

According to some research, P-1 is NOT keeping up. At first blush it appears that P-1 is keeping up with LL, but it is not.

From April 27 - July 8 I simply counted ALL LL and P-1 Attempts:

[CODE]
           P-1        LL
27-Apr  56,573   792,600
8-Jul   97,712   815,864
Diff.   41,139    23,264

P1 - LL 17,875 [/CODE]
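The counts can be reproduced directly (numbers copied from the table; nothing assumed):

[code]p1_apr, ll_apr = 56_573, 792_600   # totals on 27-Apr
p1_jul, ll_jul = 97_712, 815_864   # totals on 8-Jul

p1_new = p1_jul - p1_apr    # 41,139 new P-1 attempts
ll_new = ll_jul - ll_apr    # 23,264 new LL attempts
surplus = p1_new - ll_new   # 17,875 apparent P-1 surplus[/code]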

This gives the impression that P-1 is way ahead ... HOWEVER ...

There are at least a couple of people doing P1-S (small) that account for, by my estimate, at least 27,000 attempts. I am counting those people whose points per attempt are significantly below the expected:

There are over 27,000 attempts from people averaging less than 0.25 points per attempt; 1 person has 20,568.

And, yes some of the LL are in the low-end clean-up range too (25-33M) but only a couple thousand in that same time period.

Some of these 27,000 might be P1 to very low B1/B2 but the analysis suggests that, by far, the majority of these are P1 on small exponents.

SO ... THE PROJECT STILL NEEDS MORE P-1 IN THE CURRENT FIRST TIME LL RANGE.

:batalov:

markr 2009-07-15 23:44

[QUOTE=petrw1;181141]There are at least a couple people doing P1-S (small) that account for in my estimation at least 27,000 attempts.[/QUOTE]
My small efforts account for < 1% of those 27,000 PM1-S in that period. (And about a dozen PM1-L, no factors yet for them but I'll keep trying. :smile:)

Primeinator 2009-07-16 03:28

[QUOTE]I'd say that P-1 and double-checking are both short-handed. TF definitely has too many CPUs. [/QUOTE]

Is this not desirable, though? Looking at the overall status (exponents to one billion), it is apparent that hundreds of thousands of potential candidates have been factored (even those far above 50M), presumably by trial factoring, though I may be mistaken on this.



Powered by vBulletin® Version 3.8.11
Copyright ©2000 - 2021, Jelsoft Enterprises Ltd.