mersenneforum.org

mersenneforum.org (https://www.mersenneforum.org/index.php)
-   PrimeNet (https://www.mersenneforum.org/forumdisplay.php?f=11)
-   -   P-1 factoring anyone? (https://www.mersenneforum.org/showthread.php?t=11101)

Mr. P-1 2011-01-21 16:03

[QUOTE=James Heinrich;247378]Here's my formal request to George to add a couple more filters to [url]http://www.mersenne.org/report_factoring_effort/[/url] (similar to the attached screenshot)[/QUOTE]

Seconded.

James Heinrich 2011-01-23 03:41

[QUOTE=Mr. P-1;154303]George suggested in [url=http://mersenneforum.org/showpost.php?p=87549&postcount=74]this post[/url] (20 Sep 06):[quote][quote]Prime95 processes from B1 to B2 in groups of 30 (2*3*5) or 210 (2*3*5*7) or 2310 (2*3*5*7*11). There are 8, 48, or 480 relative primes respectively. The bigger the group, the less fixed cost in counting from B1 to B2.[/quote]I suspect the "knee" is somewhere around 13 temporary variables. This lets prime95 run the 2*3*5 case in one pass over the 8 relative primes. So for a 2048K FFT you would need roughly 13 * 2048K * 8 (sizeof of a float) + 10M (rough guess of sin/cos and other needed data). That is 218MB. Do some real world tests to see if my suspicion is correct. [/quote] that enough memory for eight relative primes is where the "knee" is, i.e., the point at which the law of diminishing returns sets in as far as extra memory is concerned. However that was for a much earlier version of the client, so I don't know if this is still valid.[/QUOTE][QUOTE=Prime95;154315](20 Dec 08)
Yes, the stage 2 code has changed greatly since then.[/QUOTE]Sorry to dredge up a post from >2 years ago, but I don't think George ever explained his answer. In current versions, where is the "knee"?

Is it reasonable to interpret that enough allocated RAM to process 8 relative primes would be a reasonable lower limit; 48 relative primes a "generous" amount, and anything above 480 relative primes useless (except perhaps for Suyama's trick)?
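George's arithmetic in the quoted post can be reproduced directly. A quick sketch, with every quantity taken from the quote (13 temporaries, a 2048K FFT, 8 bytes per double, ~10MB of sin/cos and other data); his "218MB" comes out if you read MB as MiB:

```python
# Reproducing the stage-2 memory estimate from George's quoted post:
# 13 temporary variables, each needing an FFT-sized buffer of doubles,
# plus a rough fixed overhead for sin/cos and other data.

def stage2_memory_bytes(temporaries=13, fft_length=2048 * 1024,
                        bytes_per_double=8, overhead=10_000_000):
    """One FFT-sized buffer per temporary variable, plus fixed overhead."""
    return temporaries * fft_length * bytes_per_double + overhead

total = stage2_memory_bytes()
print(total, round(total / 2**20))  # 228103808 bytes, ~218 MiB
```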

Mr. P-1 2011-01-23 09:59

[QUOTE=James Heinrich;248613]Sorry to dredge up a post from >2 years ago, but I don't think George ever explained his answer. In current versions, where is the "knee"?

Is it reasonable to interpret that enough allocated RAM to process 8 relative primes would be a reasonable lower limit; 48 relative primes a "generous" amount, and anything above 480 relative primes useless (except perhaps for Suyama's trick)?[/QUOTE]

The code will use enough memory for 2400 relative primes if you give it enough. What benefit there is in having more than 480, I don't know.

James Heinrich 2011-01-25 19:13

[QUOTE=James Heinrich;247011]That readily show exponents in a certain range with specific TF limits, I was wondering if there was something similar where you could search by P-1 bounds?[/QUOTE]This isn't quite a search, but it does give you a list of exponents that have poorly-done P-1 factoring (the definition of "poor" is configurable, but defaults to <2.5% probability). Data is pulled live from PrimeNet (in 100k range chunks), and the output is filtered to only show poor exponents, in worktodo format, with the "worst" ones first.

[url]http://mersenne-aries.sili.net/p1small.php[/url]

garo 2011-01-30 20:14

So... Most of the exponents in the 52M range have not been P-1 tested and are being handed out for LL without P-1. Some will undoubtedly get a good P-1, some will get a stage 1 only, and some will get nothing! Does it make sense for some of us to do a quickie P-1 on exponents in this range? By quickie, I mean setting the number of LL tests saved to 1, which seems to give a B1 of about half and a B2 of about a third, and reduces the chance of finding a factor from 6.8% to 4.5%. The disadvantage is that it will take away resources from doing a "proper" P-1 on exponents in the 53M range that I am currently testing. Thoughts?
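garo's trade-off can be put into rough numbers. In this back-of-envelope sketch, the LL and P-1 durations are entirely invented placeholders; only the two success probabilities (6.8% full, 4.5% quickie) come from the post:

```python
# Rough expected-value comparison of a full vs "quickie" P-1, per hour of
# P-1 effort. LL_HOURS and the two P-1 durations are hypothetical; only the
# probabilities (6.8% full, 4.5% quickie) come from the post above.

def ll_hours_saved_per_p1_hour(p_factor, p1_hours, ll_hours, tests_saved=2.0):
    """Expected LL-machine hours saved per hour spent on P-1.
    A found factor eliminates both the first-time LL and the double-check,
    hence tests_saved defaults to 2."""
    return p_factor * tests_saved * ll_hours / p1_hours

LL_HOURS = 60.0        # hypothetical first-time LL duration
FULL_P1_HOURS = 3.0    # hypothetical full-bounds P-1 duration
QUICK_P1_HOURS = 1.3   # hypothetical: B1 halved, B2 roughly a third

full = ll_hours_saved_per_p1_hour(0.068, FULL_P1_HOURS, LL_HOURS)
quick = ll_hours_saved_per_p1_hour(0.045, QUICK_P1_HOURS, LL_HOURS)
print(full, quick)  # with these made-up timings the quickie wins per hour
```

Whether the quickie really wins depends entirely on the real timing ratio between the two jobs, which is exactly what the following posts argue about.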

James Heinrich 2011-01-30 22:50

Assuming we had the computing power (which I suspect we don't) to do a "quickie" P-1 on all assignments before handing out for LL first-time, we'd do a "poor" job of P-1 on everything, as opposed to doing a "good" job on some, "great" job on a few, "ok" job on many, and "poor" job on some. I suspect that the current balance of "good/great" vs "poor" will net out to "ok" overall. The [i]dis[/i]advantage of everything being just "ok" as opposed to some-good + some-bad is that it's not practical to go back (years from now) and re-do only the few "bad" P-1 to higher standards. Right now it's pretty easy to [url=http://mersenne-aries.sili.net/p1small.php?min=51800000]pick out the bad ones[/url] and re-do them.

My vote is to [i]not[/i] do a half-job on P-1. Leave the status quo. If possible, get George to focus more "let PrimeNet decide" clients on P-1 work, at least those that have a good amount of RAM allocated.

lycorn 2011-01-31 00:21

[QUOTE=James Heinrich;250531]
My vote is to [I]not[/I] do a half-job on P-1. Leave the status quo.[/QUOTE]

Seconded.

Mr. P-1 2011-01-31 23:20

garo is right in principle, though perhaps wrong in his choice to reduce the number of LLs saved to 1.

The optimisation problem faced by the machine that takes unP-1ed LL tests is to maximise GIMPS throughput, which it does by minimising the expected time that will be spent on that exponent. This is optimised when d/dt (t - 2*p*L) = 0, where L is the time taken by an LL test, t is the time spent doing P-1, and p is the probability of success as a function of that time.

A dedicated P-1 machine has a quite different optimisation problem. Its task is to maximise GIMPS throughput, which it does by maximising the ratio of (amount of time saved on other machines) to (amount of time spent by me).

Time saved on other machines equals the time that would otherwise be spent by the LL machine doing P-1 if I don't do it, plus the cost of 2 LLs multiplied by (my probability of finding a factor - the LL machine's probability of finding a factor).

The ratio is maximised when d/dt ((t[sub]L[/sub] + 2 * L * (p - p[sub]L[/sub])) / t) = 0, where t[sub]L[/sub] is the time the LL machine would spend doing P-1 if I don't, p[sub]L[/sub] is the corresponding probability of success, and the other parameters are as before. Note that t[sub]L[/sub] and p[sub]L[/sub] do not depend upon t, which is the amount of time [i]I[/i] spend on the exponent. We don't know the actual values of these parameters, but it ought to be possible to work out an average for each.

The 1/t in that formula is because the number of different LL machines we can save time on is inversely proportional to the amount of time we spend on each P-1. It's worth spending less time on each P-1 in order to be able to do more of them.
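To make the contrast concrete, here is a toy numeric sketch of the two first-order conditions. The success curve p(t) and every constant in it are invented for illustration; the point is only that when the LL machine would otherwise do a brief P-1 (small t[sub]L[/sub], p[sub]L[/sub]), the ratio-maximising t for a dedicated machine comes out smaller than the LL machine's own optimum:

```python
import math

# Toy numeric illustration of the two optimisation problems above.
# p(t) is a made-up concave success curve and L a hypothetical LL duration;
# none of these numbers come from real GIMPS timings.

L = 60.0                                     # hours per LL test (invented)
p = lambda t: 0.1 * (1 - math.exp(-t / 2))   # toy P-1 success probability

ts = [i / 100 for i in range(10, 801)]       # candidate P-1 times (hours)

# LL machine: minimise expected net time, i.e. maximise 2*L*p(t) - t.
ll_opt = max(ts, key=lambda t: 2 * L * p(t) - t)

# Dedicated machine: maximise (t_L + 2*L*(p(t) - p_L)) / t, where t_L, p_L
# describe the brief P-1 the LL machine would otherwise do itself.
t_L = 1.0
p_L = p(t_L)
ded_opt = max(ts, key=lambda t: (t_L + 2 * L * (p(t) - p_L)) / t)

print(ll_opt, ded_opt)  # the dedicated-machine optimum is the smaller
```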

Another way to look at it is:

A few excellent P-1s (done by me) + some excellent + some very good + some good + some average + some poor + some minimal + many no stage 2 at all.

isn't as good as

More very good P-1s (done by me) + fewer excellent + fewer very good + fewer good + fewer average + fewer poor + fewer minimal + fewer no stage 2 at all.

Quantifying this analysis is difficult, and depends very much upon how good a P-1 an LL machine typically does. My intuition is that setting the number of LLs saved to 1 is overdoing it.

petrw1 2011-02-01 19:32

100 Exponents ready for LL in 58M Range...but
 
Here's the first third with the lowest B1/B2
[CODE]
Exponent   Bits    B1      B2
58021549    69     30      30
58021583    69     30      30
58021589    69     30      30
58021673    69     50      50
58021741    69     50      50
58021751    69     50      50
58021807    69     50      50
58021891    69     50      50
58021907    69     50      50
58021339    69    100    1000
58020533    69    100   10000
58020601    69    100   10000
58020629    69    100   10000
58020649    69    100   10000
58020691    69    100   10000
58020713    69    100   10000
58020811    69    100   10000
58020869    69    100   10000
58021001    69    200   20000
58021049    69    200   20000
58021189    69    200   20000
58021207    69    200   20000
58021237    69    200   20000
58021261    69    200   20000
58021267    69    200   20000
58021279    69    200   20000
58021283    69    200   20000
58021333    69    200   20000
58020047    69    500   50000
58020059    69    500   50000
58020223    69    500   50000
58020379    69    500   50000
58020409    69    500   50000
[/CODE]

Couldn't there be a limit below which a P-1 is really NOT a P-1, especially the first 9 above?

James Heinrich 2011-02-01 20:34

[QUOTE=petrw1;250840]couldn't there be a limit below which a P-1 is really NOT a P-1; especially the first 9 above?[/QUOTE]Going down [url=http://mersenne-aries.sili.net/p1small.php?min=58000000]that list[/url], it's not until you get to 100/10000 that you get a non-zero probability ([url=http://mersenne-aries.sili.net/prob.php?exponent=58021549&b1=100&b2=10000&factorbits=&K=1&C=-1&submitbutton=Calculate]0.000778%[/url], which isn't [i]much[/i] above zero...)

Would it make sense to put a filter in place [i]at the PrimeNet level[/i] (as opposed to in the client(s)) to NOT accept any P-1 work that's done to ridiculously low bounds (e.g. probability < 0.1%)? Not only not give credit for it, but not accept it as having had P-1 done. Of course, if by some miracle that minuscule P-1 attempt did find a factor then accept that, but otherwise reject all worse-than-nothing P-1 results, so the exponents can be re-assigned to someone who will do at least a half-decent job.
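A server-side filter of this kind might be sketched as follows. The thresholds and names here are invented for illustration; the real test would presumably use a probability estimate rather than raw bounds:

```python
# Hypothetical sketch of the server-side filter proposed above: reject a
# no-factor P-1 result whose bounds fall below per-range minimums, so the
# exponent stays eligible for a proper P-1 later. Both threshold values
# and the function name are invented for illustration.

MIN_B1 = 50_000      # invented minimum stage-1 bound
MIN_B2_RATIO = 10    # invented: require B2 >= 10 * B1

def accept_p1_result(b1, b2, factor_found):
    """Accept any result that found a factor; otherwise require the bounds
    to clear the minimums before the exponent is marked as P-1 done."""
    if factor_found:
        return True
    return b1 >= MIN_B1 and b2 >= MIN_B2_RATIO * b1
```

With these (made-up) thresholds, the 30/30 and 50/50 results above would be rejected, while a factor found at any bounds would still be recorded.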

I've grabbed the first 21 exponents (up to the first 200/20000), should have them done in a day or so (split between 7 cores).

markr 2011-02-01 21:15

[QUOTE=petrw1;250840]Here's the first third with the lowest B1/B2[/QUOTE]
There are only eight others in 58M with poor P-1, so here they are:
58020041,69,10000,1000000
58020169,69,10000,1000000
58020199,69,6000,600000
58020301,68,500,50000
58020427,69,1000,100000
58020709,68,100,10000
58021409,68,100,10000
58021451,69,100000,100000

When I last checked, back in December, these had poor P-1 also:
59123023,70,2048,16384
61020061,69,1000,100000
and then nothing until a bunch in 100M.

[QUOTE]couldn't there be a limit below which a P-1 is really NOT a P-1; expecially the first 9 above?[/QUOTE]
I think PrimeNet might have some limits on what counts as P-1, but it may depend on the range. Examples:

48090437,69,2500,5000: I picked this up as an LL test, to do a better P-1, and the manual assignment page gave something like this (note the final 0):
Test=D7BCCED3EF0FE09A9C22F3DBDEADBEEF,48090437,69,0

55500031,68,500,500: The manual assignment page actually allowed this as P-1:
PFactor=CA82AA7D593476BB9F28375EDEADBEEF,1,2,55500031,-1,68,2

IIRC, nothing like those two cases happened for my P-1 in 58M (58021973 to 58022929, 27 tests, 3 factors) but then I only tried assigning a couple of them.
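For reference, the fields in the two quoted worktodo lines can be unpacked like this. The field meanings follow the usual Prime95 worktodo conventions as I understand them; treat this as a sketch rather than a definitive spec:

```python
# Minimal parser for the two worktodo line types quoted above, to make the
# fields explicit. Field meanings follow common Prime95 worktodo conventions
# (Test=AID,exponent,tf_bits,p1_done and
#  PFactor=AID,k,b,n,c,tf_bits,tests_saved); sketch, not a reference.

def parse_worktodo(line):
    kind, rest = line.split('=', 1)
    parts = rest.split(',')
    if kind == 'Test':
        aid, exponent, bits, p1_done = parts
        # a trailing 0 means P-1 has NOT been done to sufficient bounds
        return {'type': 'Test', 'aid': aid, 'exponent': int(exponent),
                'tf_bits': int(bits), 'p1_done': p1_done == '1'}
    if kind == 'PFactor':
        aid, k, b, n, c, bits, tests_saved = parts
        return {'type': 'PFactor', 'aid': aid, 'exponent': int(n),
                'tf_bits': int(bits), 'tests_saved': int(tests_saved)}
    raise ValueError('unrecognised worktodo line: ' + line)

rec = parse_worktodo('Test=D7BCCED3EF0FE09A9C22F3DBDEADBEEF,48090437,69,0')
print(rec['exponent'], rec['p1_done'])  # 48090437 False
```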

