#397
Jun 2003
7×167 Posts

#398
"James Heinrich"
May 2004
ex-Northern Ontario
7·13·47 Posts

Quote:
Is it reasonable to interpret that enough allocated RAM to process 8 relative primes would be a reasonable lower limit; 48 relative primes a "generous" amount, and anything above 480 relative primes useless (except perhaps for Suyama's trick)?

#399
Jun 2003
10010010001₂ Posts

#400
"James Heinrich"
May 2004
ex-Northern Ontario
7×13×47 Posts

http://mersenne-aries.sili.net/p1small.php

#401
Aug 2002
Termonfeckin, IE
2⁴·173 Posts

So... Most of the exponents in the 52M range have not been P-1 tested and are being handed out for LL without P-1. Some will undoubtedly get a good P-1, some will get a stage-1-only run, and some will get nothing! Does it make sense for some of us to do a quickie P-1 on exponents in this range? By quickie, I mean setting the number of LL tests saved to 1, which seems to give about half the usual B1 and about a third of the usual B2, and reduces the chance of finding a factor from 6.8% to 4.5%. The disadvantage is that it would take away resources from doing a "proper" P-1 on exponents in the 53M range that I am currently testing. Thoughts?
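
For reference, a "quickie" like that is just a matter of what goes in the last field of the P-1 worktodo entry (the number of LL tests saved). A sketch using the same PFactor syntax quoted later in this thread (the assignment ID and exponent here are placeholders, not a real assignment):

Code:
PFactor=<assignment_id>,1,2,52000033,-1,69,1

versus the usual 2 in the last field for a full-bounds run.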

#402
"James Heinrich"
May 2004
ex-Northern Ontario
7·13·47 Posts

Assuming we had the computing power (which I suspect we don't) to do a "quickie" P-1 on all assignments before they're handed out for first-time LL, we'd do a mediocre job of P-1 on everything, as opposed to the current mix of a "great" job on a few, a "good" job on some, an "ok" job on many, and a "poor" job on some. I suspect the current balance of "good/great" vs "poor" nets out to roughly "ok" overall anyway. The disadvantage of everything being uniformly mediocre, as opposed to some-good + some-bad, is that it's not practical to go back (years from now) and re-do only the few "bad" P-1 runs to higher standards. Right now it's pretty easy to pick out the bad ones and re-do them.

My vote is to not do a half-job on P-1. Leave the status quo. If possible, get George to focus more "let PrimeNet decide" clients on P-1 work, at least those that have a good amount of RAM allocated.

#403
"GIMFS"
Sep 2002
Oeiras, Portugal
2·7·113 Posts

#404
Jun 2003
2221₈ Posts

garo is right in principle, though perhaps wrong in his choice to reduce the number of LLs saved to 1.

The optimisation problem faced by the machine that takes un-P-1'ed LL tests is to maximise GIMPS throughput, which it does by minimising the expected time it will spend on that exponent. This is optimised when d/dt (t - 2*p*L) = 0, where L is the time taken by an LL test, t is the time spent doing P-1, and p is the probability of success as a function of that time.

A dedicated P-1 machine has a quite different optimisation problem. Its task is also to maximise GIMPS throughput, but it does so by maximising the ratio of (time saved on other machines) to (time spent by me). Time saved on other machines equals the time that would otherwise be spent by the LL machine doing P-1 if I don't do it, plus the cost of 2 LLs multiplied by (my probability of finding a factor - the LL machine's probability of finding a factor). The ratio is maximised when d/dt ((t_L + 2*L*(p - p_L)) / t) = 0, where t_L is the time the LL machine would spend doing P-1 if I don't, p_L is its probability of success, and the other parameters are as before. Note that t_L and p_L do not depend on t, the amount of time I spend on the exponent. We don't know the actual values of these parameters, but it ought to be possible to work out an average for each.

The 1/t in that formula is there because the number of different LL machines we can save time on is inversely proportional to the amount of time we spend on each P-1. It's worth spending less time on each P-1 in order to be able to do more of them.

Another way to look at it:

A few excellent P-1s (done by me) + some excellent + some very good + some good + some average + some poor + some minimal + many with no stage 2 at all

isn't as good as

more very good P-1s (done by me) + fewer excellent + fewer very good + fewer good + fewer average + fewer poor + fewer minimal + fewer with no stage 2 at all.

Quantifying this analysis is difficult, and depends very much on how good a P-1 an LL machine typically does. My intuition is that setting the number of LLs saved to 1 is overdoing it.
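
To make that concrete, here is a rough numerical sketch of the two objectives. It is not anything from the client: the probability curve p(t) and every constant in it are made up, and a real calculation would use the proper B1/B2 probability figures. With these toy numbers the dedicated machine's optimum lands at a smaller t than the LL machine's, which is the point of the argument above.

Code:
# Toy comparison of the two P-1 optimisation problems described above.
# All numbers are illustrative placeholders, not real GIMPS figures.
import math

L = 100.0       # time for one LL test (arbitrary units)
t_LL = 1.0      # P-1 time a typical LL machine would spend if we don't
p_LL = 0.02     # ...and the factor probability it would reach by doing so

def p(t):
    """Made-up probability that P-1 finds a factor after spending time t
    (diminishing returns as t grows)."""
    return 0.07 * (1.0 - math.exp(-t / 2.0))

ts = [0.05 * i for i in range(1, 400)]  # candidate P-1 times to try

# LL machine: minimise its own expected total time, t + (1 - p(t)) * 2L,
# which is the same as minimising t - 2*p(t)*L.
best_ll = min(ts, key=lambda t: t + (1.0 - p(t)) * 2.0 * L)

# Dedicated P-1 machine: maximise time saved elsewhere per unit of its own
# time, (t_LL + 2*L*(p(t) - p_LL)) / t.
best_p1 = max(ts, key=lambda t: (t_LL + 2.0 * L * (p(t) - p_LL)) / t)

print("LL machine's optimal P-1 time:       ", round(best_ll, 2))
print("dedicated P-1 machine's optimal time:", round(best_p1, 2))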

#405
1976 Toyota Corona years forever!
"Wayne"
Nov 2006
Saskatchewan, Canada
3×5²×71 Posts

Here's the first third with the lowest B1/B2
Code:
Exponent    Bits  B1    B2
58021549    69    30    30
58021583    69    30    30
58021589    69    30    30
58021673    69    50    50
58021741    69    50    50
58021751    69    50    50
58021807    69    50    50
58021891    69    50    50
58021907    69    50    50
58021339    69    100   1000
58020533    69    100   10000
58020601    69    100   10000
58020629    69    100   10000
58020649    69    100   10000
58020691    69    100   10000
58020713    69    100   10000
58020811    69    100   10000
58020869    69    100   10000
58021001    69    200   20000
58021049    69    200   20000
58021189    69    200   20000
58021207    69    200   20000
58021237    69    200   20000
58021261    69    200   20000
58021267    69    200   20000
58021279    69    200   20000
58021283    69    200   20000
58021333    69    200   20000
58020047    69    500   50000
58020059    69    500   50000
58020223    69    500   50000
58020379    69    500   50000
58020409    69    500   50000

#406
"James Heinrich"
May 2004
ex-Northern Ontario
7·13·47 Posts

Would it make sense to put a filter in place at the PrimeNet level (as opposed to in the client(s)) to NOT accept any P-1 work done to ridiculously low bounds (e.g. probability of finding a factor < 0.1%)? Not only not give credit for it, but not record the exponent as having had P-1 done. Of course, if by some miracle that minuscule P-1 attempt did find a factor then accept it, but otherwise reject all worse-than-nothing P-1 results so those exponents can be re-assigned to someone who will do at least a half-decent job.

I've grabbed the first 21 exponents (up to the first 200/20000); I should have them done in a day or so (split between 7 cores).
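
As a rough sketch of the kind of filter I mean (nothing like actual PrimeNet code; the thresholds here are invented, and a real version would convert the reported B1/B2 into a proper success probability and compare it against the 0.1% cut-off):

Code:
# Sketch of a server-side "was this P-1 worth recording?" filter.
# Thresholds are invented for illustration only.

MIN_B1 = 10_000      # invented: anything below this isn't a real attempt
MIN_B2_RATIO = 5     # invented: require at least some stage 2 (B2 >= 5*B1)

def accept_p1_result(exponent: int, b1: int, b2: int, factor_found: bool) -> bool:
    """Should this P-1 result be recorded as 'P-1 done' for the exponent?"""
    # exponent isn't used by this crude version, but a real probability
    # calculation would need it (along with the trial-factoring depth).
    if factor_found:
        return True              # accept any factor, however lucky
    if b1 < MIN_B1:
        return False             # bounds far too low to count
    if b2 < MIN_B2_RATIO * b1:
        return False             # effectively no stage 2
    return True

# e.g. the 58021549 entry with B1=B2=30 from the list above would be rejected:
print(accept_p1_result(58021549, 30, 30, factor_found=False))                # False
print(accept_p1_result(58021549, 500_000, 10_000_000, factor_found=False))   # True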

#407
"Mark"
Feb 2003
Sydney
1075₈ Posts

There are only eight others in 58M with poor P-1, so here they are:
58020041,69,10000,1000000
58020169,69,10000,1000000
58020199,69,6000,600000
58020301,68,500,50000
58020427,69,1000,100000
58020709,68,100,10000
58021409,68,100,10000
58021451,69,100000,100000

When I last checked, back in December, these had poor P-1 also:

59123023,70,2048,16384
61020061,69,1000,100000

and then nothing until a bunch in 100M.

48090437,69,2500,5000: I picked this up as an LL test, to do a better P-1, and the manual assignment page gave something like this (note the final 0):
Test=D7BCCED3EF0FE09A9C22F3DBDEADBEEF,48090437,69,0

55500031,68,500,500: The manual assignment page actually allowed this as P-1:
PFactor=CA82AA7D593476BB9F28375EDEADBEEF,1,2,55500031,-1,68,2

IIRC, nothing like those two cases happened for my P-1 in 58M (58021973 to 58022929, 27 tests, 3 factors), but then I only tried assigning a couple of them.
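
For anyone decoding those two line types, here is how I read the fields off the examples above (my interpretation of the worktodo syntax, not an official spec):

Code:
# Splitting the two worktodo line types quoted above.  Field meanings are
# inferred from those examples, so treat them as an interpretation rather
# than an official spec.

def parse_worktodo(line: str) -> dict:
    kind, rest = line.split("=", 1)
    fields = rest.split(",")
    if kind == "Test":
        # Test=<assignment id>,<exponent>,<TF bits>,<P-1 already done? 1/0>
        return {"type": "Test", "aid": fields[0], "exponent": int(fields[1]),
                "tf_bits": int(fields[2]), "p1_done": fields[3] == "1"}
    if kind == "PFactor":
        # PFactor=<assignment id>,<k>,<b>,<n>,<c>,<TF bits>,<LL tests saved>
        # (the number being factored is k*b^n + c, i.e. 1*2^n - 1 here)
        return {"type": "PFactor", "aid": fields[0], "k": int(fields[1]),
                "b": int(fields[2]), "n": int(fields[3]), "c": int(fields[4]),
                "tf_bits": int(fields[5]), "tests_saved": int(fields[6])}
    raise ValueError("unrecognised worktodo line: " + line)

print(parse_worktodo("Test=D7BCCED3EF0FE09A9C22F3DBDEADBEEF,48090437,69,0"))
print(parse_worktodo("PFactor=CA82AA7D593476BB9F28375EDEADBEEF,1,2,55500031,-1,68,2"))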