Also, svempasnake, there is a pminus1.zip file which keeps track of how much P-1 factoring has been done. The nofactor.zip only tells you how much TF has been done. And the factors.zip keeps track of the smallest factor found, whether it was through TF, through P-1, or by factoring a composite factor found in P-1.
But I think it is reasonable to assume that if the bits of a factor are significantly above the TF limit for that exponent, then that factor was found through P-1. The converse is not true, i.e., if the factor's bit count is below the TF range, it could still have been found by P-1. Garo
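Garo's rule of thumb above can be stated as a tiny classifier. A sketch, just for illustration (the function name is my own invention, not part of any GIMPS tooling):

```python
def likely_source(factor: int, tf_bits: int) -> str:
    """Guess how a known factor of a Mersenne number was found,
    given the trial-factoring depth (in bits) for that exponent."""
    if factor.bit_length() > tf_bits:
        return "P-1"        # above the TF limit: TF never searched there
    return "TF or P-1"      # below the limit: either method could find it

# The 53-digit factor discussed later in this thread, vs. a 64-bit TF depth:
print(likely_source(48091790614964383575080990595271931709835968599049593, 64))
# -> P-1
```

This captures exactly the asymmetry Garo describes: a factor above the TF limit must have come from P-1, while one below it is ambiguous.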
Those exponents only factored to 60 bits were probably LL tested by a Mac or Unix client. These programs do not include a trial factoring step before the LL test.
[quote="garo"]The nofactor.zip only tells you how much TF has been done. [/quote]Thank you for confirming this! It made me feel comfortable posting an extended table/diagram. It's in a new topic in the math group, since this thread is becoming too long, IMHO. Also, the original question has been answered already.[quote]The converse is not true, i.e., if the factor's bit count is below the TF range, it could still have been found by P-1.
Garo[/quote]Yes, and my idea was to estimate how often that happens. I am looking into nofactor.zip to do this. Sorry if that wasn't clear.
[quote="garo"]The converse is not true, i.e., if the factor's bit count is below the TF range, it could still have been found by P-1.[/quote]
That is correct, but it doesn't matter. If a factor below the TF upper bit range is found by P-1, then it could have (or would have or should have) been found (eventually) by TF. It will not affect the statistics at all.
[quote="binarydigits"]
That is correct, but it doesn't matter. If a factor below the TF upper bit range is found by P-1, then it could have (or would have or should have) been found (eventually) by TF. It will not affect the statistics at all.[/quote] ... unless there was a yet smaller factor that would have been found by TF, had the P-1 not found the higher factor first.
[code:1]
All exponents 8.25M - 13.38M in nofactor (TF goes up to 64 bits)

Factored to bits    Number of exponents
59                  13
60                  15
61                   7
62                   9
[/code:1] So I am going ahead and factoring the 40-odd exponents mentioned above that are not currently assigned, in the hope that finding a factor will save a double-check. However, since one LL test has already been performed in all these cases, I think I will only factor to 63 bits, as it is probably not worth going all the way to 2^64. Fingers crossed now :-) I'll be reporting the results through the PrimeNet manual testing pages, since that way the PrimeNet server should not bork the factoring bits in its state - you never know - and the results will get to George as well.
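Pushing TF one more bit level, as garo plans above, boils down to testing candidate divisors of a standard form. A minimal sketch (illustrative only; the naive loop is my own, and real clients sieve candidates and use much faster arithmetic), relying on the standard facts that any prime factor q of M_p, with p prime, satisfies q = 2kp + 1 and q ≡ ±1 (mod 8):

```python
def tf(p: int, max_bits: int):
    """Return the first factor of 2^p - 1 found below 2^max_bits, or None."""
    k = 1
    while True:
        q = 2 * k * p + 1
        if q.bit_length() > max_bits:
            return None                 # searched the whole range, no factor
        # Candidate must be 1 or 7 mod 8; then check q divides 2^p - 1.
        if q % 8 in (1, 7) and pow(2, p, q) == 1:
            return q
        k += 1

# Small example: M11 = 2047 = 23 * 89, and 23 = 2*1*11 + 1.
print(tf(11, 12))
# -> 23
```

In practice a client resuming from 62 bits would start k where the previous bit level left off rather than at 1, so only the new range is searched.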
[quote="dswanson"]... unless there was a yet smaller factor that would have been found by TF, had the P-1 not found the higher factor first.[/quote]
You are, of course, correct. I was only thinking along the lines of it not affecting the stats on the bit level where it found the factor, and not on the lower one. Mea culpa. That reminds me of something else: older versions of the client (up through v18 IIRC) TF'd each of 16 threads all the way through the bit range and could (and sometimes did) find multiple factors. If it found a factor in one thread, it then tried the remaining threads up to that point to find a lower one. I remember this happening to me on at least one occasion, but I believe that although it reported both factors to the server, only the lower one showed up on my account report. That was years ago and back then I never checked to see whether they both showed up in George's database. I believe that the odds of finding a factor in a particular bit range are independent of whether or not there is a factor below that range. But that does not contradict your point that if enough smaller factors are "missed" then the statistics for that lower range will be slightly skewed.
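The per-bit-range independence mentioned above can be turned into a quick estimate, using the common heuristic (an approximation, not an exact result) that a Mersenne number has a prime factor between 2^x and 2^(x+1) with probability roughly 1/x. A hypothetical helper, purely for illustration:

```python
def expected_factors(n_exponents: int, bit_lo: int, bit_hi: int) -> float:
    """Rough expected number of factors found when TFing n_exponents
    exponents over the bit range [2^bit_lo, 2^bit_hi), assuming a
    ~1/x chance of a factor per bit level x, independently per level."""
    return n_exponents * sum(1.0 / x for x in range(bit_lo, bit_hi))

# e.g. the ~44 under-factored exponents in garo's table, TF'd from 63 to 64 bits:
print(round(expected_factors(44, 63, 64), 2))
# -> 0.7
```

So extending those 44 exponents by one bit level would be expected to turn up well under one factor, which is consistent with garo's judgment that going past 63 bits is probably not worth it.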
[code:1]9995243,60
9996509,60
9996673,60
9996703,60
9996859,60
12026057,60
12195593,60
12376337,60
12376363,60
12722923,60
12750373,60
13019759,60
[/code:1] Looks like someone has started crunching this list of exponents. Umm, that's kind of not on. If you wanted to help out with this, you could have asked me and we would have divided the factoring work. Instead we are duplicating work. Please either post here if you wish to divide the work sensibly, or email me at annie@teamprimerib.com. Thanks.
toferc wrote (on Fri Sep 13, 2002 5:13 am, in the GIMPS forum The Software -> Does the LL test's factorization save or waste CPU time?): [code:1]
>Even more interesting is a factor currently under discussion in the
>Team Prime Rib Perpetual Thread (part 3) in the forums at Ars Technica:
>
>The factor 48091790614964383575080990595271931709835968599049593 of M16486579 was recently found.
[/code:1] This 53-digit factor, if indeed a proper factor, would be among the top 5 factors ever found by non-NFS methods (ECM-found factors hold all the top spots in this list). But it's not a proper factor, since it's easy to show that it's composite, and only a bit more difficult to show that it's the product of the two smaller prime factors:

13329478389458153107343 * 3607927422951418869297495045751

The smaller factor (call it p) is 74 bits, so it wouldn't have been found in the sieving step that precedes the P-1 stage. The smaller has p-1 = 2 * 293 * 887 * 3089 * 503551 * 16486579; the larger (call it q) has q-1 = 2 * 3 * 5^3 * 59 * 83 * 293 * 1571 * 136373 * 949213 * 16486579. Both of these are very smooth, so it's not surprising that a P-1 run found them both.

However, I was under the impression that the program checked whether found factors are composite or not, and should have flagged this one as composite. George?

-Ernst
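Ernst's arithmetic can be replayed directly in a few lines; every number below is taken from his post (assuming Python-style arbitrary-precision integers):

```python
p_exp = 16486579
reported = 48091790614964383575080990595271931709835968599049593
p = 13329478389458153107343            # 74-bit prime factor
q = 3607927422951418869297495045751    # larger prime factor

# The reported "factor" is composite: it splits into p * q.
assert p * q == reported

# Both primes genuinely divide M16486579, i.e. 2^16486579 ≡ 1 (mod each).
assert pow(2, p_exp, p) == 1
assert pow(2, p_exp, q) == 1

# The smoothness that let a single P-1 run find both at once:
assert p - 1 == 2 * 293 * 887 * 3089 * 503551 * p_exp
assert q - 1 == 2 * 3 * 5**3 * 59 * 83 * 293 * 1571 * 136373 * 949213 * p_exp
print("all checks pass")
```

Note that the `pow(2, p_exp, reported)` congruence also holds for the composite product, which is why a divisibility check alone (as George describes below) does not catch this case.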
Prime95 makes sure the factor divides the Mersenne number, but does not check that the factor is proper. I do that check later.