Does the LL test's factorization save or waste CPU time?
Hi, when I run an LL test on an ~16M exponent, the times needed on my home PC are these, roughly:
Factoring up to 66 bits: a bit more than half a day (maybe something like 14-16 hours). LL testing: about 10 days (or maybe 11.5 or whatever). And it says the chance to find a factor is 5%.

Given that the 5% is a good estimate, the factoring seems to me like well-spent time: it would save about 5% * ~22 days (the double check included) = roughly 1 day, i.e. more than the time spent. However, I wonder if the stated ~5% estimate is much too optimistic, for two main reasons:

1. Theoretically: If the exponent I'm working with has already been factored to 64 bits, and my factoring could only find a factor of 65 or 66 bits, the chance should be way below 5%, since the pre-factoring work has already drained out about 64/66 of that chance (and I have only something like 2/66 * 5% = a very small chance left?)

2. Practically: I have been browsing through the cleared exponents file. I didn't do this too systematically, but to my eye there are remarkably few recent successful factorings, especially in the 65-66 bit range. Maybe 1 out of 100-200 or something like that; a closer count is needed for a more accurate figure. I also browsed the recently finished 33M exponents and noticed these particular exponents seem to break the pattern: a good part (like 10-20 or so out of 100) of recent successful factorizations in connection with the LL test up there. I sense the relatively successful factorizations up there are merely "thanks" to the lack of pre-factoring?

3. Just a curiosity (?): After my 20+ LL tests, no factor has been found yet. :( (I know, this doesn't tell us much at all! But I'm usually lucky in lotteries. 8) )

Can you please tell me if my speculations are incorrect? I actually hope so, but then where did I miss out? On the other hand, if it is true that "5% is much too optimistic", then I think the overall throughput could gain some juicy percent by drastically changing Prime95's factoring criteria, maybe simply by having the currently hidden feature "SkipTrialFactoring" activated by default. 
I am almost certain that you are talking about P-1 factoring, as 16M exponents are only trial factored up to 65 bits; the status will show 66 after completing the P-1, but it could find a factor of 80 bits or more (if it finds one at all). IIRC, the server actually stores 65.5 to flag it as P-1 completed, but that displays as 66 on the reports.
...But I have been wondering the same thing. I have P4s that take a day and a half to do the P-1 test, then just 9 days for the LL test. Perhaps that's because I give it so much memory to use. I have slower machines that only spend a few hours on P-1s but with much less memory allocated. |
Barring a bug in the code, 5% is accurate. The estimation code does take into account how much trial factoring was done. Just for fun, you can change the 64 in "Test=exponent,64,0" and see how the bounds and percentages change.
I've been lucky in factoring (I'll trade you for your lottery luck). I don't have firm stats, but I think I find slightly more than my fair share. TPR recently did P-1 on a big range. I wonder if they kept stats on how many factors were found. |
[quote="binarydigits"] I have P4s that take a day and a half to do the P-1 test, then just 9 days for the LL test. Perhaps that's because I give it so much memory to use. I have slower machines that only spend a few hours on P-1s but with much less memory allocated.[/quote]
Make sure you haven't given prime95 so much memory that it is thrashing. Nothing could be worse than swapping these temporaries out to disk! |
TPR P-1 effort
We did P-1 on 1373 exponents and found 55 factors during P-1, for a success rate of 4.006%. The vast majority of the exponents were between 8.2M and 8.25M (factored to 63 bits), with some over 8.25M that require factoring to 64 bits.
|
I just recently P-1 factored around 100 or so exponents and I got close to 13 factors... But then I did another run of 100 and got 2... Most of the factors were found in P-1 stage 1, and not stage 2... For this test I let Prime95 use 920MB of memory and it actually took 800MB... I now have 16 "DF" factors found... :)
http://www.teamprimerib.com/rr1/usCL-Xyzzy.htm |
Re: Does the LL test's factorization save or waste CPU time?
[quote="svempasnake"]
1. Theoretically: If the exponent I'm working with already been factored to 64 bits, and my factoring only would find a factor of 65 or 66 bits, the chance would be way below 5%, since the pre factoring work already drained out about 64/66 of that chance (and I have only like 2/66 * 5% = very small chance left? ) [/quote] I agree with the earlier replies that you are probably talking about the P-1 factoring. But your paragraph about the trial factoring, quoted above, counts the work wrong. Each bit range has twice as many numbers as the previous range, and about the same as all the previous ranges added together. Thus if you do 65 and 66 bits, the previous pre-factoring work (1-64 bits) has only 'drained out' 1/4 of the chance, not 64/66. (Actually, this isn't completely accurate either, since the density of prime numbers decreases with size.) |
Re: Does the LL test's factorization save or waste CPU time?
[quote="roy1942"]Thus if you do 65 and 66 bits, the previous prefactoring work (1-64 bits) has only 'drained out' 1/4 of the chance, not 64/66.[/quote]
Actually, the original post was right on this point. Even though there are twice as many trial factors when going to the next bit level, the chance that they divide the Mersenne number decreases. Thus, it really is the case that factoring from 2^64 to 2^65 only has 1 chance in 65 of finding a factor. |
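For reference, the rule of thumb behind George's numbers is that the chance of a factor turning up between 2^(n-1) and 2^n is about 1/n, so each additional bit level of trial factoring adds roughly 1/n to the odds. A tiny Python sketch of that heuristic (the function name is mine, and this is only the heuristic, not Prime95's actual estimation code):

```python
def chance_of_factor(from_bits, to_bits):
    # heuristic: each bit level n contributes a ~1/n chance of a factor
    return sum(1.0 / n for n in range(from_bits + 1, to_bits + 1))

print(chance_of_factor(64, 65))   # 1/65, about 0.0154
print(chance_of_factor(64, 66))   # about 0.0305, i.e. ~3% for the two levels
```

So the two bit levels discussed in the original post account for roughly 3% by themselves; the gap up to the reported ~5% is what the P-1 step contributes.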
[quote="binarydigits"]status will show 66 after completing the P-1, but it could find a factor of 80 bits or more[/quote]Thanks for all replies and comments! So in summary, where I went wrong was when I read 66 too literally as "trial factored to 66 bits". Now I got a better clue about what P-1 actually does. Thanks again! /S-O
|
Re: Does the LL test's factorization save or waste CPU time?
[quote="Prime95"][quote="roy1942"]Thus if you do 65 and 66 bits, the previous prefactoring work (1-64 bits) has only 'drained out' 1/4 of the chance, not 64/66.[/quote]
Actually, the original post was right on this point. Even though there are twice as many trial factors when going to the next bit level, the chance that they divide the Mersenne number decreases. Thus, it really is the case that factoring from 2^64 to 2^65 only has 1 chance in 65 of finding a factor.[/quote] A couple of years ago I trial-factored a large range of numbers to a depth of 2^57. This post prompted me to do a statistical analysis of the results files to see how well George's formula matches up with empirical data. Here are my results: [code:1]
                  | Factoring range (2^n - 2^(n+1))
Mersenne number   | 52-53  53-54  54-55  55-56  56-57
==================|===========================================
                  | Candidates at start of factoring range
15470000-16000000 | 15671  15372  15075  14807  14559
17000000-18000000 | 21597  22300  24091  24197  27200
18000000-19000000 |     0      0      0      0  27276
19000000-20000000 |     0      0      0      0  26921
------------------|-------------------------------------------
Total             | 37268  37672  39166  39004  95956
==================|===========================================
                  | Candidates factored within factoring range
15470000-16000000 |   299    297    268    248    281
17000000-18000000 |   388    403    417    439    495
18000000-19000000 |     0      0      0      0    480
19000000-20000000 |     0      0      0      0    428
------------------|-------------------------------------------
Total             |   687    700    685    687   1684
==================|===========================================
                  | Percent of candidates factored within range
15470000-16000000 | 1.91%  1.93%  1.78%  1.67%  1.93%
17000000-18000000 | 1.80%  1.81%  1.73%  1.81%  1.82%
18000000-19000000 |                              1.76%
19000000-20000000 |                              1.59%
------------------|-------------------------------------------
Total             | 1.84%  1.86%  1.75%  1.76%  1.75%
Predicted*        | 1.89%  1.85%  1.82%  1.79%  1.75%
------------------|-------------------------------------------
*per George's 1/(n+1) formula
[/code:1] (The increasing number of candidates in the 17M-18M range is because the numbers in this range had already been factored to varying depths below 2^57 before I started. All other ranges started with the numbers factored to the same depth: 2^52 for the 15.47M-16M range, and 2^56 for the 18M-20M range.)

For this data set, George's formula looks pretty darn good. :D But there are some trends in the results that make me suspicious of generalizing the conclusion. In particular, the percentage of numbers factored within a range decreases as the M-number increases, but George's formula predicts that this percentage should be a constant. I suspect that if I had a similar data set in the vicinity of say 30-33M, George's formula would look much worse, at least in the 2^52 to 2^57 range.

If I were really ambitious I'm sure I could piece together a much more thorough analysis by combining the data in the factors and nofactors files. Are there any enterprising souls out there who'd like to take on this challenge...? |
Before taking on any statistical analysis of the factored data, you should be aware that the number of bits in the found factors database is not always correct. I complained about this a couple of years ago but it did no good. And I just came across another obvious example: A recently completed P-1 test reported a "65-bit" factor of a 16M exponent. I thought that was strange as the exponent had already been TF'd to 66 bits. The factor was 40498974660725214481 which is more than 9% higher than 2^66. It appears that it is "rounding" the results, but this is the equivalent of saying that the decimal number 109 is a 2-digit number because it is only 9% higher than 10^2.
At any rate, if one were to consider this factor when counting the number of found factors between 2^65 and 2^66 then one would be wrong. I realize this one was found by P-1 and not by TF, but I am pretty sure it does the same thing with TF factors found, because back when I complained about it we were not doing P-1 tests. |
[quote="binarydigits"]Before taking on any statistical analysis of the factored data, you should be aware that the number of bits in the found factors database is not always correct. I complained about this a couple of years ago but it did no good. And I just came across another obvious example: A recently completed P-1 test reported a "65-bit" factor of a 16M exponent. I thought that was strange as the exponent had already been TF'd to 66 bits. The factor was 40498974660725214481 which is more than 9% higher than 2^66. It appears that it is "rounding" the results, but this is the equivalent of saying that the decimal number 109 is a 2-digit number because it is only 9% higher than 10^2.
At any rate, if one were to consider this factor when counting the number of found factors between 2^65 and 2^66 then one would be wrong. I realize this one was found by P-1 and not by TF, but I am pretty sure it does the same thing with TF factors found, because back when I complained about it we were not doing P-1 tests.[/quote] Yes, this error applies for TF. The error is that the lower limit for a factor to be considered N+1 bits is sqrt(2)*2^N instead of just 2^N. For example: 2^64 =(approx.) 1.84e19 Any factor less than: sqrt(2)*2^64 = 1.41 * 1.84e19 = 2.61e19 will still be listed as 64 bits. I bet if you searched through the Primenet logs, you would find that the largest "64-bit" factor found is just less than 2.61e19 and the smallest "65-bit" factor is just greater than 2.61e19. I have no idea what causes this error, I just know that Primenet consistently acts as if the bit limits are a factor of sqrt(2) larger than they should be. Note that credit for factors found is awarded based on the erroneous bit depth calculated by Primenet. |
[quote="NickGlover"]
Yes, this error applies for TF. The error is that the lower limit for a factor to be considered N+1 bits is sqrt(2)*2^N instead of just 2^N. For example: 2^64 =(approx.) 1.84e19 Any factor less than: sqrt(2)*2^64 = 1.41 * 1.84e19 = 2.61e19 will still be listed as 64 bits. I bet if you searched through the Primenet logs, you would find that the largest "64-bit" factor found is just less than 2.61e19 and the smallest "65-bit" factor is just greater than 2.61e19. [/quote] To me this suggests that the number reported for a factor f is round(log2(f)). Even more interesting is a factor currently under discussion in the Team Prime Rib Perpetual Thread (part 3) in the forums at ars Technica: The factor 48091790614964383575080990595271931709835968599049593 of M16486579 was recently found. It was reported as: [code:1] 16486579 F 102 4809179061496438357508099059527 09-Sep-2002 18:38 horus2_02 horus2[/code:1] which as you can see has been rather truncated. Now, log2(truncated version) is approx. 101.9 (supporting evidence for the round(log2(f)) theory, BTW), but log2(real number) is about 175! |
[quote="toferc"]Now, log2(truncated version) is approx. 101.9 (supporting evidence for the round(log2(f)) theory, BTW), but log2(real number) is about 175![/quote]
IIRC, it has already been established that the Primenet server reports anything over 102 bits (or 101.5 bits as the case may be) as 102 bits; it can't handle anything larger than that, so it truncates. Or maybe it was 103 bits; I know it was in that vicinity. |
Here is a 103-bit example...
[code:1]16322953 103 F 7327094536555432498123426901521 21-Aug-02 02:04 berg C644F9511[/code:1] |
[quote="toferc"][quote="NickGlover"]
Yes, this error applies for TF. The error is that the lower limit for a factor to be considered N+1 bits is sqrt(2)*2^N instead of just 2^N. For example: 2^64 =(approx.) 1.84e19 Any factor less than: sqrt(2)*2^64 = 1.41 * 1.84e19 = 2.61e19 will still be listed as 64 bits. I bet if you searched through the Primenet logs, you would find that the largest "64-bit" factor found is just less than 2.61e19 and the smallest "65-bit" factor is just greater than 2.61e19. [/quote] To me this suggests that the number reported for a factor f is round(log(f)). Even more interesting is a factor currently under discussion in the Team Prime Rib Perpetual Thread (part 3) in the forums at ars Technica: The factor 48091790614964383575080990595271931709835968599049593 of M16486579 was recently found. It was reported as: [code:1] 16486579 F 102 4809179061496438357508099059527 09-Sep-2002 18:38 horus2_02 horus2[/code:1] which as you can see has been rather truncated. Now, log(truncated version) is approx. 101.9 (supporting evidence for the round(log(f)) theory, BTW), but log(real number) is about 175![/quote] Actually, we're both correct. Log2( sqrt(2) ) = 0.5, so for sqrt(2)*2^64, Log2( sqrt(2)*2^64 ) = 64.5, showing that my lower limit of sqrt(2)*2^64 for "65-bits" is the same as just taking the Log2( ) of the number and rounding. I assume the error is actually caused by taking the Log2( ) of the number and rounding. Oh, and binarydigits was probably noting the same Log2( ) rounding, I just didn't realize it was equivalent to my sqrt(2) error at the time of my previous posting. |
[quote="NickGlover"]
Actually, we're both correct. [/quote] I wasn't trying to imply that anybody was wrong, just providing a plausible explanation for the observations that you and binarydigits were making. [quote]Log2( sqrt(2) ) = 0.5, so for sqrt(2)*2^64, Log2( sqrt(2)*2^64 ) = 64.5, showing that my lower limit of sqrt(2)*2^64 for "65-bits" is the same as just taking the Log2( ) of the number and rounding. I assume the error is actually caused by taking the Log2( ) of the number and rounding.[/quote] I probably should have said that instead; it's more or less exactly what I meant. It was the fact that you mentioned the sqrt(2) factor which set off a light bulb. I thought maybe not everyone would notice this relationship and the connection with rounding, so I provided some reasonable-looking pseudocode in an attempt to make it clearer what was happening and (possibly) why. I guess I wasn't clear enough. |
[quote="NickGlover"] Log2( sqrt(2) ) = 0.5, so for sqrt(2)*2^64, Log2( sqrt(2)*2^64 ) = 64.5, showing that my lower limit of sqrt(2)*2^64 for "65-bits" is the same as just taking the Log2( ) of the number and rounding. I assume the error is actually caused by taking the Log2( ) of the number and rounding.
Oh, and binarydigits was probably noting the same Log2( ) rounding, I just didn't realize it was equivalent to my sqrt(2) error at the time of my previous posting.[/quote] Yes, all three of us noticed the same thing but described it differently - rounding #bits, rounding log2(), and the sqrt(2) factor. Actually my description made the least sense; you can't round the #bits, as it is always an integer. The point is that it is necessary to use the proper #factor bits when analyzing the data. Otherwise, when someone thought they were counting factors from 65 to 66 bits, they would actually be counting [65.5 <= log2(factor) <= 66.5]. This is a situation where rounding is just plain wrong. If log2(factor) is approximately equal to 65.00001, then it is a 66-bit factor. For that matter, 2^65 itself [log2() = exactly 65] is a 66-bit number: a 1 followed by 65 zeros. To get the #bits, truncate log2() and add 1 - do NOT round! |
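The three equivalent descriptions are easy to make concrete. A small Python sketch (function names are mine): the server's apparent rule rounds log2, which misclassifies everything between 2^n and sqrt(2)*2^n, while the correct rule truncates and adds 1 - which is exactly what Python's int.bit_length() computes. Tried on the disputed factor from earlier in the thread:

```python
import math

def reported_bits(f):
    # what the server appears to do: round log2(f) to the nearest integer
    return round(math.log2(f))

def actual_bits(f):
    # correct rule: truncate log2(f) and add 1 (int.bit_length does this)
    return f.bit_length()

f = 40498974660725214481      # the "65-bit" factor discussed above
print(reported_bits(f))       # 65 (the server's answer)
print(actual_bits(f))         # 66 (the true bit count)
```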
[quote="binarydigits"]
This is a situation where rounding is just plain wrong. If log2(factor) is approximately equal to 65.00001, then it is a 66-bit factor. For that matter, 2^65 itself [log2()= exactly 65] is a 66-bit number: a 1 followed by 65 zeros. To get the #bits, truncate log2() and add 1 - do NOT round![/quote] While we're on this topic, I noticed a discrepancy with the credit I received when I turned in a factor between 2^64 and 2^64.5. If anyone else turns in a factor this size, could you tell me the exponent, the factor, and the amount of credit you received (using a private message or e-mail)? The way to get the amount of credit you received is to watch your account and see how much your factoring credit goes up when you turn in the factor (which is kind of hard since you never know when you're going to find one). It is helpful to use [url=http://www.teamprimerib.com/db1/p90.exe]TPR's factoring credit calculator[/url] to subtract off the credit for other factoring assignments you've turned in since the last time you looked at your factoring credit. However, don't use this to calculate the credit for the factor between 2^64 and 2^64.5, because I noticed the discrepancy when the calculator and the actual credit I received didn't seem to match. I only saw this happen once, so I'm a bit uncertain whether I made some error or maybe it was a fluke. Hopefully, someone else can help me refute or confirm whether there is any discrepancy, so my perverse curiosity in knowing every detail about how the Primenet reports work can be satisfied. |
[quote="binarydigits"]The point is that it is necessary to use the proper #factor bits in analyzing the data. Otherwise, when someone thought they were counting factors from 65 to 66 bits, they would actually be counting [65.5 <= log2(factor) <= 66.5].[/quote]
FWIW, the ranges shown in my earlier post in this thread are correct. I calculated log2 of each factor in my results file, and paid no attention to how they ultimately showed up in the factors database. But if I do get around to more extensive trolling of the database, this is a useful tip. Thanks! |
[quote="binarydigits"]The factor was 40498974660725214481 which is more than 9% higher than 2^66.[/quote]
Not quite :shock: : 2^66 is 7.4e19. The factor mentioned here is 4.0e19, or 65 bits. I just noticed this when checking whether my implementation of bits_in_factor := Round_up (log2 (factor)) works as it should. FYI, here is that implementation; it's in MS-SQL: update factor_yes set bits_in_factor=cast((0.5+log(factor)/log(2)) as int) What surprised me is that this implementation also works fine for long factors, for example M727's, which is 98 decimal digits or 323 bits. (Geez, who found that one :) !?) MS-SQL has no numeric datatype that is supposed to handle numbers that big, so instead I just tried using the general string datatype "varchar" for factor. Whoops, it works! :D :idea: |
I think you have just implemented round(), not round_up(). When casting a floating point to an int, MS-SQL truncates. You need to add 1, not 0.5.
This does illustrate how the original mistake could have been made; it's an easy one! The reason your code works with huge numbers is that they are implicitly cast to a floating-point representation before the log operation. I checked both of these things out on MS SQL Server 7 and 2000. Although it is true that 2^65 < 4.0e19 < 2^66, numbers in this range are 66-bit numbers. |
[quote="toferc"]You need to add 1, not 0.5.[/quote]Dang! :oops: You're right. Thought I triple-checked it on a bunch of numbers, but I must have been blind. With 1 it works. Thanks for spotting it!
[quote="toferc"]The reason your code is working with huge numbers is that they are implicitly cast to a floating-point representation before the log operation. [/quote]Ah! I tested it now. Yup, it fails after 308 decimal digits, so this makes sense. |
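The ~308-digit ceiling matches the IEEE double-precision range that the implicit float cast runs into. A quick illustration in Python (standing in for the SQL engine here; the float behavior is analogous): exact bit counting with pure integer arithmetic has no such limit.

```python
n = 10 ** 400   # 401 decimal digits, well past the ~308-digit float limit

# an explicit conversion to a double overflows, just as the SQL cast does
try:
    float(n)
except OverflowError:
    print("too big for floating point")

# bit_length() is exactly trunc(log2(n)) + 1, the correct rule,
# computed on the integer itself, so it works at any size
print(n.bit_length())   # 1329
```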
[quote="svempasnake"][quote="binarydigits"]The factor was 40498974660725214481 which is more than 9% higher than 2^66.[/quote]
Not quiet :shock: : 2^66 is 7.4e19. The factor mentioned here is 4.0e19 or 65 bits.[/quote] Oops, I calculated it correctly but worded my message wrong. The factor is 9% higher than 2^65, which still makes it a 66-bit factor. Sorry for the confusion. |
Re: Does the LL test's factorization save or waste CPU time?
[quote="dswanson"]...For this data set, George's formula looks pretty darn good. :D But there are some trends in the results that make me suspicious of generalizing the conclusion. In particular, the percentage of numbers factored within a range decreases as the M-number increases. But George's formula predicts that this percentage should be a constant. I suspect that if I had a similar data set in the vicinity of say 30-33M that George's formula would look much worse, at least in the 2^52 to 2^57 range.
If I were really ambitious I'm sure I could piece together a much more thorough analysis by combining the data in the factors and nofactors files. Are there any enterprising souls out there who'd like to take on this challenge...?[/quote] OK, here is part of what I can get out of it:

Probability that TF up to n bits will find a factor, under the assumption TF is already done up to n-1 bits. Based on historical data from factors & nofactors, Sept. 08, 2002:[code:1]
Fact   Predicted |           Exponent range
bits n    1/n    | 00M-79M   10M-15M  15M-20M  30M-35M
                 | ( =ALL)
  50     2.00%   |  2.05%     1.99%    1.98%    2.03%
  51     1.96%   |  2.01%     1.97%    1.95%    1.93%
  52     1.92%   |  1.97%     1.90%    1.96%    1.96%
  53     1.89%   |  1.89%     1.89%    1.90%    1.88%
  54     1.85%   |  1.85%     1.80%    1.81%    1.86%
  55     1.82%   |  1.80%     1.81%    1.78%    1.84%
  56     1.79%   |  1.79%     1.77%    1.73%    1.76%
[/code:1] Looking at this, George's 1/n formula seems to work very well. I guess the standard deviations are normal for the actual sample sizes. For reference, there are around 2800 factors behind the percentages in the 5M-wide samples, and around 44000 per figure in the "ALL" column. I chose to show only up to 56 bits, because then I feel pretty safe that we are comparing apples to apples rather than anything incomparable: nofactors states that every exponent up to 79M is trial factored to at least 56 bits. An assumption I made about the factors table is that when a factor is shown, it is supposed to be the smallest existing one. Does anyone know if that's right? |
Re: Does the LL test's factorization save or waste CPU time?
[quote="svempasnake"]An assumtion I made about the factors table, is that when a factor is shown, it is supposed to be the smallest existing one. Anyone knowing if that's right?[/quote]
There may be isolated cases where that is not the case, but rare enough to not affect your excellent analysis. |
Hey Svempasnake, great work! :D Looks like George is fully exonerated. The trend I saw must have been a statistical fluke due to the limited size of my dataset.
I'm not convinced that it was necessary to exclude all numbers above 56 bits. As long as at each level you work only with numbers that have either had a factor found at that level or that have been trial factored through that level without finding a factor, the results for that level should be valid. The only problem is that the sample size gets smaller and smaller as you reach levels that have been less than fully trial-factored over the entire 0-79M range, making the results less statistically significant at each subsequent level.

On second thought, P-1 factoring is also going to corrupt the results at higher levels. If a number has been trial factored to n bits but subsequent P-1 factoring finds a factor at n+3 bits, this new factor will be placed in the factors database with no indication that the range from n+1 to n+3 hasn't been trial factored. So I guess I've just talked myself back around to your assumption that the number in the factors table is indeed the smallest factor. And as more and more factors are found via P-1 factoring, the validity of this method of data-mining the factors database will become worse and worse.

George, is a record kept anywhere of which factors were found using trial factoring versus using P-1? |
[quote="dswanson"]is a record kept anywhere of which factors were found using trial factoring versus using P-1?[/quote]
Nope |
Are only TFs counted in nofactors?
[quote="Prime95"][quote="dswanson"]is a record kept anywhere of which factors were found using trial factoring versus using P-1?[/quote]
Nope[/quote]On the other hand, I believe we will know this anyway, most of the time! Even better, if nofactors is updated the way I hope and believe, the cases where we can't see whether a factor was found with TF or P-1 should be very rare. Let's have a look at nofactors, which can help us indirectly:[code:1]
All exponents 8.25M - 13.38M in nofactors (TF goes up to 64 bits)
Factored to bits    Number of exponents
       59                     13
       60                     15
       61                      7
       62                      9
       63                   6472
       64                 113487

All exponents 13.38M - 17.85M in nofactors (TF goes up to 65 bits)
Factored to bits    Number of exponents
       57                     21
       58                     59
       59                    149
       60                     56
       61                     30
       62                      6
       64                     49
       65                 102436
[/code:1]It looks to me as if nofactors purely consists of TF results, and P-1 results aren't recorded here. Does anyone know if that's how it works :question: If that's the case, we can almost always tell whether a factor was found by TF or P-1, especially in exponent ranges where TF is as 99%-complete as above. Take binarydigits' 4.0e19 factor as an example. First we count its bits carefully ;) and see it's a 66-bit factor. Since we know that TF ends at 65 bits for 16M exponents, this factor must have been found through P-1 and not through TF. Another example: a 17.0M exponent has a 65-bit factor. Assuming TF went up to 65 bits, this factor was found through TF and not through P-1. Third example: a 12M exponent has a 64-bit factor. Assuming TF went to 64 bits, this factor was also found through TF. A table like the one above should help us know how good or bad such assumptions are. [quote="dswanson"]I'm not convinced that it was necessary to exclude all numbers above 56 bits.[/quote]Yeah, I now agree my 56-bit limitation was too conservative. I will make another post so you can see some more results; I will probably begin a new topic in the math group in that case. But first it would be nice to know for sure whether or not P-1 updates nofactors. |
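The decision rule in these examples can be written down in a few lines. A sketch (the names are mine; it assumes, as discussed, that nofactors records trial-factoring depth only):

```python
def probable_method(factor, tf_limit_bits):
    # a factor strictly above the TF limit cannot have come from TF
    if factor.bit_length() > tf_limit_bits:
        return "P-1"
    # otherwise TF would have reached it, but P-1 may still have found it first
    return "TF or P-1"

# binarydigits' 66-bit factor of a 16M exponent, where TF stopped at 65 bits:
print(probable_method(40498974660725214481, 65))   # P-1
```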
[quote="toferc"]Even more interesting is a factor currently under discussion in the Team Prime Rib Perpetual Thread (part 3) in the forums at ars Technica:
The factor 48091790614964383575080990595271931709835968599049593 of M16486579 was recently found. It was reported as: [code:1] 16486579 F 102 4809179061496438357508099059527 09-Sep-2002 18:38 horus2_02 horus2[/code:1] which as you can see has been rather truncated. [/quote]Sorry, but I think I have to add some confusion here: I was curious to see whether the original or the truncated one gets into factors.zip. To my surprise, neither of these is there. Instead, a smaller factor, 13329478389458153107343, is listed! The only good explanation I can think of is that this smaller factor was found later. It would be fun if someone could check whether this smaller factor actually divides M16486579. |
[quote="svempasnake"][quote="toferc"]Even more interesting is a factor currently under discussion in the Team Prime Rib Perpetual Thread (part 3) in the forums at ars Technica:
The factor 48091790614964383575080990595271931709835968599049593 of M16486579 was recently found. It was reported as: [...] [/quote]Sorry, but I think I have to add some confusion here: I was curious to see whether the original or the truncated one gets into factors.zip. To my surprise, neither of these is there. Instead, a smaller factor, 13329478389458153107343, is listed! The only good explanation I can think of is that this smaller factor was found later. It would be fun if someone could check whether this smaller factor actually divides M16486579.[/quote] The large factor is composite. 48091790614964383575080990595271931709835968599049593 = 13329478389458153107343 * 3607927422951418869297495045751 I, too, checked factors.zip to see if the composite factor made it there, but the smaller of the two prime factors is correctly listed. A p53 by P-1 would just have been too nice... Alex |
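The split Alex reports is easy to verify with arbitrary-precision integer arithmetic; a q divides 2^p - 1 exactly when 2^p mod q = 1. A quick check using the numbers from this thread:

```python
p     = 16486579
big   = 48091790614964383575080990595271931709835968599049593
small = 13329478389458153107343

assert big % small == 0            # the composite splits as claimed
print(big // small)                # 3607927422951418869297495045751

assert pow(2, p, small) == 1       # the factors.zip entry divides M16486579
assert pow(2, p, big) == 1         # so does the composite that P-1 found
```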
[quote="svempasnake"][quote="toferc"]Even more interesting is a factor currently under discussion in the Team Prime Rib Perpetual Thread (part 3) in the forums at ars Technica:
The factor 48091790614964383575080990595271931709835968599049593 of M16486579 was recently found. It was reported as: [code:1] 16486579 F 102 4809179061496438357508099059527 09-Sep-2002 18:38 horus2_02 horus2[/code:1] which as you can see has been rather truncated. [/quote]Sorry, but I think I have to add some confusion here: I was curious to see whether the original or the truncated one gets into factors.zip. To my surprise, neither of these is there. Instead, a smaller factor, 13329478389458153107343, is listed! The only good explanation I can think of is that this smaller factor was found later. It would be fun if someone could check whether this smaller factor actually divides M16486579.[/quote] In some cases P-1 factoring will find factors that are not prime, so my guess is that this one was factored again and that the smaller factor was found. |
Re: Are only TFs counted in nofactors?
[quote="svempasnake"]
Let's have a look at nofactors, which can help us indirectly:[code:1]
All exponents 8.25M - 13.38M in nofactors (TF goes up to 64 bits)
Factored to bits    Number of exponents
       59                     13
       60                     15
       61                      7
       62                      9
       63                   6472
       64                 113487

All exponents 13.38M - 17.85M in nofactors (TF goes up to 65 bits)
Factored to bits    Number of exponents
       57                     21
       58                     59
       59                    149
       60                     56
       61                     30
       62                      6
       64                     49
       65                 102436
[/code:1][/quote] Ha! It might actually be worth it to do the factoring for at least the exponents which have been factored to 57-63 bits only. The smaller bit levels should go through pretty fast, I imagine. I looked in the status.txt file and most of these exponents have been tested once and are not yet in range to be double-checked. I wonder if the prime95 client reported incorrectly or if the folks who did the first test turned off trial factoring. Maybe George can tell us:[code:1]
9995243,60   9996509,60   9996673,60   9996703,60   9996859,60
12026057,60  12195593,60  12376337,60  12376363,60  12722923,60
12750373,60  13019759,60  12718319     12771757     12808949
12821153     12837259     12862439     12865469     12867941
12872239     12876709     12885793     12904979     13182061
[/code:1] I am wondering if it's worth getting a box to dedicate to factoring these exponents. In the 13.38M-17.85M range several of the exponents are still in the process of being factored, so touching them at this time is not prudent, but I am definitely going to try to factor the exponents listed above unless anyone can give me a reason not to.

Garo |
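For anyone curious what that factoring work involves, here is a minimal, deliberately unoptimized sketch of trial factoring (nothing like Prime95's sieved code; the function name is mine). It uses the fact that any factor of 2^p - 1 must have the form q = 2kp + 1 with q = 1 or 7 (mod 8), so only those candidates need a powmod test:

```python
def trial_factor(p, max_bits):
    # try candidates q = 2*k*p + 1 up to the given bit depth
    k = 1
    while True:
        q = 2 * k * p + 1
        if q.bit_length() > max_bits:
            return None                  # no factor found up to max_bits
        # factors of 2^p - 1 are 1 or 7 mod 8; q divides iff 2^p mod q == 1
        if q % 8 in (1, 7) and pow(2, p, q) == 1:
            return q
        k += 1

print(trial_factor(11, 10))   # 23, since M11 = 23 * 89
print(trial_factor(29, 16))   # 233, the smallest factor of M29
```

The real clients sieve the candidates and batch the powmods, but the candidate form 2kp+1 is why trial factoring a single exponent to a modest bit depth is cheap.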
Also, svempasnake, there is a pminus1.zip file which keeps track of how much P-1 factoring has been done. The nofactor.zip only tells you how much TF has been done. And the factors.zip keeps track of the smallest factor found, whether it was through TF, or P-1, or by factoring a composite factor found in P-1.
But I think it is reasonable to assume that if the bits of a factor are significantly above the TF limit for that exponent, then that factor was found through P-1. The converse is not true, i.e. if the factor's bit count is below the TF range it could still have been found by P-1. Garo |
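Garo's rule of thumb can be written down directly. A tiny sketch, with the function name and return labels my own, using the 74-bit factor from this thread as the example:

```python
def likely_source(factor, tf_bits):
    """Guess whether a known factor came from TF or P-1.

    A factor strictly above the TF limit must have come from P-1; one at
    or below the limit is ambiguous, since P-1 also finds small factors.
    """
    if factor.bit_length() > tf_bits:
        return "P-1"
    return "TF or P-1"

# The 74-bit factor of M16486579 from this thread, with TF done to 65 bits:
print(likely_source(13329478389458153107343, 65))  # → P-1
```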
Those exponents only factored to 60 bits were probably LL tested by a Mac or Unix client. These programs do not include a trial factoring step before the LL test.
|
[quote="garo"]The nofactor.zip only tells you how much TF has been done. [/quote]Thank you for confirming this! It made me feel comfortable posting an extended table/diagram. It's in a new topic in the math group, as this thread is becoming too long now, IMHO. Also, the original question has been answered already.[quote]The converse is not true, i.e. if the factor bits is below the TF range it could still have been found by P-1.
Garo[/quote]Yes, and my idea was to estimate how often that is the case. I looked into nofactors to do this. Sorry if that wasn't clear. |
[quote="garo"]The converse is not true, ie. if the factor bits is below the TF range it could still have been found by P-1.[/quote]
That is correct, but it doesn't matter. If a factor below the TF upper bit range is found by P-1, then it could have (or would have or should have) been found (eventually) by TF. It will not affect the statistics at all. |
[quote="binarydigits"]
That is correct, but it doesn't matter. If a factor below the TF upper bit range is found by P-1, then it could have (or would have or should have) been found (eventually) by TF. It will not affect the statistics at all.[/quote] ... unless there was a yet smaller factor that would have been found by TF, had the P-1 not found the higher factor first. |
[code:1]
All exponents 8.25M - 13.38M in nofactors (TF goes up to 64 bits)
Factored to bits / Number of exponents
59  13
60  15
61   7
62   9
[/code:1] So I am going ahead and factoring the 40-odd exponents mentioned above that are not currently assigned, in the hope that finding a factor will save a double-check. However, since one LL test has already been performed in all these cases, I think I will only factor to 63 bits, as it is probably not worth going all the way to 2^64. Fingers crossed now :-) I'll be reporting the results through the PrimeNet manual testing pages, since that way the PrimeNet server should not bork the factoring bits in its state - you never know - and the results will get to George as well. |
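The 63-vs-64-bit trade-off can be estimated with the usual GIMPS heuristic that a factor lies between 2^(b-1) and 2^b with probability roughly 1/b. A sketch of the arithmetic; the heuristic is approximate and the function is mine, not part of any GIMPS tool:

```python
def chance_of_factor(from_bits, to_bits):
    """Approximate chance of finding a factor when TF'ing from 2^from_bits
    up to 2^to_bits, using the ~1/b-per-bit-level GIMPS heuristic."""
    return sum(1.0 / b for b in range(from_bits + 1, to_bits + 1))

# Exponents already factored to 60 bits, one LL test already done:
p61_63 = chance_of_factor(60, 63)  # stop at 63 bits
p61_64 = chance_of_factor(60, 64)  # the extra bit level roughly doubles the TF time
print(f"chance in 61-63: {p61_63:.3f}, chance in 61-64: {p61_64:.3f}")
# → chance in 61-63: 0.048, chance in 61-64: 0.064
```

So stopping at 63 bits keeps about three quarters of the remaining factor-finding chance for about half the work, which is why it looks attractive when only a double-check is left to save.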
[quote="dswanson"]... unless there was a yet smaller factor that would have been found by TF, had the P-1 not found the higher factor first.[/quote]
You are, of course, correct. I was only thinking along the lines of it not affecting the stats on the bit level where it found the factor, and not on the lower one. Mea culpa. That reminds me of something else: older versions of the client (up through v18 IIRC) TF'd each of 16 threads all the way through the bit range and could (and sometimes did) find multiple factors. If it found a factor in one thread, it then tried the remaining threads up to that point to find a lower one. I remember this happening to me on at least one occasion, but I believe that although it reported both factors to the server, only the lower one showed up on my account report. That was years ago and back then I never checked to see whether they both showed up in George's database. I believe that the odds of finding a factor in a particular bit range are independent of whether or not there is a factor below that range. But that does not contradict your point that if enough smaller factors are "missed" then the statistics for that lower range will be slightly skewed. |
[code:1]
9995243,60
9996509,60
9996673,60
9996703,60
9996859,60
12026057,60
12195593,60
12376337,60
12376363,60
12722923,60
12750373,60
13019759,60
[/code:1] Looks like someone has started crunching this list of exponents. Umm, that's kind of not on. If you wanted to help out with this, you could have asked me and we would have divided the factoring work. Instead we are duplicating work. Please either post here if you wish to divide the work sensibly or email me at annie@teamprimerib.com Thanks. |
toferc wrote (on Fri Sep 13, 2002 5:13 am, in the GIMPS forum The Software
-> Does the LL test:s factorization save or waste CPU time?): [code:1] >Even more interesting is a factor currently under discussion in the >Team Prime Rib Perpetual Thread (part 3) in the forums at Ars Technica: > >The factor 48091790614964383575080990595271931709835968599049593 of M16486579 was recently found. [/code:1] This 53-digit factor, if indeed a proper factor, would be among the top 5 factors ever found by non-NFS methods (ECM-found factors hold all the top spots in this list). But it's not a proper factor: it's easy to show that it's composite, and only a bit more difficult to show that it's the product of two smaller prime factors: 13329478389458153107343 * 3607927422951418869297495045751. The smaller factor (call it p) is 74 bits, so it wouldn't have been found in the sieving step that precedes the P-1 stage. It has p-1 = 2 · 293 · 887 · 3089 · 503551 · 16486579; the larger (call it q) has q-1 = 2 · 3 · 5^3 · 59 · 83 · 293 · 1571 · 136373 · 949213 · 16486579. Both of these are very smooth, so it's not surprising that a P-1 run found them both. However, I was under the impression that the program checked whether found factors are composite, and should have flagged this one as composite. George? -Ernst |
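Ernst's arithmetic is easy to check with Python's arbitrary-precision integers. A sketch verifying the product, the divisibility of M16486579 (f divides 2^e - 1 exactly when 2^e ≡ 1 mod f), and both smooth factorizations, using only the numbers quoted in his post:

```python
from math import prod

e = 16486579  # the Mersenne exponent
big = 48091790614964383575080990595271931709835968599049593
p = 13329478389458153107343
q = 3607927422951418869297495045751

# The reported 53-digit "factor" is composite: the product of two primes.
assert p * q == big

# It does divide the Mersenne number (f | 2^e - 1 iff 2^e ≡ 1 mod f),
# and therefore so does each of its prime factors.
assert pow(2, e, big) == 1
assert pow(2, e, p) == 1
assert pow(2, e, q) == 1

# Both p-1 and q-1 are smooth and divisible by 2e (any factor of M_e is
# ≡ 1 mod 2e), which is why a single P-1 run picked up both primes at once.
assert p - 1 == prod([2, 293, 887, 3089, 503551, e])
assert q - 1 == prod([2, 3, 5**3, 59, 83, 293, 1571, 136373, 949213, e])

print("all of Ernst's claims check out")
```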
Prime95 makes sure the factor divides the Mersenne number, but does not check that the factor is proper. I do that check later.
|