mersenneforum.org > Great Internet Mersenne Prime Search > Software
2002-09-13, 03:43   #12
NickGlover

Quote:
Originally Posted by binarydigits
Before taking on any statistical analysis of the factored data, you should be aware that the number of bits in the found factors database is not always correct. I complained about this a couple of years ago but it did no good. And I just came across another obvious example: A recently completed P-1 test reported a "65-bit" factor of a 16M exponent. I thought that was strange as the exponent had already been TF'd to 66 bits. The factor was 40498974660725214481 which is more than 9% higher than 2^66. It appears that it is "rounding" the results, but this is the equivalent of saying that the decimal number 109 is a 2-digit number because it is only 9% higher than 10^2.

At any rate, if one were to consider this factor when counting the number of found factors between 2^65 and 2^66 then one would be wrong. I realize this one was found by P-1 and not by TF, but I am pretty sure it does the same thing with TF factors found, because back when I complained about it we were not doing P-1 tests.
Yes, this error applies to TF. The error is that the lower limit for a factor to be considered N+1 bits is sqrt(2)*2^N instead of just 2^N. For example:

2^64 =(approx.) 1.84e19

Any factor less than:

sqrt(2)*2^64 = 1.41 * 1.84e19 = 2.61e19

will still be listed as 64 bits. I bet if you searched through the Primenet logs, you would find that the largest "64-bit" factor found is just less than 2.61e19 and the smallest "65-bit" factor is just greater than 2.61e19.

I have no idea what causes this error; I just know that Primenet consistently acts as if the bit limits are a factor of sqrt(2) larger than they should be.
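The behavior described above is easy to reproduce. Here is a minimal Python sketch (my own reconstruction, not Primenet's actual code) contrasting the apparent round(log2) behavior with the true bit count, using binarydigits' factor:

```python
import math

def primenet_bits(f):
    # Apparent Primenet behavior: round log2(f) to the nearest integer.
    # Anything below sqrt(2)*2^N then gets reported as only N bits.
    return round(math.log2(f))

def true_bits(f):
    # Correct bit count: floor(log2(f)) + 1, computed exactly on the integer.
    return f.bit_length()

f = 40498974660725214481      # binarydigits' P-1 factor of a 16M exponent
print(primenet_bits(f))       # 65 -- the erroneous "65-bit" report
print(true_bits(f))           # 66 -- f actually lies between 2^65 and 2^66

print(math.sqrt(2) * 2**64)   # about 2.61e19, the misreporting boundary
```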

Note that credit for factors found is awarded based on the erroneous bit depth calculated by Primenet.
2002-09-13, 05:13   #13
toferc

Quote:
Originally Posted by NickGlover
Yes, this error applies to TF. The error is that the lower limit for a factor to be considered N+1 bits is sqrt(2)*2^N instead of just 2^N. For example:

2^64 =(approx.) 1.84e19

Any factor less than:

sqrt(2)*2^64 = 1.41 * 1.84e19 = 2.61e19

will still be listed as 64 bits. I bet if you searched through the Primenet logs, you would find that the largest "64-bit" factor found is just less than 2.61e19 and the smallest "65-bit" factor is just greater than 2.61e19.
To me this suggests that the number reported for a factor f is round(log2(f)).

Even more interesting is a factor currently under discussion in the Team Prime Rib Perpetual Thread (part 3) in the forums at ars Technica:

The factor 48091790614964383575080990595271931709835968599049593 of M16486579 was recently found. It was reported as:
[code:1]
16486579 F 102 4809179061496438357508099059527 09-Sep-2002 18:38 horus2_02 horus2[/code:1]
which as you can see has been rather truncated. Now, log2(truncated version) is approx. 101.9 (supporting evidence for the round(log2(f)) theory, BTW), but log2(real number) is about 175!
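Both logs are easy to verify (a quick Python check; the two numbers are copied from the post above):

```python
import math

full  = 48091790614964383575080990595271931709835968599049593  # the real factor
trunc = 4809179061496438357508099059527   # the truncated value Primenet reported

print(round(math.log2(trunc)))   # 102 -- matches the reported "F 102"
print(round(math.log2(full)))    # 175
print(full.bit_length())         # 176 -- the factor's true bit count
```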
2002-09-13, 10:25   #14
binarydigits

Quote:
Originally Posted by toferc
Now, log2(truncated version) is approx. 101.9 (supporting evidence for the round(log2(f)) theory, BTW), but log2(real number) is about 175!
IIRC, it has already been established that the Primenet server reports anything over 102 bits (or 101.5 bits as the case may be) as 102 bits; it can't handle anything larger than that, so it truncates. Or maybe it was 103 bits; I know it was in that vicinity.
2002-09-13, 11:07   #15
Xyzzy

Here is a 103-bit example...

[code:1]16322953 103 F 7327094536555432498123426901521 21-Aug-02 02:04 berg C644F9511[/code:1]
2002-09-13, 14:25   #16
NickGlover

Quote:
Originally Posted by toferc
To me this suggests that the number reported for a factor f is round(log2(f)).

Even more interesting is a factor currently under discussion in the Team Prime Rib Perpetual Thread (part 3) in the forums at ars Technica:

The factor 48091790614964383575080990595271931709835968599049593 of M16486579 was recently found. It was reported as:
[code:1]
16486579 F 102 4809179061496438357508099059527 09-Sep-2002 18:38 horus2_02 horus2[/code:1]
which as you can see has been rather truncated. Now, log2(truncated version) is approx. 101.9 (supporting evidence for the round(log2(f)) theory, BTW), but log2(real number) is about 175!
Actually, we're both correct.

Log2( sqrt(2) ) = 0.5, so for sqrt(2)*2^64, Log2( sqrt(2)*2^64 ) = 64.5, showing that my lower limit of sqrt(2)*2^64 for "65-bits" is the same as just taking the Log2( ) of the number and rounding. I assume the error is actually caused by taking the Log2( ) of the number and rounding.
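A quick numerical check of this equivalence (Python; just illustrating the claim, nothing here is Primenet code):

```python
import math

# The round(log2(f)) boundary between "64 bits" and "65 bits" is at
# log2(f) = 64.5, i.e. f = 2^64.5 = sqrt(2)*2^64.
print(math.log2(math.sqrt(2) * 2**64))   # 64.5 (up to float rounding)

# A value above sqrt(2)*2^64 rounds up, one below it rounds down:
print(round(math.log2(3 * 2**63)))   # 65: 1.5*2^64 > sqrt(2)*2^64
print(round(math.log2(5 * 2**62)))   # 64: 1.25*2^64 < sqrt(2)*2^64
```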

Oh, and binarydigits was probably noting the same Log2( ) rounding; I just didn't realize it was equivalent to my sqrt(2) error at the time of my previous posting.
2002-09-14, 02:25   #17
toferc

Quote:
Originally Posted by NickGlover
Actually, we're both correct.
I wasn't trying to imply that anybody was wrong, just providing a plausible explanation for the observations that you and binarydigits were making.

Quote:
Log2( sqrt(2) ) = 0.5, so for sqrt(2)*2^64, Log2( sqrt(2)*2^64 ) = 64.5, showing that my lower limit of sqrt(2)*2^64 for "65-bits" is the same as just taking the Log2( ) of the number and rounding. I assume the error is actually caused by taking the Log2( ) of the number and rounding.
I probably should have said that instead; it's more or less exactly what I meant.
It was the fact that you mentioned the sqrt(2) factor that set off a light bulb. I thought maybe not everyone would notice this relationship and the connection with rounding, so I provided some reasonable-looking pseudocode in an attempt to make it clearer what was happening and (possibly) why. I guess I wasn't clear enough.
2002-09-14, 02:46   #18
binarydigits

Quote:
Originally Posted by NickGlover
Log2( sqrt(2) ) = 0.5, so for sqrt(2)*2^64, Log2( sqrt(2)*2^64 ) = 64.5, showing that my lower limit of sqrt(2)*2^64 for "65-bits" is the same as just taking the Log2( ) of the number and rounding. I assume the error is actually caused by taking the Log2( ) of the number and rounding.

Oh, and binarydigits was probably noting the same Log2( ) rounding, I just didn't realize it was equivalent to my sqrt(2) error at the time of my previous posting.
Yes, all three of us noticed the same thing but described it differently: rounding the #bits, rounding log2(), and an extra factor of sqrt(2). Actually, my description made the least sense; you can't round the #bits, as it is always an integer.

The point is that it is necessary to use the proper #factor bits in analyzing the data. Otherwise, when someone thought they were counting factors from 65 to 66 bits, they would actually be counting [65.5 <= log2(factor) < 66.5].

This is a situation where rounding is just plain wrong. If log2(factor) is approximately equal to 65.00001, then it is a 66-bit factor. For that matter, 2^65 itself [log2()= exactly 65] is a 66-bit number: a 1 followed by 65 zeros. To get the #bits, truncate log2() and add 1 - do NOT round!
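binarydigits' truncate-and-add-1 rule, sketched in Python (the floating-point caveat in the comments is my own observation; Python's built-in int.bit_length() applies the same rule exactly):

```python
import math

def bits(f):
    # Truncate log2() and add 1 -- do NOT round.
    return math.floor(math.log2(f)) + 1

print(bits(2**65))       # 66: a 1 followed by 65 zeros
print(bits(2**65 + 1))   # 66

# Caveat: for huge values just below a power of two, floating-point log2
# can land exactly on the integer (e.g. math.log2(2**66 - 1) == 66.0),
# making the formula overshoot by one. int.bit_length() computes the same
# rule exactly, with no floats involved:
print((2**66 - 1).bit_length())   # 66
```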
2002-09-14, 04:19   #19
NickGlover

Quote:
Originally Posted by binarydigits
This is a situation where rounding is just plain wrong. If log2(factor) is approximately equal to 65.00001, then it is a 66-bit factor. For that matter, 2^65 itself [log2()= exactly 65] is a 66-bit number: a 1 followed by 65 zeros. To get the #bits, truncate log2() and add 1 - do NOT round!
While we're on this topic, I noticed a discrepancy in the credit I received when I turned in a factor between 2^64 and 2^64.5. If anyone else turns in a factor this size, could you tell me the exponent, the factor, and the amount of credit you received (via private message or e-mail)?

The way to get the amount of credit you received is to watch your account and see how much your factoring credit goes up when you turn in the factor (which is kind of hard, since you never know when you're going to find one). It is helpful to use TPR's factoring credit calculator to subtract off the credit for other factoring assignments you've turned in since the last time you looked at your factoring credit. However, don't use the calculator for the factor between 2^64 and 2^64.5 itself, because I noticed the discrepancy when the calculator and the actual credit I received didn't seem to match.

I only saw this happen once, so I'm a bit uncertain whether I made some error or it was a fluke. Hopefully someone else can help me confirm or refute the discrepancy, so that my perverse curiosity about every detail of how the Primenet reports work can be satisfied.
2002-09-14, 04:38   #20
dswanson

Quote:
Originally Posted by binarydigits
The point is that it is necessary to use the proper #factor bits in analyzing the data. Otherwise, when someone thought they were counting factors from 65 to 66 bits, they would actually be counting [65.5 <= log2(factor) <= 66.5].
FWIW, the ranges shown in my earlier post in this thread are correct. I calculated log2 of each factor in my results file, and paid no attention to how they ultimately showed up in the factors database. But if I do get around to more extensive trolling of the database, this is a useful tip. Thanks!
2002-09-14, 09:37   #21
svempasnake

Quote:
Originally Posted by binarydigits
The factor was 40498974660725214481 which is more than 9% higher than 2^66.
Not quite: 2^66 is 7.4e19. The factor mentioned here is 4.0e19, or 65 bits.

I just noticed this when checking whether my implementation of
bits_in_factor := Round_up (log2 (factor))
works as it should.

FYI, here is that implementation; it's in MS-SQL:
[code:1]update factor_yes
set bits_in_factor=cast((0.5+log(factor)/log(2)) as int)[/code:1]

What surprised me is that this implementation also works fine for long factors, for example M727's, which is 98 decimal digits or 323 bits. (Geez, who found that one? :) ) MS-SQL has no numeric datatype that is supposed to handle numbers that big, so instead I just tried using the general string datatype "varchar" for the factor. Whoops, it works! :D :idea:
2002-09-14, 13:29   #22
toferc

I think you have just implemented round(), not round_up(). When casting a floating-point value to an int, MS-SQL truncates, so you need to add 1, not 0.5.

This does illustrate how the original mistake could have been made; it's an easy one!

The reason your code works with huge numbers is that the varchar value is implicitly cast to a floating-point representation before the log operation.

I checked both of these things on MS SQL Server 7 and 2000.

Although it is true that 2^65 < 4.0e19 < 2^66, numbers in this range are 66-bit numbers.
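The cast behavior described above can be mimicked in Python, where int() likewise truncates toward zero for positive values (a sketch, not MS-SQL itself):

```python
import math

f = 40498974660725214481        # the 66-bit factor discussed earlier
x = math.log(f) / math.log(2)   # log2(f), about 65.13

# svempasnake's cast((0.5 + log2) as int) is truncation of x+0.5, i.e. round():
print(int(0.5 + x))   # 65 -- wrong, under-reports the bit count

# The fix -- add 1, not 0.5 -- is truncate-and-add-one:
print(int(x) + 1)     # 66 -- the true bit count
```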