mersenneforum.org

mersenneforum.org (https://www.mersenneforum.org/index.php)
-   mersenne.ca (https://www.mersenneforum.org/forumdisplay.php?f=173)
-   -   mersenne.ca (https://www.mersenneforum.org/showthread.php?t=23051)

lycorn 2019-07-23 11:46

Correct.
Linking directly to Status pages gives no errors. I recall this happening a while ago.
Thx for your answer.

James Heinrich 2019-07-23 12:20

Sorry, one of the included files didn't upload completely. Should be working now.

lycorn 2019-07-23 14:24

:tu: :bow:

LaurV 2019-07-24 08:14

Re: log for large numbers, you do not actually need a log, just the largest power of 2 that fits in the number, and for that a binary search (squaring, shifting) is very fast. However, for the log side, other people seem to have had this issue years ago; see [URL="https://www.php.net/manual/en/function.log.php"]this link[/URL] and look for the comment by the user called dingus, who implements a general log function (kinda slow, tho...)
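
As an illustration (my sketch, not code from the thread), the squaring/shifting idea LaurV describes can be written in a few lines of Python: double the candidate exponent until 2^e overshoots, then binary-search the exact exponent. The function name `floor_log2` is my own.

```python
# Sketch of LaurV's idea: find floor(log2(N)) for a big integer N without
# computing a real log, by repeated squaring (doubling the exponent) to
# bracket N, then binary search on the exponent.

def floor_log2(n):
    """Largest e such that 2**e <= n, via squaring then binary search."""
    assert n >= 1
    # Phase 1: double e (i.e. square 2**e) until 2**e exceeds n.
    e = 1
    while (1 << e) <= n:
        e <<= 1
    # Phase 2: binary search the exact exponent; invariant: 2**lo <= n < 2**hi.
    lo, hi = e >> 1, e
    while hi - lo > 1:
        mid = (lo + hi) // 2
        if (1 << mid) <= n:
            lo = mid
        else:
            hi = mid
    return lo

n = 2**9901 - 1
print(floor_log2(n))    # 9900
print(n.bit_length())   # 9901 (Python's built-in: floor_log2 + 1)
```

This costs only O(log log N) squarings plus O(log log N) comparison steps, with no floating point at all.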


OTOH, this is the same "bug" we were discussing before, where I said that every n-bit number for which we know prime (or PRP) factors of at least a total length of x bits is fully factored, and we should not see any intermediary percents - what you need is neither log nor powers of two, just addition and compare... :razz:

James Heinrich 2019-07-24 14:26

[QUOTE=LaurV;522193]what you need is neither log nor powers of two, just addition and compare... :razz:[/QUOTE]Perhaps explain to me again how I might come up with exactly 100%. Let's use [url=https://www.mersenne.ca/exponent/9901]M9901[/url] as a small example, with known factors of[code]87770464009
4512717821471308759
8336998551279784091551
1017688752041649660766793
25146117302614435382787771401
1502440689076527620360606617623599[/code]and a [url=https://www.mersenne.ca/prp.php?cofactor=9901]P2844[/url] cofactor. What is your proposed approach?
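
As a quick sanity check (mine, not part of the thread), Python can confirm that the factors listed above really divide M9901 and show how little of its 9901 bits they account for, which is why the size of the P2844 cofactor dominates the percentage:

```python
from math import log2

# The six known factors of M9901 = 2**9901 - 1, as listed above.
factors = [
    87770464009,
    4512717821471308759,
    8336998551279784091551,
    1017688752041649660766793,
    25146117302614435382787771401,
    1502440689076527620360606617623599,
]

m = 2**9901 - 1
for f in factors:
    assert m % f == 0  # each listed factor really divides M9901

known_bits = sum(log2(f) for f in factors)
print(f"known factors cover {known_bits:.2f} of 9901 bits "
      f"({100 * known_bits / 9901:.4f}%)")
```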

hansl 2019-07-24 17:19

I don't know exactly what LaurV is intending either, but I was thinking more about my previous suggestion and realized it doesn't require a loop, at least.

So, again, assuming you have the numbers in the form of decimal digits in a VARCHAR (I don't think MySQL has any other way to store big numbers?).
I don't have MySQL installed to test, and haven't done SQL stuff in many years, but I think something like this should work:
[code]
LOG2(CAST(LEFT(bigDecimalStr, 100) AS DOUBLE)) + (GREATEST(0, CHAR_LENGTH(bigDecimalStr) - 100) * 3.3219280948873623478703194294893901759)
[/code]

The 3.3219... constant is just log(10)/log(2), to convert the remaining length of the string (already a base-10 log, basically) to a base-2 log. This should work because of the property that log(X*Y) = log(X) + log(Y).

I chose somewhat arbitrarily to use the 100 most significant decimal digits for the calculation, as this should be within the range of conversion to double, but also more than enough precision for the log operation (double precision can hold up to roughly 10^308, so definitely don't use more than 308).
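
The SQL expression above can be mocked up in Python to check the maths (my sketch; I have not run it against MySQL itself, and the function name is mine):

```python
from math import log2

LOG2_10 = 3.321928094887362  # log(10)/log(2)

def log2_from_decimal_string(s, msd=100):
    """Approximate log2 of a big number given as a decimal string,
    using only its `msd` most significant digits plus the string length,
    mirroring LOG2(CAST(LEFT(...))) + GREATEST(...) * log2(10) in SQL."""
    head = int(s[:msd])                 # fits in a double for msd <= 308
    tail_digits = max(0, len(s) - msd)  # GREATEST(0, CHAR_LENGTH(...) - msd)
    return log2(head) + tail_digits * LOG2_10

# Sanity check against an exactly known value: a 9901-bit number.
n = 2**9901 - 1
approx = log2_from_decimal_string(str(n))
print(approx)  # ~9901, the true log2 of a 9901-bit Mersenne number
```

The truncation to 100 digits introduces a relative error of only ~10^-100 in the value itself, so the result is accurate to well beyond double precision.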

axn 2019-07-24 17:25

[QUOTE=hansl;522218]I chose somewhat arbitrarily to use the 100 most significant decimal digits for the calculation as this should be within range of conversion to double, but also more than enough precision for the log operation (Double precision can hold up to roughly 10^308, so definitely don't do more than 308).[/QUOTE]
Double precision has 16 significant digits of precision, so you don't need more than 20 MSD.

EDIT:- In fact, 17 should be fine.
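
axn's point is easy to verify (my check, not from the thread): truncating to 17 leading digits instead of 100 changes the result only at the level of double-precision rounding noise.

```python
from math import log2

LOG2_10 = 3.321928094887362  # log(10)/log(2)

def log2_from_msd(s, msd):
    # log2 of the leading digits, plus log2(10) per remaining digit
    return log2(int(s[:msd])) + max(0, len(s) - msd) * LOG2_10

s = str(2**9901 - 1)
diff = abs(log2_from_msd(s, 17) - log2_from_msd(s, 100))
print(diff)  # negligible: far below any bit-size decision threshold
```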

LaurV 2019-07-24 17:36

What I mean is this: you already have the "bit size" of every factor. Assume you add them all up, but you do not end up with (in our case) 9901 bits, only 9890 bits. What would that mean? And why is it not possible? And why, therefore, can you decide in this case that what you have is 100%?

The calculation you do is something like this (ignore the last approximation :razz:):

[CODE]gp:> ff=[87770464009, 4512717821471308759, 8336998551279784091551, 1017688752041649660766793, 25146117302614435382787771401, 1502440689076527620360606617623599]
%1 = [87770464009, 4512717821471308759, 8336998551279784091551, 1017688752041649660766793, 25146117302614435382787771401, 1502440689076527620360606617623599]
gp:> cf=(1<<9901-1)/ff[1]/ff[2]/ff[3]/ff[4]/ff[5]/ff[6];
gp:> lcf=log(cf)/log(2)
%2 = 9445.5514350012479763494078998597955086
gp:> s=0; for(i=1,#ff,s+=log(ff[i])); s
%3 = 315.69288871895795636456177702937797902
gp:> s/=log(2)
%4 = 455.44856499875202365059210014020449142
gp:> s+lcf
%5 = 9901.0000000000000000000000000000000000
gp:> (s+lcf)*100/9901
%6 = 99.999999999999999999999999999999999999
gp:>[/CODE]Now, you have all the other log values (i.e. the bit sizes of all the factors), except the last one, which you say you cannot compute directly due to PHP limitations or whatever. My question was just: why do you need it so accurately? Why would an error of 10 or 20 or even more bits not matter (and still allow you to round up and say that you have a 100% score)? Ending up a few bits short would just mean that a factor of that size is missing, but you know that can't be the case, as all such small factors are either known or impossible (the factor must be larger than 6p in this particular case, which is 16 bits). So in that case you can conclude that the error is due to approximation, and in this respect ANY inaccurate solution is good, including the strlen of the string, as you already tried.

James Heinrich 2019-07-24 18:29

[QUOTE=LaurV;522224]You have the "bit size" for every factor already
...And why, therefore, can you decide in this case that what you have is 100%?[/QUOTE]I only have the exact bit size for the small factors (<1kbit), hence this discussion.
And you'll note that the almost-but-not-quite 99.9999+ displays are still highlighted in green, showing that they're fully factored. I could of course just hardcode it so that if the known percentage is >99.99% a fixed "100.0000000000%" is shown, but for now I'm trying to do it "right" rather than "cheating".

chris2be8 2019-07-25 15:40

All factors are of the form 2kp+1, where p is the exponent of the Mersenne number. So if the sum of the bit lengths of the known factors is within bitlength(2p+1) of p, then it must be fully factored. Converting the number of decimal digits of any large cofactor to an estimated bit length and adding the known bit lengths should get you close enough. If not, taking the base-2 log of the high 4 digits of the large cofactor and converting the number of remaining digits to bits should get closer.

Chris
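
This rule can be sketched in Python (a hypothetical illustration of mine, not code from the thread; the function name and the bit sizes, which I computed from the factors listed earlier, are my own):

```python
import math

# Sketch of chris2be8's rule: every factor of M_p = 2**p - 1 has the form
# 2kp+1, so the smallest factor that could possibly be missing is 2p+1.
# If the known factor bits plus the estimated cofactor bits land within
# bitlength(2p+1) of p, nothing fits in the gap: fully factored.

def looks_fully_factored(p, factor_bit_sizes, cofactor_decimal_digits):
    min_missing_bits = (2 * p + 1).bit_length()
    est_cofactor_bits = cofactor_decimal_digits * math.log2(10)  # coarse estimate
    gap = p - (sum(factor_bit_sizes) + est_cofactor_bits)
    return abs(gap) < min_missing_bits

# M9901: bit sizes of the six known factors, plus its 2844-digit cofactor.
sizes = [37, 62, 73, 80, 95, 111]
print(looks_fully_factored(9901, sizes, 2844))        # True
print(looks_fully_factored(9901, sizes[:-1], 2844))   # False: 111 bits unaccounted for
```

The digit-count estimate can be off by a few bits in either direction, but that is far inside the ~15-bit tolerance that bitlength(2p+1) allows for p = 9901.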

hansl 2019-07-25 16:05

The size of the remaining cofactor should only be included in the percentage tally if it is a certified or probable prime, in which case the result is 100% by definition.

