mersenneforum.org  

mersenneforum.org > Extra Stuff > Miscellaneous Math
Old 2003-10-02, 16:27   #12
cheesehead
 
 
"Richard B. Woods"
Aug 2002
Wisconsin USA

2^2×3×641 Posts
Default

Quote:
Originally posted by only_human
I believe there is a way to leverage some of the effort of computing a LL sequence for two different numbers under test:


(snip)

Testing both 2^5-1 and 2^7-1, the LL sequence is:
S(0) = 4
S(1) = 4^2 - 2 mod (31*127) = 14
S(2) = 14^2 - 2 mod (31*127) = 194
S(3) = 194^2 - 2 mod (31*127) = 2201
at this step,
2201 mod 31 = 0, therefore 2^5-1 is prime
continue the sequence using
2201 mod 127 = 42
S(4) = 42^2 - 2 mod 127 = 111
S(5) = 111^2 - 2 mod 127 = 0, therefore 2^7-1 is prime
But, once again, because in practice we are no longer working in the realm of small numbers, we need to consider whether the execution speeds and storage requirements of a proposed improvement are really any better than what we're already doing.

Let m and n be the exponents of the two Mersenne numbers we're doing in parallel. For simplified analysis, assume that m is less than, but close to, n (m < n << m log m), so that the execution time and space for 2^n-1 are only slightly more than those for 2^m-1.

As you've already pointed out, the modulo (2^m-1)*(2^n-1) will not be as efficiently calculated as a modulo (2^k-1) would be, unless someone comes up with an algorithmic improvement.

Even if the algorithmic efficiency were the same: as you also pointed out, each modulo (2^m-1)*(2^n-1) calculation would involve (m+n)-bit numbers, which are larger than (2m)-bit numbers. Doubling the size of the numbers would already more than double the computation time (there's a log m factor), and this is even slightly worse.

So each step of modulo (2^m-1)*(2^n-1) arithmetic takes more time (and more space) than the time (and space) required for the combination of separate mod 2^m-1 and mod 2^n-1 steps. The sharing applies only up to the S(m-2) step (where 2^m-1 gets its check), but that is almost all the way to S(n-2) anyway.

There's just no savings to be had.
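For what it's worth, the combined-modulus scheme only_human describes is easy to sanity-check in a few lines of throwaway Python (native big integers, nothing like Prime95's FFT arithmetic; the function name is just illustrative):

```python
def lucas_lehmer_combined(p, q):
    """Run one L-L sequence mod (2^p-1)*(2^q-1), p < q, and test both."""
    mp, mq = (1 << p) - 1, (1 << q) - 1
    s = 4
    for _ in range(p - 2):       # iterate S(1) .. S(p-2) mod the product
        s = (s * s - 2) % (mp * mq)
    p_prime = (s % mp == 0)      # 2^p-1 gets its check at S(p-2)
    s %= mq                      # drop to mod 2^q-1 and keep going
    for _ in range(q - p):       # ... up to S(q-2)
        s = (s * s - 2) % mq
    q_prime = (s == 0)
    return p_prime, q_prime

print(lucas_lehmer_combined(5, 7))    # (True, True): M5 and M7 are prime
print(lucas_lehmer_combined(5, 11))   # (True, False): M11 = 23*89
```

The mathematics checks out; the cost argument above is about why the (m+n)-bit product modulus loses to two separate runs, not about correctness.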
Old 2003-10-07, 20:32   #13
only_human
 
 
"Gang aft agley"
Sep 2002

2·1,877 Posts
Default

Other ideas I have been dwelling on in L-L testing seem equally fruitless; still one can hope.

Since the work involved in L-L testing seems to have no shortcuts, the only other options are to extract more meaning from the work already done or to abort the sequence sooner if possible.

I believe that if any value repeats in an L-L sequence, the generator is in a cycle, and I don't think it is necessary to continue any further.

Maybe there could be a chance to get somewhere by trying to factor the number using the L-L sequence values as data for Pollard's rho method.

Looking at Knuth's 4.5.4, I see that a pseudorandom number generator of the form f(x) = (x^2 + c) mod N is specifically deprecated when c = -2 (as it is in the L-L sequence) because of recurrence relationships that exist. All the same, I wonder if that could be worked around, maybe by taking the data bits from the FFT before they are unscrambled into the correct order. Since the rho method depends on the GCD algorithm, it seems like an expensive operation to throw into a finely tuned routine.
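For concreteness, here is a bare-bones Pollard rho (Floyd's cycle-finding variant) in Python; following Knuth's warning it uses c = 1 rather than the L-L constant c = -2. This is only an illustrative sketch, not a suggestion for how it would be grafted onto an FFT-based L-L loop:

```python
from math import gcd

def pollard_rho(n, c=1, x0=2):
    """Toy Pollard rho with f(x) = x^2 + c mod n (avoid c = 0 and c = -2)."""
    x = y = x0
    d = 1
    while d == 1:
        x = (x * x + c) % n          # tortoise takes one step
        y = (y * y + c) % n          # hare takes two
        y = (y * y + c) % n
        d = gcd(abs(x - y), n)
    return d if d != n else None     # None means retry with another c or x0

print(pollard_rho(2047))             # finds a nontrivial factor of 2047 = 23 * 89
```

The while-loop runs until the tortoise and hare collide modulo a prime factor, which is exactly the cycle structure that makes c = -2 a poor choice: the L-L map's algebraic recurrence produces unusually short, structured cycles.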

I wish my math ability was at least at the level of a math crank.
At least I am muttering to myself like one at the moment.

Old 2003-10-16, 22:17   #14
only_human
 
 
"Gang aft agley"
Sep 2002

2·1,877 Posts
Default

Quote:
Originally posted by only_human
Maybe there could be a chance to get somewhere by trying to factor the number using the L-L sequence values as data for Pollard's rho method.
I just came across another thread that ends in an analysis of this possibility by apocalypse: Factors in residue of LL test
I include it here because it answers my implied question and finishes my speculation in that direction nicely. I should have read the forum more!
Old 2005-09-02, 22:55   #15
jasong
 
 
"Jason Goatcher"
Mar 2005

5·701 Posts
Default

Quote:
Originally Posted by cheesehead
The estimated total number of particles (electrons, protons, neutrons, muons, ...) in the known universe is less than 10^100. 10^100 is less than 2^400. If we could use every particle in the universe to store one binary bit, the largest number we could store would be less than 2^(2^400).
I'm not saying that this matters, but while 2^400 is a larger number than the number of atoms in the universe, it would only take about 400 bits to express it (401 is the actual number of bits, I think).

Maybe I'll get flamed for this, but has anyone ever been ACTUALLY KNOWN to do the arithmetic on the actual numbers, using the hard drive? There is a small possibility that we're dealing with an "Everybody knows the world is flat, so why bother?" type of thing.
Old 2005-09-02, 23:19   #16
akruppa
 
 
"Nancy"
Aug 2002
Alexandria

2,467 Posts
Default

Quote:
Originally Posted by jasong
There is a small possibility that we're dealing with an "Everybody knows the world is flat, so why bother?" type of thing.
Most certainly not. The reasons why this cannot work are well understood.

Exercise: Compute the first few terms of the sequence x_0 = 4, x_{n+1} = x_n^2 - 2, i.e. 4, 14, 194, ...

(a) Look at what happens to the number of digits in successive terms as you go along.

(b) Estimate the number of digits for the x_29, x_59 and x_87 terms, and how much the hard disk space required would cost. Estimate how long the final division would take, assuming it can process one million digits in the dividend per second.

(c) Try to estimate the number of digits in the x_25964949 term.

Note: the x_29, x_59 and x_87 terms are what are needed to discover the primes M31, M61 and M89, respectively.
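A sketch of the exercise in Python (using the closed form x_n = (2+√3)^(2^n) + (2−√3)^(2^n), the digit count can be estimated without ever materializing the big terms):

```python
from math import log10, sqrt

# (a) Compute early terms exactly: the digit count roughly doubles per step.
x = 4
for n in range(1, 8):
    x = x * x - 2
    print(f"x_{n} has {len(str(x))} digits")   # 2, 3, 5, 10, 19, 37, 74

# (b) Estimate without computing: digits(x_n) ~ 2**n * log10(2 + sqrt(3)).
for n in (29, 59, 87):
    print(f"x_{n}: about {2**n * log10(2 + sqrt(3)):.3g} digits")

# (c) For x_25964949 even the *digit count* is astronomical, so work with
# log10(digits) = n*log10(2) + log10(log10(2 + sqrt(3))) instead.
c = log10(log10(2 + sqrt(3)))
print(f"x_25964949 has about 10^{int(25964949 * log10(2) + c)} digits")
```

Already x_29 (the check point for M31) runs to about 3×10^8 digits, and x_87 to roughly 10^26 digits, far beyond any disk.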

Alex
Old 2005-09-02, 23:41   #17
Phil MjX
 
 
Sep 2004

185₁₀ Posts
Default

Quote:
Originally Posted by cheesehead
If we could use every particle in the universe to store one binary bit, the largest number we could store would be less than 2^(2^400).


So S(1000) > 2^(2^999 + 1), which is already way, way above the "universal" storage limit.

And S(1000) isn't very far along the path toward S(several-million).

It's not a matter of using more hard disk space. :)
Jasong,

I think you read 2^400, where you should have seen 2^(2^400), taking 2^400 bits on your "hard drive", not 400...

Philippe.
Old 2005-09-02, 23:42   #18
trilliwig
 
 
Oct 2004
tropical Massachusetts

3·23 Posts
Default

Quote:
Originally Posted by jasong
I'm not saying that this matters, but while 2^400 is a larger number then the number of atoms in the universe, it would only take about 400 bits to express it(401 is the actual number of bits I think).
Heh, yes. And oh by the way, it has no relevance to the problem at hand. In other words, it doesn't matter.

You're saying the number of bits in the number of bits of S(n) is a manageable number. Which has no bearing when it's S(n) itself that you're proposing we try to handle.

The number of digits in the number of digits of one googolplex is 101. So of course one googolplex is a nice manageable number, right??
Old 2005-09-03, 01:59   #19
Orgasmic Troll
Cranksta Rap Ayatollah
 
 
Jul 2003

641 Posts
Default

Quote:
Originally Posted by trilliwig
Heh, yes. And oh by the way, it has no relevance to the problem at hand. In other words, it doesn't matter.

You're saying the number of bits in the number of bits of S(n) is a manageable number. Which has no bearing when it's S(n) itself that you're proposing we try to handle.

The number of digits in the number of digits of one googolplex is 101. So of course one googolplex is a nice manageable number, right??
at least it's easy to factor :)
Old 2005-09-03, 16:22   #20
cheesehead
 
 
"Richard B. Woods"
Aug 2002
Wisconsin USA

1E0C₁₆ Posts
Default

Quote:
Originally Posted by trilliwig
You're saying the number of bits in the number of bits of S(n) is a manageable number. Which has no bearing when it's S(n) itself that you're proposing we try to handle.
Actually, it does have a bearing. A 400-bit number is quite easy to handle. (Prime95 works with 30-million-bit numbers all the time.)

The real problem, as already pointed out by Philippe, is that jasong makes the mistake of thinking that 2^(2^400) has only 400 bits. 2^(2^400) actually has 2^400 bits. Since 2^400 > 10^100, the number 2^(2^400) actually has over a googol bits.
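The distinction is easy to verify with Python's unbounded integers (only the bit count is ever computed here, of course — the value itself would need 2^400 + 1 bits):

```python
# 2^(2^400) needs 2^400 + 1 bits; only its *bit count* fits in ~400 bits.
bits_of_value = 2**400 + 1                 # bit length of 2^(2^400) itself
print((2**400).bit_length())               # 401: the exponent is tiny to store
print(len(str(bits_of_value)))             # 121: the bit count has 121 digits
print(bits_of_value > 10**100)             # True: over a googol bits needed
```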

- - -

Jasong,

Storing the explicit value of a number the size of 2^(2^400) requires over (well over) ten million billion trillion trillion trillion trillion trillion trillion trillion bits.

Last fiddled with by cheesehead on 2005-09-03 at 16:33
Old 2005-09-04, 00:26   #21
trilliwig
 
 
Oct 2004
tropical Massachusetts

3×23 Posts
Default

Quote:
Originally Posted by cheesehead
Actually, it does have a bearing. A 400-bit number is quite easy to handle. (Prime95 works with 30-million-bit numbers all the time.)
Erm, I thought I made it clear that it has no bearing, because 400 is not the number of bits, but the number of bits in the number of bits. At least, that's what I spent 3 paragraphs attempting to make clear.

It's double-plus ungood using log log N in a place where log N is what is required.

Last fiddled with by trilliwig on 2005-09-04 at 00:31
Old 2005-09-04, 01:48   #22
cheesehead
 
 
"Richard B. Woods"
Aug 2002
Wisconsin USA

2^2×3×641 Posts
Default

Quote:
Originally Posted by trilliwig
Erm, I thought I made it clear that it has no bearing, because 400 is not the number of bits, but the number of bits in the number of bits.
Yes, what you wrote is okay. My posted response got disorganized; some of its sentences should have gone into the portion addressed to Jasong rather than the part addressed to you. My apologies.

Last fiddled with by cheesehead on 2005-09-04 at 01:49
