 mersenneforum.org Faster way to do LLT?
2003-10-02, 16:27   #12

"Richard B. Woods"
Aug 2002
Wisconsin USA

2^2×3×641 Posts Quote:
 Originally posted by only_human I believe there is a way to leverage some of the effort of computing an LL sequence for two different numbers under test: (snip) Testing both 2^5-1 and 2^7-1, the LL sequence is:
S(0) = 4
S(1) = 4^2 - 2 mod (31*127) = 14
S(2) = 14^2 - 2 mod (31*127) = 194
S(3) = 194^2 - 2 mod (31*127) = 2201
At this step, 2201 mod 31 = 0, therefore 2^5-1 is prime.
Continue the sequence using 2201 mod 127 = 42:
S(4) = 42^2 - 2 mod 127 = 111
S(5) = 111^2 - 2 mod 127 = 0, therefore 2^7-1 is prime.
But, once again, because in practice we are no longer working in the realm of small numbers, we need to consider whether the execution speeds and storage requirements of a proposed improvement are really any better than what we're already doing.

Let m and n be the exponents of the two Mersenne numbers we're doing in parallel. For simplified analysis, assume that m is less than, but close to, n (m < n << m log m), so that the execution time and space for 2^n-1 are only slightly more than those for 2^m-1.

As you've already pointed out, the modulo (2^m-1)*(2^n-1) will not be as efficiently calculated as a modulo (2^k-1) would be, unless someone comes up with an algorithmic improvement.

Even if the algorithmic efficiency were the same: as you also pointed out, each modulo (2^m-1)*(2^n-1) calculation would involve (m+n)-bit numbers, which (since n > m) are larger than (2m)-bit numbers. Merely doubling the size of the numbers would already more than double the computation time (there's a log m factor), and working with (m+n)-bit numbers is slightly worse still.

So each step of modulo (2^m-1)*(2^n-1) arithmetic takes more time (and more space) than the time (and space) required for the combination of separate mod 2^m-1 and mod 2^n-1 steps. This applies only up to the S(m-1) step, but that is almost all the way to S(n-1) anyway.
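To make the quoted example concrete, here is a toy sketch of the combined-modulus idea (illustrative Python only; a real test would use FFT-based multiplication, and the point of the analysis above is precisely that this approach doesn't pay off at scale):

```python
# One Lucas-Lehmer sequence modulo (2^m - 1)*(2^n - 1); reduce mod
# each factor when its own test is due, per only_human's example.
m, n = 5, 7
Mm, Mn = 2**m - 1, 2**n - 1     # 31 and 127

s = 4
for _ in range(m - 2):          # shared steps S(1)..S(3)
    s = (s * s - 2) % (Mm * Mn)
print(s % Mm == 0)              # True: 2^5 - 1 is prime (S(3) = 2201)

s %= Mn                         # 2201 mod 127 = 42; continue mod 127 alone
for _ in range(n - m):          # steps S(4), S(5)
    s = (s * s - 2) % Mn
print(s == 0)                   # True: 2^7 - 1 is prime
```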

There's just no savings to be had.   2003-10-07, 20:32   #13

only_human

"Gang aft agley"
Sep 2002

2·1,877 Posts Other ideas I have been dwelling on in L-L testing seem equally fruitless; still, one can hope. Since the work involved in L-L testing seems not to have any shortcuts, the only other options are to extract more meaning from the work done or to abort the sequence sooner if possible. I believe that if any value repeats in an L-L sequence, the generator is in a cycle, and I don't think it is necessary to continue any further.

Maybe there could be a chance to get somewhere by trying to factor the number using the L-L sequence values as data for Pollard's rho method. Looking at Knuth's 4.5.4, I see that a pseudorandom number generator of f(x) = (x^2 + c) mod N is specifically deprecated when c = -2 (as it is in the L-L sequence) because of recurrence relationships that exist. All the same, I wonder if that could be worked around. Maybe by taking the data bits from the FFT before they are unscrambled into the correct order. Since the rho method depends on the GCD algorithm, it seems like an expensive operation to throw into a finely tuned routine.

I wish my math ability was at least at the level of a math crank. At least I am muttering to myself like one at the moment.    2003-10-16, 22:17   #14
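The rho method only_human mentions can be sketched in a few lines (illustrative Python, not code from the thread; the comment notes why Knuth deprecates c = -2):

```python
import math

def pollard_rho(N, c=1, x0=2):
    """Floyd-cycle Pollard rho with iteration x -> x^2 + c mod N.
    Knuth 4.5.4 deprecates c = -2 (the L-L iteration) because the
    closed form x_n = a^(2^n) + a^(-2^n) gives the orbit too much
    structure; c = 1 is the conventional default."""
    x = y = x0
    d = 1
    while d == 1:
        x = (x * x + c) % N          # tortoise: one step
        y = (y * y + c) % N          # hare: two steps
        y = (y * y + c) % N
        d = math.gcd(abs(x - y), N)
    return d if d != N else None     # None: cycle closed, no factor found

print(pollard_rho(8051))             # 97 (8051 = 83 * 97)
```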
only_human

"Gang aft agley"
Sep 2002

2·1,877 Posts Quote:
 Originally posted by only_human Maybe that there could be a chance to get somewhere by trying to factor the number using the L-L sequence values as data for Pollard's rho method.
I just came across another thread that ends in an analysis of the possibility by apocalypse: Factors in residue of LL test
I include it here because it answers my implied question and finishes my speculation in that direction nicely. I should have read the forum more!   2005-09-02, 22:55   #15
jasong

"Jason Goatcher"
Mar 2005

5·701 Posts Quote:
 Originally Posted by cheesehead The estimated total number of particles (electrons, protons, neutrons, muons, ...) in the known universe is less than 10^100. 10^100 is less than 2^400. If we could use every particle in the universe to store one binary bit, the largest number we could store would be less than 2^(2^400).
I'm not saying that this matters, but while 2^400 is a larger number than the number of atoms in the universe, it would only take about 400 bits to express it (401 is the actual number of bits, I think).

Maybe I'll get flamed for this, but has anyone ever been ACTUALLY KNOWN to do the arithmetic on the actual numbers, using the hard drive? There is a small possibility that we're dealing with an "Everybody knows the world is flat, so why bother?" type of thing.   2005-09-02, 23:19   #16
akruppa

"Nancy"
Aug 2002
Alexandria

2,467 Posts Quote:
 Originally Posted by jasong There is a small possibility that we're dealing with an "Everybody knows the world is flat, so why bother?" type of thing.
Most certainly not. The reasons why this cannot work are well understood.

Exercise: Compute the first few terms of the sequence x_0 = 4, x_{n+1} = x_n^2 - 2, i.e. 4, 14, 194, ...

(a) Look at what happens to the number of digits in successive terms as you go along.

(b) Estimate the number of digits for the x_29, x_59 and x_87 terms, and how much the hard disk space required would cost. Estimate how long the final division would take, assuming it can process one million digits in the dividend per second.

(c) Try to estimate the number of digits in the x_25964949 term.

Note: the x_29, x_59 and x_87 terms are what are needed to discover the primes M31, M61 and M89, respectively.
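For part (b), the digit counts can be estimated without computing any term: the recurrence has the closed form x_n = a^(2^n) + a^(-2^n) with a = 2 + sqrt(3), so x_n has about 2^n · log10(2 + sqrt(3)) decimal digits. A sketch (my illustration, not part of the exercise):

```python
import math

# x_n = a**(2**n) + a**(-2**n) with a = 2 + sqrt(3), so x_n has
# roughly 2**n * log10(2 + sqrt(3)) decimal digits.
LOG10_A = math.log10(2 + math.sqrt(3))   # about 0.572

def digits_estimate(n):
    """Approximate decimal digit count of x_n."""
    return int(2**n * LOG10_A) + 1

for n in (29, 59, 87):
    print(n, digits_estimate(n))   # roughly 3e8, 3e17, 9e25 digits
```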

Alex   2005-09-02, 23:41   #17
Phil MjX

Sep 2004

185_10 Posts Quote:
 Originally Posted by cheesehead If we could use every particle in the universe to store one binary bit, the largest number we could store would be less than 2^(2^400). So S(1000) > 2^(2^999 + 1), which is already way, way above the "universal" storage limit. And S(1000) isn't very far along the path toward S(several-million). It's not a matter of using more hard disk space. :)
Jasong,

I think you read 2^400, where you should have seen 2^(2^400), taking 2^400 bits on your "hard drive", not 400...

Philippe.   2005-09-02, 23:42   #18
trilliwig

Oct 2004
tropical Massachusetts

3·23 Posts Quote:
 Originally Posted by jasong I'm not saying that this matters, but while 2^400 is a larger number than the number of atoms in the universe, it would only take about 400 bits to express it (401 is the actual number of bits, I think).
Heh, yes. And oh by the way, it has no relevance to the problem at hand. In other words, it doesn't matter.

You're saying the number of bits in the number of bits of S(n) is a manageable number. Which has no bearing when it's S(n) itself that you're proposing we try to handle.

The number of digits in the number of digits of one googolplex is 101. So of course one googolplex is a nice manageable number, right??   2005-09-03, 01:59   #19
Orgasmic Troll
Cranksta Rap Ayatollah

Jul 2003

641 Posts Quote:
 Originally Posted by trilliwig Heh, yes. And oh by the way, it has no relevance to the problem at hand. In other words, it doesn't matter. You're saying the number of bits in the number of bits of S(n) is a manageable number. Which has no bearing when it's S(n) itself that you're proposing we try to handle. The number of digits in the number of digits of one googolplex is 101. So of course one googolplex is a nice manageable number, right??
at least it's easy to factor :)   2005-09-03, 16:22   #20

"Richard B. Woods"
Aug 2002
Wisconsin USA

1E0C_16 Posts Quote:
 Originally Posted by trilliwig You're saying the number of bits in the number of bits of S(n) is a manageable number. Which has no bearing when it's S(n) itself that you're proposing we try to handle.
Actually, it does have a bearing. A 400-bit number is quite easy to handle. (Prime95 works with 30-million-bit numbers all the time.)

The real problem, as already pointed out by Philippe, is that jasong makes the mistake of thinking that 2^(2^400) has only 400 bits. 2^(2^400) actually has 2^400 bits. Since 2^400 > 10^100, the number 2^(2^400) actually has over a googol bits.
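A quick sanity check of the bit counts (plain Python; the k = 400 case can only be reasoned about, since the number itself is unstorable):

```python
# The bit length of 2**(2**k) is 2**k + 1: check it for a small k,
# then compare the k = 400 storage cost against a googol.
k = 10
assert (2 ** (2 ** k)).bit_length() == 2 ** k + 1

bits_needed = 2 ** 400 + 1        # bits in 2**(2**400); never materialize it
print(bits_needed > 10 ** 100)    # True: over a googol bits
```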

- - -

Jasong,

Storing the explicit value of a number the size of 2^(2^400) requires over (well over) ten million billion trillion trillion trillion trillion trillion trillion trillion bits.

Last fiddled with by cheesehead on 2005-09-03 at 16:33   2005-09-04, 00:26   #21
trilliwig

Oct 2004
tropical Massachusetts

3×23 Posts Quote:
 Originally Posted by cheesehead Actually, it does have a bearing. A 400-bit number is quite easy to handle. (Prime95 works with 30-million-bit numbers all the time.)
Erm, I thought I made it clear that it has no bearing, because 400 is not the number of bits, but the number of bits in the number of bits. At least, that's what I spent 3 paragraphs attempting to make clear.

It's double-plus ungood using log log N in a place where log N is what is required.
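The log N vs. log log N gap is easy to exhibit with the googolplex example from post #18 (illustrative Python):

```python
# N = googolplex = 10**(10**100). N itself cannot be stored, but its
# digit count (the log N scale) is 10**100 + 1, and the digit count
# of *that* number (the log log N scale) is a mere 101.
digit_count_of_N = 10**100 + 1
print(len(str(digit_count_of_N)))   # 101
```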

Last fiddled with by trilliwig on 2005-09-04 at 00:31   2005-09-04, 01:48   #22

"Richard B. Woods"
Aug 2002
Wisconsin USA

2^2×3×641 Posts Quote:
 Originally Posted by trilliwig Erm, I thought I made it clear that it has no bearing, because 400 is not the number of bits, but the number of bits in the number of bits.
Yes, what you wrote is okay. My posted response got disorganized; it should have put some sentences in the portion addressed to Jasong instead of being addressed to you. My apologies.

Last fiddled with by cheesehead on 2005-09-04 at 01:49
