mersenneforum.org  

Old 2006-07-20, 09:27   #1
Steve Norrie
 
Jul 2006

2 Posts
Expected Running Times

Hi, I was just wondering if anyone could point me in the direction of something detailing the derivation of the expected run times of the Lucas-Lehmer test (both with and without FFT speed-ups), and hopefully the same for the Miller-Rabin probabilistic test. Thanks for your time.
Old 2006-07-20, 11:47   #2
R.D. Silverman
 
 
Nov 2003

1110100100100₂ Posts

Quote:
Originally Posted by Steve Norrie
I will offer some help, but please show what you have done so far.

I will offer a basic hint:

(1) Count the number of modular multiplications
(2) Determine the time needed to do each multiplication
(3) Multiply these two results
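The three steps above can be turned into a back-of-envelope sketch for the Lucas-Lehmer test on M_p = 2^p - 1. Everything here is illustrative: the machine speed is a made-up fudge factor, and the schoolbook O(p^2) cost per squaring is the naive estimate, not what real implementations use.

```python
# Rough sketch of the three-step recipe above, assuming schoolbook
# (pencil-and-paper) multiplication. Illustrative only; the
# bit_ops_per_second figure is an invented machine speed.

def ll_runtime_estimate(p, bit_ops_per_second=1e9):
    num_mults = p - 2                          # step 1: one squaring mod M_p per LL iteration
    cost_per_mult = p * p                      # step 2: schoolbook squaring of a p-bit number
    total_bit_ops = num_mults * cost_per_mult  # step 3: multiply the two counts
    return total_bit_ops / bit_ops_per_second  # seconds, very roughly
```

For p around 10^7 this schoolbook estimate comes out in the tens of thousands of years, which is exactly why real LL implementations use FFT-based multiplication.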

BTW, the expression "expected run time" is a misnomer here. The run times are not random; they are deterministic. "Expected run time" generally applies to randomized algorithms.
Old 2006-08-15, 16:45   #3
Steve Norrie
 
Jul 2006

2 Posts

Hi, and thanks for your offer of help. Sorry for the length of time it's taken for me to actually get back to this, but I've been swamped with many other things (and a nice holiday as well).

Anyway, what I know of running times is fairly limited, so I'll go through what I *think* I should be doing, if nobody laughs at any glaring errors I will certainly come up with.

For Miller-Rabin, we first raise a base to the power t (writing n-1 = 2^s*t with t odd), and then square the result s-1 times mod n (repeating the whole thing as necessary for different bases, but I'm just interested in how long it takes for one particular base). So that's about (s-1) + log(t), or roughly log(n) multiplications (?)

For Lucas-Lehmer, I'm counting p-2 squarings mod M_p. And as M_p = 2^p-1, p-2 is around log(M_p).

As for how long it takes to do each modular multiplication, that's where I fall down. If memory serves (and I severely doubt it does), calculating a^2 mod m takes O(log^2 m). And there is a very good chance I'm just making that up. But if that is indeed the case, M-R takes O(log^3 n), and L-L would take around the same.

But my intuition is that that looks very wrong. As I say, I'm fairly new to this running-time lark and I don't have many people to ask. Any further pointers would be of great assistance.

(And FFT goes a bit beyond what I want to look at, but I'd still like to be able to say 'it makes things x times quicker' or whatever, so any brief information in that regard would also be fantastic.)
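As a sanity check on the count described above, here is a small sketch (not from the thread) that runs one Miller-Rabin round for a single base while counting the modular multiplications explicitly. The count comes out close to 2*log2(n), i.e. O(log n); the factor of 2 is just the square-and-multiply constant.

```python
import math

def mr_mult_count(n, a=2):
    """One Miller-Rabin round for base a, counting modular multiplications."""
    s, t = 0, n - 1
    while t % 2 == 0:
        s += 1
        t //= 2                  # n - 1 = 2^s * t with t odd
    count = 0
    # a^t mod n by square-and-multiply: ~log2(t) squarings plus multiplies
    x, base, e = 1, a % n, t
    while e:
        if e & 1:
            x = x * base % n
            count += 1
        base = base * base % n
        count += 1
        e >>= 1
    for _ in range(s - 1):       # then up to s - 1 further squarings
        x = x * x % n
        count += 1
    return count

n = (1 << 89) - 1                # M_89, a small Mersenne prime
print(mr_mult_count(n), 2 * math.log2(n))   # count is ~2*log2(n)
```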
Old 2006-08-15, 18:36   #4
ewmayer
2ω=0
 
 
Sep 2002
República de California

19×613 Posts

Quote:
Originally Posted by R.D. Silverman View Post
BTW, the expression "expected run-times" is a misnomer. The run-times are not random. They are deterministic.
I would add one caveat to this - in terms of a computer implementation of an algorithm like an FFT-based multiply, one does in fact commonly use expressions like "expected run time," and these need not be misnomers; one simply needs to be clear about what one is talking about. This is because while the *operation count* is indeed deterministic, how the resulting runtime scales with opcount is often a fuzzier proposition - even if deterministic, it may depend on details of the hardware architecture and operating system which are difficult or impossible to characterize in an a priori fashion.

Case in point: FFT opcounts are well characterized - for a given input vector length N, the arithmetic opcount scales as O(N log N), and the precise scaling factor can be derived by examining the actual implementation in code. But even here things quickly get fuzzy: while it's straightforward to count the adds and multiplies, what about the attendant loads and stores? OK, we could treat each array-element read as causing a load and each write as a store, but that's not what usually happens in practice, especially once the code has been run through an optimizing compiler and mapped to a given architecture.

Now one has register spills and cache pages to service, cache misses which turn a simple load-from-cache into an expensive main-memory access, and the details of all this behavior will vary with FFT size - one may get a nice O(N log N) scaling as long as the data all fit into the L2 cache, then see a gradual (or sudden) crossover to a different scaling factor (or even something that looks more like O(N^(1+epsilon))) once the memory footprint exceeds the cache size. Then one may have effects such as granularity in the available FFT lengths - does one have just powers of two, or does one break those into smaller subintervals via odd leading FFT radices (in the latter case one has the fact that odd radices are less efficient in terms of relative opcount)? - plus the gradual reduction in allowable input size with increasing FFT length, et cetera.

It thus makes perfect sense to speak of "expected runtime", either by way of "this is what we expect based strictly on arithmetic opcount, and this is the trend we actually see," or perhaps "based on our timing tests of FFT lengths up to 128 Mdoubles, this is our per-iteration runtime estimate for a 1 Gdouble FFT, which we can compare to what we observe once we have a machine with enough memory to try it on."
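The extrapolation described in the last paragraph can be sketched in a few lines. The measured timing below is invented purely for illustration; the point is the N*log(N) scaling assumption, not the numbers, and a real machine will typically come in slower once the data spill out of cache - which is exactly the fuzziness described above.

```python
import math

def predict_fft_time(n_measured, t_measured, n_target):
    """Extrapolate per-iteration FFT runtime assuming pure O(N log N) scaling."""
    c = t_measured / (n_measured * math.log2(n_measured))  # fitted constant
    return c * n_target * math.log2(n_target)

# suppose a 128 Mdouble FFT took 1.0 s per iteration (made-up number);
# predict the per-iteration time for a 1 Gdouble FFT:
t_1g = predict_fft_time(128 * 2**20, 1.0, 2**30)
```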
Old 2006-08-15, 19:44   #5
R.D. Silverman
 
 
Nov 2003

1110100100100₂ Posts

Quote:
Originally Posted by Steve Norrie View Post
You are essentially correct. Both M-R and L-L take O(log n) multiplications. Using naive multiplication (i.e. the pencil-and-paper algorithm), each multiplication takes O(log^2 n), for a total of O(log^3 n).

However, an FFT-based multiplication takes O(log n * loglog n * logloglog n) instead of O(log^2 n). Thus, it is faster by a factor of O(log n / (loglog n * logloglog n)).
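To get a feel for the size of that speedup factor, here's a quick sketch using base-2 logs; constants and lower-order terms are ignored, and the choice of log base only shifts the constant, so this gives an order of magnitude at best.

```python
import math

# Asymptotic per-multiplication speedup of FFT multiplication
# (~ log n * loglog n * logloglog n bit operations) over schoolbook
# (~ (log n)^2): speedup ~ log n / (loglog n * logloglog n).

def fft_speedup(bits):
    # `bits` plays the role of log2(n), the operand size in bits
    return bits / (math.log2(bits) * math.log2(math.log2(bits)))

# for a ~100-million-bit number, the factor is in the hundreds of thousands:
print(fft_speedup(10**8))
```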

Powered by vBulletin® Version 3.8.11
Copyright ©2000 - 2021, Jelsoft Enterprises Ltd.


Permission is granted to copy, distribute and/or modify this document under the terms of the GNU Free Documentation License, Version 1.2 or any later version published by the Free Software Foundation.
A copy of the license is included in the FAQ.