Complexity of LLT
Hello,
I know this has already been said in this forum (by Mr Silverman, I think), but I cannot find it again. What is the overall complexity of the LLT as implemented by George, with the IBDWT method? (The book of Crandall and Pomerance does not say a word about this.) I'm confused by the following two statements: [URL="http://en.wikipedia.org/wiki/Lucas%E2%80%93Lehmer_test_for_Mersenne_numbers"]Wikipedia[/URL]: "The most efficient known multiplication method, the Schönhage-Strassen algorithm based on the Fast Fourier transform, requires O(p log p log log p) time to square a p-bit number, reducing the complexity to O(p^2 log p log log p) or Õ(p^2)." [URL="http://www.mersennewiki.org/index.php/Lucas-Lehmer_Test#Complexity"]Mersenne Wiki[/URL]: "When used with the Fast Fourier transform, the test has the complexity of O(n^2 log n), where n is the length of the number." Thanks, Tony
[QUOTE=T.Rex;107119]Hello,
I know this has already been said in this forum (by Mr Silverman, I think), but I cannot find it again. What is the overall complexity of the LLT as implemented by George, with the IBDWT method? (The book of Crandall and Pomerance does not say a word about this.) I'm confused by the following two statements: [URL="http://en.wikipedia.org/wiki/Lucas%E2%80%93Lehmer_test_for_Mersenne_numbers"]Wikipedia[/URL]: "The most efficient known multiplication method, the Schönhage-Strassen algorithm based on the Fast Fourier transform, requires O(p log p log log p) time to square a p-bit number, reducing the complexity to O(p^2 log p log log p) or Õ(p^2)." [URL="http://www.mersennewiki.org/index.php/Lucas-Lehmer_Test#Complexity"]Mersenne Wiki[/URL]: "When used with the Fast Fourier transform, the test has the complexity of O(n^2 log n), where n is the length of the number." Thanks, Tony[/QUOTE] From analyzing the benchmark table, my money's on n^2 log(n). Also, I don't believe the DWT represents an improvement in complexity over a typical FFT; it only reduces the proportionality constant. Drew
FFT multiplication of n-bit numbers is O(n*log n*log log n).
The log log n term grows very slowly, and realistic register and operand sizes make it disappear in practice in most implementations, so the time is often, if inaccurately, simplified to O(n*log n). A PRP or fast proof test gets another factor of n because it performs O(n) multiplications. Optimizations for special forms, such as the IBDWT, give a constant factor in implementations, but the asymptotic time is unchanged.
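The O(n) multiplication count Jens mentions is easy to see in a toy implementation. Here is a minimal Lucas-Lehmer sketch in Python (the function name is made up here; note that Python's built-in bignums use Karatsuba-style multiplication rather than an FFT, so only the squaring count, not the per-squaring cost, matches the analysis above):

```python
def lucas_lehmer(p):
    """Return True if M_p = 2**p - 1 is prime, for an odd prime p."""
    m = (1 << p) - 1          # the Mersenne number 2^p - 1
    s = 4
    for _ in range(p - 2):    # p - 2 squarings: the O(n) multiplication count
        s = (s * s - 2) % m
    return s == 0

# Known Mersenne prime exponents 3, 5, 7, 13 pass; 11 gives composite M_11.
print([p for p in (3, 5, 7, 11, 13) if lucas_lehmer(p)])  # → [3, 5, 7, 13]
```

Each of the p - 2 iterations is dominated by one squaring of a number of roughly p bits, which is where the extra factor of n comes from.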
[QUOTE=Jens K Andersen;107184]FFT multiplication of n-bit numbers is O(n*log n*log log n).
The log log n term grows very slowly, and realistic register and operand sizes make it disappear in practice in most implementations, so the time is often, if inaccurately, simplified to O(n*log n). A PRP or fast proof test gets another factor of n because it performs O(n) multiplications. Optimizations for special forms, such as the IBDWT, give a constant factor in implementations, but the asymptotic time is unchanged.[/QUOTE]Thanks Jens. Let me add parentheses to clarify. So, for a Mersenne number 2^n-1, executing the full LLT (n-2 steps) is O( n * n * log(n) * log(log(n)) ) in theory, which simplifies to O( n^2 * log(n) ). By contrast, the simple multiplication we learn at school would lead to O(n^3) for a whole LLT. Correct? So it seems that Drew was correct and that our Mersenne Wiki is too vague in saying "the length of the number". Thanks, Tony
[QUOTE=T.Rex;107187]Thanks Jens.
Let me add parentheses to clarify. So, for a Mersenne number 2^n-1, executing the full LLT (n-2 steps) is O( n * n * log(n) * log(log(n)) ) in theory, which simplifies to O( n^2 * log(n) ). By contrast, the simple multiplication we learn at school would lead to O(n^3) for a whole LLT. Correct? So it seems that Drew was correct and that our Mersenne Wiki is too vague in saying "the length of the number". Thanks, Tony[/QUOTE] I suspect the log(log(n)) term will become more significant as n grows. Perhaps it's a fair simplification for now, but not always. Edit: I stand corrected. I just graphed the function, and it looks like the rate of growth of log(log(n)) decreases substantially as n grows.
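Drew's graphing observation is easy to reproduce numerically; a quick sketch in plain Python:

```python
import math

# log2(log2(n)) across nine orders of magnitude of n: the factor grows
# from roughly 3.3 at n = 10^3 to only about 5.3 at n = 10^12, which is
# why dropping the log log n term is a fair practical simplification.
for n in (10**3, 10**6, 10**9, 10**12):
    print(n, round(math.log2(math.log2(n)), 2))
```

Multiplying n by a factor of a billion raises the double logarithm by only about 2, so at practical exponent sizes it behaves almost like a constant.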
[QUOTE=T.Rex;107187]So, for a Mersenne number 2^n-1, executing the full LLT (n-2 steps) is O( n * n * log(n) * log(log(n)) ) in theory, which simplifies to O( n^2 * log(n) ). By contrast, the simple multiplication we learn at school would lead to O(n^3) for a whole LLT. Correct?
So it seems that Drew was correct and that our Mersenne Wiki is too vague in saying "the length of the number".[/QUOTE] I agree, except about the last part. "The length of the number" is a common term and sounds OK to me. The big-O complexity O(n^2*log n) is the same whether the length n is measured in bits, decimal digits, or another base. A base change gives a constant factor in the length, with no influence on big-O.
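Jens's point about base changes can be checked directly; a sketch using the known Mersenne prime exponent 43112609:

```python
import math

# The length of 2^p - 1 is p bits, or floor(p * log10(2)) + 1 decimal
# digits. The two measures differ only by the constant factor
# log2(10) ~ 3.32, so O(n^2 log n) reads the same in either base.
p = 43112609                      # exponent of a known Mersenne prime
digits = math.floor(p * math.log10(2)) + 1
print(digits, p / digits)         # bit/digit ratio is ~ log2(10) ~ 3.32
```

Since a constant factor c on n only shifts O((cn)^2 log(cn)) by constants, the asymptotic class is unchanged.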
[QUOTE=Jens K Andersen;107216]I agree, except about the last part. "the length of the number" is a common term and sounds OK to me. The big-O complexity O(n^2*log n) is the same whether the length n is measured in bits, decimal digits, or another base. A base change gives a constant factor in length with no influence on big-O.[/QUOTE]OK. That's clear now. Thanks to FFT, we have about O(n^2*log n) instead of O(n^3) with scholar method. Bits/digits and FFT/IBDWT are just a matter of a constant factor.
Thanks! Tony
[QUOTE=T.Rex;107227]OK. That's clear now. Thanks to FFT, we have about O(n^2*log n) instead of O(n^3) with scholar method. Bits/digits and FFT/IBDWT are just a matter of a constant factor.[/QUOTE]
Careful -- you're mixing up your speedups in the above. FFT gets you an asymptotic n --> log n reduction, assuming you can implement it in a manner that actually scales that way on modern cache-based microprocessors. DWT (a.k.a. "implicit mod") halves your runlength, i.e. gets you an asymptotic factor of ~2x. (At finite runlengths of practical interest, the speedup may be appreciably greater, approaching a factor of 3, due to the aforementioned cache effects.) Balanced-digit arithmetic gives a significant improvement in the slowness of the growth of roundoff error. I leave the (heuristic) asymptotic behavior here as a homework exercise. Pure-integer-based transforms are not subject to roundoff error, but still have their input wordsize restricted by the magnitude of the convolution outputs. In fact, for a careful implementation of balanced-digit arithmetic, this convolution-output-size effect tends to dwarf the roundoff-error-growth effect. In other words, as n gets larger, floating-point (counterintuitively) becomes relatively more attractive, not less. (This assumes that one starts with a floating-point precision sufficiently good to be competitive with pure integer at modest n -- that typically means double precision.)
[QUOTE=ewmayer;107274]Careful -- you're mixing up your speedups in the above ...[/QUOTE]Since I plan to discuss this in a paper, what should I say about the complexity of the LLT with basic multiplication compared to the LLT with FFT/IBDWT?
Is: [COLOR="Red"]"Thanks to FFT, we have about O(n^2*log n) instead of O(n^3) with scholar method[/COLOR]" correct? And what (very briefly) should I add about the impact of IBDWT? Since I have not mastered that point, I need help getting the sentence right. Thanks for helping! Tony
Hi, Tony:
[QUOTE=T.Rex;107275]Is: [COLOR="Red"]"Thanks to FFT, we have about O(n^2*log n) instead of O(n^3) with scholar method[/COLOR]" correct?[/QUOTE] Yes, although note that the more common English term for the quadratic multiply is the "grammar-school multiply" method. (What you might call the "École normale inférieure" method, to make a pun on the French school naming system. ;)) I further suggest replacing "about" with the more mathematically precise "asymptotically." [QUOTE] And what should I add (very short) about the impact of IBDWT?[/QUOTE] Some suitable rendition of what I wrote about the DWT above should do the trick, I think. Also note that IBDWT (at least as commonly cited in the literature) refers specifically to a weighting suitable for Mersenne-mod -- that's why, when speaking in general terms, I prefer the more generic "DWT" or "implicit mod". Cheers, -Ernst
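To make the quadratic cost of the grammar-school method concrete, here is a toy sketch that multiplies two base-10 digit lists while counting the digit-by-digit products (the function name is made up for illustration):

```python
def grammar_school_multiply(a, b):
    """Multiply two base-10 digit lists (least-significant digit first),
    counting the digit-by-digit products -- the O(n^2) work."""
    ops = 0
    out = [0] * (len(a) + len(b))
    for i, da in enumerate(a):
        carry = 0
        for j, db in enumerate(b):
            ops += 1
            t = out[i + j] + da * db + carry
            out[i + j] = t % 10
            carry = t // 10
        out[i + len(b)] += carry
    return out, ops

# 123 * 456 = 56088, using exactly 3 * 3 = 9 digit products.
digits, ops = grammar_school_multiply([3, 2, 1], [6, 5, 4])
print(sum(d * 10**k for k, d in enumerate(digits)), ops)  # → 56088 9
```

Squaring an n-digit number this way costs n^2 digit products, and the n - 2 squarings of the LLT then give the O(n^3) total discussed above.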