Mersenne "like" processing
Sorry for the lack of background knowledge... I have a few questions that I would really appreciate some help with.
A - Is a prime of the form [B]2^n - c, with c > 1 and c odd (c ≡ 1 mod 2)[/B], verifiable through the same methods that Mersenne numbers are?

B - Can I speed up PRP verification of the formula above with programs like CUDALucas for Lucas-Lehmer tests or equivalent? (Maybe not CUDALucas, since I can't define constants c != 1.)

C - Are PRP test processing times in general linear in bit length? I got average times oscillating from 500 ms to 300 ms per bit (LLR), but if this really is linear, I won't be able to finish the job for several months.
By [B]PRP[/B] in "Can I speed up [B]PRP[/B] verification" and "Are [B]PRP[/B] test processing times", I meant to say [B]Probable Prime[/B].
[QUOTE=ricardofdg;374521]A - Is a prime in the form [B]2^n-c | c > 1 and 1 = c mod 2[/B] verifiable through same methods that Mersenne numbers are?[/QUOTE]
Only Mersenne numbers can be tested by the LL test. However, a PRP test that's pretty similar on the surface (i.e. run some iterations, each taking about as long as the others, with the number of iterations about the bit length of the number you're testing; I think the algorithm in question is [URL="http://en.wikipedia.org/wiki/Miller-Rabin_primality_test"]Miller-Rabin[/URL]) can be run on numbers like this by various programs (LLR, PFGW, and Prime95 could all do it, IIRC). Note that the result of this (for a prime number) will only be that it's [I]probably[/I] prime (or, for a composite, that it's certainly composite). A definitive proof would usually require something like ECPP. Based on your estimate of months for a PRP test (and assuming no special form, like a known factorization of N-1 or N+1, where N is the number you're testing), there's no way to prove such a number prime right now.

[QUOTE=ricardofdg;374521]B - Can I speed up PRP verification of the formula above with programs like Cudalucas for L.Lemer Tests or Equivalent? (maybe not Cudalucas one time that I can't define constants c != 1)[/QUOTE]
There is a version of LLR for CUDA. You can get it in the packages at [url]http://www.primegrid.com/forum_thread.php?id=1215[/url]. If you have a fast CPU, however, it'd probably take even longer to do it on your GPU. (GPUs excel at trial factoring; not so much at LL/LLR.)

[QUOTE=ricardofdg;374521]C - Are PRP tests processing time in general linear to bitlength? I got average time oscilating from 500ms to 300ms per bit (LLR) but if this really goes linear, i won't be able to finish the job before several months.[/QUOTE]
It is similar to Mersenne numbers: if you double the n, you roughly double both the time per iteration and the number of iterations, so it's proportional to bitlength^2 (quadratic), not bitlength^1 (linear). The time per iteration is constant for the duration of your test. E.g. if you need to do 30 million iterations at 300 ms each, it will take around (30 million * 300 ms =) 15 weeks.
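As a minimal sketch of what such a probable-prime test computes, here is a base-3 Fermat test for N = 2^n - c (a slightly weaker cousin of the Miller-Rabin test mentioned above; the function name is my own). Real programs like LLR, PFGW, and Prime95 use FFT-based multiplication and the special form of N to make each of the ~n squarings fast; Python's built-in pow() here only illustrates the logic, not the speed:

```python
def is_prp(n, c, base=3):
    """Base-`base` Fermat PRP test for N = 2^n - c (c odd, c > 1).

    Returns True for "probably prime", False for "certainly composite".
    pow() performs the ~n modular squarings of N-1's binary expansion;
    optimized testers replace this with FFT arithmetic exploiting N's form.
    """
    N = (1 << n) - c
    return pow(base, N - 1, N) == 1
```

For example, n = 5, c = 3 gives N = 29 (prime, so the test reports probable-prime), while n = 7, c = 3 gives N = 125 = 5^3, which the test correctly rejects.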
[QUOTE=Mini-Geek;374528]
It is similar to Mersenne numbers: if you double the n, you roughly double the time per iteration [/QUOTE] This is false in general, although it may be true for some specific packages and some specific numbers. To achieve roughly linear time per iteration, the modular multiplications must be done by convolution methods that automatically do the modular reduction (such as the DWT), or that allow the reduction to be done in linear time (which is true for 2^n +/- c for small c). Generic PRP tests run in time O(log^3 N), where N is the number being tested.
[QUOTE=R.D. Silverman;374577]This is false, in general, although it may be true for some specific packages
and some specific numbers. To achieve roughly linear time per iteration, the modular multiplications must be done by convolution methods that automatically do the modular reduction (such as DWT), or which allow the reduction to be done in linear time. (which is true for 2^n +/- c for small c). Generic PRP tests run in time O(log^3 N) where N is the number being tested[/QUOTE] One can use the same convolution-based modmul approach for generic moduli - the difference lies in whether one must zero-pad the inputs (==> 2x the transform length) and whether the modular reduction can be done in some convenient fashion, or is "hard". Even in the latter worst case, one precomputes the inverse of the (fixed) modulus and thereby turns the divide into another convolution-based multiply. Thus the asymptotics are the same, but the constant of proportionality can vary by roughly a factor of 10 across the hardness-of-modmul spectrum (DWT-based implicit mod at the low end, divide-via-inverse-multiply-based mod at the high).
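To illustrate the "reduction in linear time for 2^n ± c" point above: since 2^n ≡ c (mod 2^n - c), the high bits of a product can be folded down with a shift, a mask, and one small multiply, with no division at all. A hedged Python sketch (the function name is my own, and it assumes c is small, say c < 2^(n-1)):

```python
def reduce_special(x, n, c):
    """Reduce x mod N = 2^n - c without division, assuming small odd c.

    Each fold replaces x = h*2^n + l with c*h + l, i.e. subtracts h*N,
    so the residue mod N is preserved while x shrinks by at least N.
    """
    N = (1 << n) - c
    mask = (1 << n) - 1
    while x >> n:                         # while any bits remain above bit n-1
        x = (x & mask) + c * (x >> n)     # fold high part down: 2^n -> c
    if x >= N:                            # x < 2^n = N + c, so one correction suffices
        x -= N
    return x
```

This is the kind of shift-and-add reduction that lets the special-form tests keep each iteration near-linear in the operand size, versus a generic modulus where the reduction itself costs a full multiply.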
[QUOTE=ewmayer;374631]One can use the same convolution-based-modmul approach for generic moduli - the difference lies in whether one must zero-pad the inputs (==> 2x the transform length) and whether the modular reduction can be done in some convenient fashion, or is "hard". Even in the latter worst case, one precomputes the inverse of the (fixed) modulus and thereby turns divide into another convo-based multiply. Thus the asymptotics are the same, but the constant of proportionality can vary by roughly a factor of 10 across the hardness-of-modmul spectrum. (DWT-based implicit-mod at the low end, divide-via-inverse-mul-based mod at the high.)[/QUOTE]
Yes. In fact, I implemented a 'division by convolution' method when I wrote "Parallel Polynomial Arithmetic over Finite Rings" back in the '90s. Aho, Hopcroft & Ullman's book on algorithms gives a terrific exposition. [A terrific book in general; I learned a lot from it.] However, it is horribly slow compared to multiplication. At least my implementation was. [Perhaps a lousy implementation??]
For N-bit operands, N-bit general modular multiplication needs about three N*N->2N-bit multiplies if a generalized reciprocal is already computed; see the functions ap_recip and ap_mul at the bottom of [url="http://sourceforge.net/p/msieve/code/23/tree/trunk/common/ap.c"]here[/url]. This was used with FHT multiplies in Msieve's NFS square root until I switched to using GMP.
[QUOTE=jasonp;375290]For N-bit operands, N-bit general modular multiplication needs about three N*N->2N-bit multiplies if a generalized reciprocal is already computed; see the functions ap_recip and ap_mul at the bottom of [url="http://sourceforge.net/p/msieve/code/23/tree/trunk/common/ap.c"]here[/url]. This was used with FHT multiplies in Msieve's NFS square root until I switched to using GMP.[/QUOTE]
This is fine in applications where the modulus is fixed. Computing N mod p_i for i = 0, 1, 2, ... requires computing a new inverse each and every time.
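The precomputed-reciprocal idea under discussion can be sketched as a Barrett-style reduction (the shape of ap_recip/ap_mul, though the names and details below are my own, and msieve does this with big-integer FFT multiplies rather than Python ints). The one true division is paid once per modulus, which is exactly why it amortizes for a fixed modulus but not when the modulus changes on every reduction:

```python
def barrett_setup(m):
    """Precompute the scaled reciprocal mu = floor(4^k / m), k = bits(m).

    This is the single expensive division; it is reused for every
    subsequent reduction mod the fixed modulus m.
    """
    k = m.bit_length()
    return k, (1 << (2 * k)) // m

def barrett_reduce(x, m, k, mu):
    """Return x mod m for 0 <= x < m*m using multiplies instead of a divide."""
    q = ((x >> (k - 1)) * mu) >> (k + 1)   # q approximates floor(x/m), short by at most 2
    r = x - q * m
    while r >= m:                          # at most two cheap corrections
        r -= m
    return r
```

A modmul then costs the forward multiply plus the two multiplies inside the reduction, matching the "about three N*N->2N-bit multiplies" count above; for N mod p_i with varying p_i, barrett_setup would have to be rerun for every p_i, losing the amortization.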