#1
Mar 2017
2×3×5 Posts
What is the most efficient sequence of tests to determine the primality of a random number of large magnitude, on the order of 2^512?
Clarifying that question: what are the efficient stages of analysis? I said "random number," so our n is very, very likely composite; we can estimate its chance of being prime as only 1 in ln(n) ≈ 1 in 354. So it's not efficient to jump straight to a very computationally expensive AKS test. Logically, we should start with some trial divisions! Check 2, 3, 5, 7, 11, etc., up to some limit we can empirically determine is best for our bigint math speed; it might be something like 50 or 100 tests. We'll likely find a factor in these divisions (remember, it's a random number, not something like an RSA modulus, so a small factor is likely!).

If the trial divisions fail to find any small factors, we need to move to stronger tests. The next step seems to be a Miller-Rabin test. This gives a VERY high probability of correct detection (Miller-Rabin becomes more and more accurate for higher n, and at 2^512 it is nearly perfect). And if Miller-Rabin passes, maybe verify with a second round of Miller-Rabin, then (slow) AKS to prove it.

Is this strategy the most time-efficient way to test a single number? There is such a huge computational gulf between fast trial divisions and slow Miller-Rabin, which needs approximately 512 bigint modular multiplies. A Fermat test requires just as much computation as MR, but is less powerful. Is there some other test in between trial factoring and MR in computational cost for screening out (deterministically or probabilistically) candidates? After googling, and reading Pomerance's prime book, it doesn't really seem so. But that's why I ask you guys, my fellow computational number theory fanatics!
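For concreteness, the staged pipeline described above might look like this as a stdlib-only Python sketch. The trial-division limit of 1000 and the fixed Miller-Rabin bases (2, 3, 5, 7, 11) are illustrative choices, not tuned values, and the sieve is rebuilt on every call purely for brevity.

```python
def small_primes(limit):
    """Primes below limit via a simple sieve, for the trial-division stage."""
    sieve = bytearray([1]) * limit
    sieve[0:2] = b"\x00\x00"
    for p in range(2, int(limit ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p::p] = bytearray(len(sieve[p * p::p]))
    return [i for i in range(limit) if sieve[i]]

def miller_rabin(n, base):
    """One strong-probable-prime (Miller-Rabin) round to the given base."""
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    x = pow(base, d, n)
    if x in (1, n - 1):
        return True
    for _ in range(s - 1):
        x = x * x % n
        if x == n - 1:
            return True
    return False

def is_probable_prime(n, trial_limit=1000, bases=(2, 3, 5, 7, 11)):
    """Stage 1: trial division by small primes; stage 2: a few M-R rounds."""
    if n < 2:
        return False
    for p in small_primes(trial_limit):
        if n % p == 0:
            return n == p
        if p * p > n:
            return True  # fully trial-divided, so n is prime
    return all(miller_rabin(n, b) for b in bases)
```

With these fixed bases the answer is provably correct for n below about 2.15×10^12; for 512-bit inputs it is probabilistic, exactly as discussed above.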
#2
∂²ω=0
Sep 2002
República de California
10110101111111₂ Posts
512 bits is minuscule by the standards of modern general-prime testing. Sure, go ahead and do some small-primes divisibility tests, but I'd probably limit that to (say) primes < 1000; then, if that fails to turn up a factor, do a quick base-2 or base-3 Fermat-PRP test. If the latter indicates 'possibly prime', run ECPP.
There is never a good reason to use AKS as a practical primality test, as Wikipedia notes (italics theirs, not mine): Quote:
Again, this is only if you care about the trivial CPU-cycle savings for such a small input. If you are more interested in simply determining primality in minimal elapsed time, just feed your number to e.g. PARI/GP's isprime function. I tried the latter on a couple hundred 512-bit odd-number inputs: more or less instantaneous when the input is composite, runtime on the order of 1 second for a prime input.

Last fiddled with by ewmayer on 2017-03-25 at 05:40
#3
Mar 2017
2×3×5 Posts
Thanks, ewmayer!
I'm not really concerned with the speed of the final "for sure" test, be it ECPP or AKS, since the vast majority of time is spent on the initial filtering tests. You're right, though, that I should use ECPP, so thanks very much for pointing that out!

Evaluation speed does concern me, since my application works through hundreds of millions of these 512-bit values (all independent, so no sieving/sorting is possible) and I am identifying the primes. It's an experiment I'm playing with for making a one-way trapdoor function (the details are irrelevant to this prime-testing question, though).

So my question still stands: are there any effective intermediate tests I can do between the cheap but rapidly ineffectual small-divisor tests and the very expensive but effectively definitive Miller-Rabin?

Last fiddled with by mathPuzzles on 2017-03-25 at 06:09
#4
Basketry That Evening!
"Bunslow the Bold"
Jun 2011
40<A<43 -89<O<-88
3×29×83 Posts
Isn't 512 bits well within the range of the APR-CL test? Isn't ECPP only for thousands of digits?
And also, aren't there other tests between Fermat and full prime testing? E.g. a round or two of Miller-Rabin, with perhaps a dash of BPSW?

Edit: "Maple's isprime function,[11] Mathematica's PrimeQ function,[12] PARI/GP's isprime and ispseudoprime functions,[13] and SageMath's is_pseudoprime function[14] all use a combination of a Miller-Rabin (Fermat strong probable prime) test and a Lucas test. Maxima's primep function uses such a test for numbers greater than 341550071728321.[15]"

Last fiddled with by Dubslow on 2017-03-25 at 06:37
#5
Bamboozled!
"𒉺𒌌𒇷𒆷𒀭"
May 2003
Down not across
2²×5×7²×11 Posts
Quote:
An approach you should follow is to time how long a division takes and how long an M-R test requires. That will give you an idea of how many small primes should be tried before moving on to M-R.

Last fiddled with by xilman on 2017-03-25 at 08:31
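A rough Python sketch of that timing experiment follows. The divisor 997, the iteration counts, and the random 512-bit sample are all arbitrary; a base-2 modular exponentiation stands in for one M-R round, and a real tuning pass would use your actual bigint library.

```python
import random
import timeit

# An arbitrary random odd 512-bit integer (top and bottom bits forced to 1).
n = random.getrandbits(512) | (1 << 511) | 1

# Time one small trial division vs. one M-R-sized modular exponentiation.
t_div = timeit.timeit(lambda: n % 997, number=10_000) / 10_000
t_mr = timeit.timeit(lambda: pow(2, n - 1, n), number=10) / 10

# Keep adding trial divisions only while their total cost stays well below
# one M-R round; this ratio is a first-order guide, not a tuned cutoff.
print(f"one division: {t_div:.2e}s  one M-R round: {t_mr:.2e}s  "
      f"ratio ~{t_mr / t_div:.0f}")
```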
#6
Mar 2017
2×3×5 Posts
To give a sense of speed: a trial division on my PC takes 0.98 µs. A single Miller-Rabin test takes 11,279 µs. So even after doing, say, 200 trial divisions, taking about 200 µs, the MR test still takes over 50 times longer.

I did experiment with a few rounds of Fermat factoring (not a Fermat primality test, but a factoring loop). That is much faster than MR, but has such a poor chance of finding a factorization that it's a net efficiency loss. It's no surprise, but it was my only idea for a possible test in between trial division and MR. Hence my question here: is there any primality screening test faster than Miller-Rabin, other than trial division? It seems as if there is not.
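For reference, the Fermat-factoring loop mentioned above can be sketched like this in Python (the step cap is an illustrative choice). As observed, it only succeeds quickly when n has two divisors near √n, which a random 512-bit number essentially never does:

```python
import math

def fermat_factor(n, max_steps=10_000):
    """Try to write odd n as a^2 - b^2 = (a - b)(a + b).
    Fast only when n has two divisors close to sqrt(n)."""
    a = math.isqrt(n)
    if a * a < n:
        a += 1
    for _ in range(max_steps):
        b2 = a * a - n
        b = math.isqrt(b2)
        if b * b == b2:
            return a - b, a + b
        a += 1
    return None  # gave up; for a random n this is the usual outcome
```

For example, 5959 = 59 × 101 falls out in three steps, while 3000009 = 3 × 1000003 (divisors far from √n) exhausts a small step budget, which is the net-loss behavior described above.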
#7
"Forget I exist"
Jul 2009
Dumbassville
2⁶×131 Posts
Quote:
#8
"Dana Jacobsen"
Feb 2011
Bangkok, TH
2²·227 Posts
Quote:
- 0.0002s BPSW
- 0.10s ECPP (ecpp-dj)
- 0.15s APR-CL (APRCL 1.1)
- 0.24s APR-CL (Pari 2.7.0)
- 0.45s ECPP (Primo)
- ~30 hours fast AKS
- >8 years typical AKS from the internet

ECPP runs fine at small sizes. Primo has a fair bit of overhead (e.g. its GUI) that makes it not as competitive until 300-500 digits, but that means, say, 1.5 seconds instead of 1 second at 250 digits. My ECPP is slightly faster than current APR-CL implementations at smaller sizes. Primo is *much* better once into thousands of digits. PFGW / gwnum hasn't hit its stride at 512 bits, and last I measured it was slower than GMP at this size (the crossover is somewhere in 1500-4000 digits).

Quote:
- Fermat
- Euler-Jacobi
- Euler-Plumb (defined only for base 2)
- Miller-Rabin

Each of these, for a given base, is strictly stronger than the previous one: the pseudoprimes for each test are a subset of those for the test above it. The "Euler-Plumb" test is about 5% faster than the others. In GMP, the Fermat test isn't really any faster than the others, so there really isn't any reason I know of to run it first at this size. Once we're into larger sizes, where gwnum is the fastest library, Fermat is the obvious choice, since it's the only optimized implementation.

You could do multiple M-R tests, or go with BPSW like most math programs (and, increasingly, many programming languages' math modules). It costs less than 3 M-R tests, and is nicely split so most composites are found by the M-R stage. In fact, since it includes an M-R test, there is really no reason to do a Fermat test first (again assuming you're not using PFGW/gwnum, where you have no other sane option).

There's Khashin's Frobenius-style test, Paul Underwood's Frobenius-style test, or quadratic Frobenius tests with random or intelligent parameter choices. There are various others that I haven't seen implementations of (e.g. Grantham's QFT and various follow-ons like EQFT, SQFT, MQFT). The first few have a cost not dissimilar to BPSW but cost more for composites (they're not split into two stages like BPSW).

Last fiddled with by danaj on 2017-03-25 at 16:27. Reason: clarify PFGW at 512-bits
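To illustrate the two ends of that chain, here is a Python sketch of the base-2 Fermat and Miller-Rabin (strong probable prime) tests; 341 = 11 × 31 is the classic composite that passes the Fermat test but fails the strong test, demonstrating the subset relation:

```python
def fermat_prp(n, a=2):
    """Fermat probable-prime test to base a (the weakest test in the chain)."""
    return pow(a, n - 1, n) == 1

def strong_prp(n, a=2):
    """Miller-Rabin / strong probable-prime test to the same base (odd n > 2)."""
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    x = pow(a, d, n)
    if x in (1, n - 1):
        return True
    for _ in range(s - 1):
        x = x * x % n
        if x == n - 1:
            return True
    return False
```

Every strong pseudoprime to base a is also a Fermat pseudoprime to base a, but not conversely, which is why the pseudoprime counts shrink as you move down the list.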
#9
"Rashid Naimi"
Oct 2015
Remote to Here/There
2,063 Posts
Quote:
For small integers it is normal for trial division to be faster than many other methods. But it slows down much more rapidly than many other tests as the numbers get bigger. Also, there is an element of luck involved with any factoring method: some numbers might factor instantly, while others of the same approximate size might take hours.
#10
"Dana Jacobsen"
Feb 2011
Bangkok, TH
1110001100₂ Posts
AKS is never the right answer unless you're writing a paper mentioning asymptotic complexity. It's tangential, but it comes up in forums a lot. APR-CL and ECPP are usually the right practical answer if you're not looking at special forms or numbers under 128 bits. You may be fine with BPSW, or BPSW plus some extra tests, rather than a proof. Or not.

The most efficient sequence will differ greatly based on the input size. In your case you have a size (about 512 bits) already chosen, so we can skip the many differences. The best sequence will also differ based on your library: what's fastest in C+GMP may not be exactly the same if your implementation uses Java's BigInteger library.

Since you have lots of inputs, it may be worthwhile looking for some way to reduce them, e.g. sieving if they're in sequence. If you're doing something like trying to generate random primes, there's literature on faster ways to do this. It might be possible to do a form of construction that doesn't produce results with small factors, or, barring that, you can do it such that a few native modulos can be done before you even construct the candidate.

Assuming we just want an "is_prob_prime" function for ~512 bits, what should we do? I happen to have written such a function. You could also look at Pari's choices, though they're in need of updating (ispseudoprime has a lot of crufty code). I do pretests followed by BPSW. BPSW has a number of desirable properties: it's strong (even intentionally crippled versions of the test (e.g. Chen/Greene) have no known counterexamples), it's fast for its strength, and it starts with a base-2 M-R test, so almost all composites take even less time. I use the extra-strong Lucas test, as it's fastest as well as having fewer pseudoprimes. Pari uses a very slightly weaker version of the same.

For thoughts on pretests, see Menezes (section 4.45) and Park (ISPEC 2005). Both discuss how the optimal trial-division limit relates to the speed of the next test (e.g. the M-R test in my case). Implementation changes things a lot here. It turns out GMP's gcd function is really efficient for this purpose, much better than looping over divisible_ui_p. So my implementation at 512 bits is:

1. mpz_even_p(n)
2. mpz_gcd_ui with two 64-bit ints covering 3-53 and 59-101 (or 3-29 if ui is 32 bits)
3. mpz_gcd with a premade primorial of odd primes to 997
4. mpz_gcd with a premade primorial of primes under 10000 divided by the previous primorial

The second gcd is done for sizes over 300 bits, and I do a third one for primes up to 40k for sizes over 700 bits. After 1600 bits I do more trial division, though without a primorial; larger sizes use a remainder tree. Your tuning points may vary. Since part of the point is that we make the primorials once and amortize that cost (of course we use an efficient method to construct them), we don't really get to fine-tune the size much. For your purpose, knowing it will be used millions of times so construction and memory are not a big deal, if you use GMP it might be worthwhile making a larger primorial and using that.

For truly random inputs it isn't worth doing, but if you did want proofs, after the pretests you could check whether the number is in LLR or Proth form (k · 2^n ± 1), both of which have fast proofs.

Last fiddled with by danaj on 2017-03-25 at 16:31
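A stdlib Python sketch of the gcd-with-primorial pretest: step 3's mpz_gcd against a premade primorial maps onto math.gcd here. The 997 bound follows the post; everything else (function names, the single-primorial simplification) is illustrative.

```python
import math

def primorial_odd(limit):
    """Product of the odd primes up to limit; built once, then reused."""
    sieve = bytearray([1]) * (limit + 1)
    prod = 1
    for p in range(3, limit + 1, 2):
        if sieve[p]:
            prod *= p
            sieve[p * p::p] = bytearray(len(sieve[p * p::p]))
    return prod

# Amortized, like the premade primorial in step 3 above.
P997 = primorial_odd(997)

def has_small_factor(n):
    """True if n is even or shares a factor with the odd primorial."""
    return n % 2 == 0 or math.gcd(n, P997) > 1
```

One gcd against the big product replaces a few hundred individual divisibility tests, which is the whole appeal of the trick.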
#11
"Dana Jacobsen"
Feb 2011
Bangkok, TH
1110001100₂ Posts
Quote:
You're pretty much left with trial division. You want to optimize it, of course, and there are some ways to consider, e.g.:

- For small factors, do a single bigint modulo by a native-size product of primes, then do native modulos by the individual primes. GMP will already try to do this for some of its functions.
- Use gcd if your library has an efficient implementation. It's just trial division, but it lets you use all the effort the library authors put into making it fast.
- Remainder trees, aka a tree sieve. Better used for factoring, where we want the divisors; for primality, I believe we could get this more easily and likely more efficiently just by using a proper vector product routine and gcd.

As mentioned in another post, once past ~2000 digits the gwnum library is faster and everything changes. A lot of the people on this forum work on numbers much larger than this, so that changes things a lot. At that size you're typically using PFGW's Fermat test, which at, say, 100k digits is much faster than anything GMP has.

[1] Pseudoprimes less than 2^64:

Code:
118,968,378  Fermat base 2
 63,912,692  Euler base 2 (aka Euler-Jacobi)
 48,236,911  Euler-Plumb (Colin Plumb Euler criterion)
 31,894,014  Miller-Rabin base 2
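The first bullet above (one bigint modulo by a native-size product, then cheap native modulos) might be sketched like this in Python; the prime range 3-53 is chosen because its product still fits in one unsigned 64-bit word, the same packing mentioned earlier in the thread:

```python
# Odd primes 3..53: their product stays below 2**64.
SMALL = [3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41, 43, 47, 53]
PROD = 1
for p in SMALL:
    PROD *= p

def small_prime_factor(n):
    """Return a prime from SMALL that properly divides n, else None."""
    r = n % PROD  # the single bigint reduction
    for p in SMALL:
        if r % p == 0 and n != p:  # native-size modulos from here on
            return p
    return None
```

Since every p in SMALL divides PROD, p divides n exactly when p divides n mod PROD, so the expensive bigint work happens once per candidate.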
Similar Threads

| Thread | Thread Starter | Forum | Replies | Last Post |
| Miller-Rabin test questions | firejuggler | Miscellaneous Math | 6 | 2011-12-22 05:57 |
| Number of Rabin-Miller Non-Witnesses | ATH | Math | 0 | 2011-07-30 16:42 |
| Faster LL tests, less error checking? | Prime95 | Software | 68 | 2010-12-31 00:06 |
| Miller-Rabin Strong Probable Prime Test (SPRP) | fenderbender | Miscellaneous Math | 22 | 2010-11-11 01:04 |
| Why no Rabin-Miller Tests ? | Axel Fox | Math | 13 | 2004-06-28 16:07 |