[QUOTE=3.14159;223565]901-10000 digits: PARI's isprime function.[/QUOTE]
This runs an APR-CL test when N+/-1 cannot prove primality. This would probably take years for a 10000 digit number. If you really want to prove "random" 10000 digit numbers prime, Primo is faster (AFAIK, nobody has ever proven a 10000 digit number prime with PARI).
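For context, the N-1 side of what PARI tries first is Pocklington's theorem: if a fully factored part F of N-1 exceeds sqrt(N) and a suitable witness exists, N is proven prime. A minimal Python sketch (function name and toy example are mine, not PARI's actual machinery, which is far more general):

```python
from math import gcd, isqrt

def pocklington(N, F, qs, a=2):
    """Pocklington N-1 test (sketch). F divides N-1 with F > sqrt(N),
    qs = distinct prime factors of F. Returns True if the single
    witness a proves N prime; False means this witness failed, not
    that N is composite."""
    assert (N - 1) % F == 0 and F > isqrt(N)
    if pow(a, N - 1, N) != 1:
        return False
    # For each prime q | F, require gcd(a^((N-1)/q) - 1, N) = 1.
    return all(gcd(pow(a, (N - 1) // q, N) - 1, N) == 1 for q in qs)

# 65537 = 2^16 + 1: here N-1 = 2^16 is fully factored, so the proof is easy.
print(pocklington(65537, 65536, [2], a=3))  # True
```

For "random" numbers, N-1 is usually hard to factor far enough, which is exactly when PARI falls back to the slow APR-CL path.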
[QUOTE=10metreh]This runs an APR-CL test when N+/-1 cannot prove primality. This would probably take years for a 10000 digit number. If you really want to prove "random" 10000 digit numbers prime, Primo is faster (AFAIK, nobody has ever proven a 10000 digit number prime with PARI).
[/QUOTE] Which is why I use Miller-Rabin instead for randoms; it only takes a few minutes. And I can't generate 10k-digit *randoms* anyway: they're usually special-form primes, which are easily proven or disproven by the isprime function.
[QUOTE=3.14159;223578]Which is why I use Miller-Rabin instead. It only takes a few minutes. And, I cannot generate 10k-digit *randoms*. They're often special-form primes, which are easily proven or disproven.[/QUOTE]
Ah, so your list of programs to use is for special-form numbers, not *any* number of the specified size.
[QUOTE=10metreh]Ah, so your list of programs to use is for special-form numbers, not *any* number of the specified size.
[/QUOTE] Yes, they are all special-form primes. It would be a headache trying to prove primality of random 10k-digit numbers. I think I posted a 1429-digit random earlier.
[quote=3.14159;223574]You have to set up a file just for the sieving. I think srsieve's programmer warns that NewPGen maybe/is faster than it on the readme.txt file for the program. (I think this was solely for srsieve.)
I'll take a closer look at the readme.[/quote] If so, that warning is quite old and outdated; it would only apply to long-gone versions of srsieve. I'm not sure what you mean by "have to set up a file just for the sieving"; usually I sieve with a command like this:

[CODE]srsieve -G -n 25000 -N 100000 -P 1e6 9940*1999^n+1[/CODE]

This sieves 9940*1999^n+1 for n=25K-100K up to p=1e6 and outputs a NewPGen-format file. (-G produces one file no matter how many sequences you sieve at once, whereas -g produces one for each sequence a la NewPGen.) Once that's done, I would run the output file (which would be named t16_b1999.prp if I remember correctly) through sr1sieve as follows:

[CODE]sr1sieve -i t16_b1999.prp -o t16_b1999.prp -P 10e12[/CODE]

In this example, sr1sieve picks up where the file last left off (here, at p=1e6) and continues up to the specified -P depth (here 10e12--not necessarily optimal in your case, just an arbitrary ballpark figure for this example). You can monitor the sec/factor rate as it's output along the way; when that rate is about the same as the time it takes to PRP-test a candidate 75% or so of the way through the range, you're done and can press Ctrl-C to stop the sieve right there and then.

The reason for using two separate programs is that sr1sieve cannot sieve from the ground up (i.e., from p=2); more specifically, p must be greater than both k and b. I usually sieve to a nominal depth like p=1e6 with srsieve (which takes only seconds, maybe a few minutes depending on the sieve), then switch to sr1sieve, since it's around that point that the latter becomes clearly faster.

Note also that sr1sieve can only do one sequence at a time, while srsieve can do multiple sequences. Since you're used to NewPGen, sieving only one sequence at a time is nothing new, but once you start doing more than two sequences it's faster to sieve them together than separately. (For exactly two sequences it's about the same.)

For multiple sequences, you start with srsieve as usual (just keep appending sequences to the end of the command line, or if there are a lot of them, put them in a text file one per line and give the filename on the command line instead), then use sr2sieve, which is operated a little differently from sr1sieve but does pretty much the same thing in the end. If you're sieving one sequence at a time, you alternatively have the option of starting the sieve with NewPGen instead of srsieve (at a nominal depth like p=1e6, NewPGen's slower speed won't make a significant difference), then continuing with sr1sieve as before.

If you need any help running the sr*sieve applications, feel free to drop me a line--they can be a little tricky at first, but once you get the hang of it, it's dead easy. :smile:
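For intuition, here is a toy Python sketch of the sieving idea these programs implement: for each small prime p, strike out every n where p divides k*b^n+1, leaving only candidates that need PRP testing. This is not srsieve's actual algorithm (the real tools use much cleverer mathematics and run to depths like 10e12); the function names and the tiny range are mine:

```python
def small_primes(limit):
    """Primes up to limit via a basic sieve of Eratosthenes."""
    is_p = [True] * (limit + 1)
    is_p[0:2] = [False, False]
    for i in range(2, int(limit**0.5) + 1):
        if is_p[i]:
            is_p[i*i::i] = [False] * len(is_p[i*i::i])
    return [i for i, f in enumerate(is_p) if f]

def sieve_kbn(k, b, n_lo, n_hi, p_max):
    """Keep only n in [n_lo, n_hi] for which no prime p <= p_max
    divides k*b^n + 1."""
    survivors = set(range(n_lo, n_hi + 1))
    for p in small_primes(p_max):
        if b % p == 0:
            continue
        r = k * pow(b, n_lo, p) % p  # k*b^n mod p, updated incrementally
        for n in range(n_lo, n_hi + 1):
            # r == p-1 means k*b^n = -1 (mod p), i.e. p | k*b^n + 1;
            # keep n if the candidate *is* that small prime itself.
            if r == p - 1 and n in survivors and k * b**n + 1 != p:
                survivors.discard(n)
            r = r * b % p
    return sorted(survivors)

# Tiny toy run (nothing like srsieve's scale): sieve 9940*1999^n+1, n=1..50.
left = sieve_kbn(9940, 1999, 1, 50, 1000)
print(len(left), "candidates survive")
```

The payoff is the same as with the real sievers: every n removed here is one expensive PRP test you never have to run.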
For randoms, I only prove primality for small primes, using Miller-Rabin (10-30 iterations), trial division, or APRT-CL.
P.S.: I say 10-30 iterations because the aim is to [B]prove[/B] the number prime rather than leave it as a PRP. At 30 iterations, the odds of a composite slipping through are at most 1 in 4^30 = 1152921504606846976. (For the very small primes, this is as good as a proof.) P.P.S.: Bumped into a prime: 3704825883783142984444157877899665197032083444319256075703418880001.
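As a sketch of what those iterations look like, here is a standard Miller-Rabin implementation in Python (not the exact code being run above; each round catches a composite with probability at least 3/4, which is where the 4^-30 bound comes from):

```python
import random

def miller_rabin(n, rounds=30):
    """Miller-Rabin test: a composite n passes one round with
    probability at most 1/4, so 30 rounds give a false-positive
    chance of at most 4**-30 = 1/1152921504606846976."""
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13):
        if n % p == 0:
            return n == p
    d, s = n - 1, 0
    while d % 2 == 0:        # write n-1 = 2^s * d with d odd
        d //= 2
        s += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False     # a is a witness: n is definitely composite
    return True              # probably prime

print(miller_rabin(2**127 - 1))  # True: a Mersenne prime
print(4**30)                     # 1152921504606846976
```

A "True" here is always "probably prime"; only a "False" is a proof (of compositeness).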
[QUOTE=10metreh;223577]This runs an APR-CL test when N+/-1 cannot prove primality. This would probably take years for a 10000 digit number. If you really want to prove "random" 10000 digit numbers prime, Primo is faster (AFAIK, nobody has ever proven a 10000 digit number prime with PARI).[/QUOTE]
Yes, Pari is unsuitable for 10,000-digit primality proofs. But its APR-CL implementation is impressive: it stays faster than Primo's ECPP up to much larger sizes than theory might suggest. (Sorry, I don't have numbers handy, but I've tested this pretty carefully.)
[QUOTE=CRGreathouse]Yes, Pari is unsuitable for 10,000-digit primality proofs. But their APR-CL implementation is impressive: it beats Primo's ECPP much further than theory might suggest. (Sorry, don't have numbers handy, but I've tested this pretty carefully.)
[/QUOTE] Whenever you get the chance, post the data here.

Also, to the prime search: I figured there is a 4th personal record: largest prime found via an arithmetic progression. Wait... isn't every prime p>2 part of an arithmetic progression? In that case, the largest known prime in an arithmetic progression is 2[sup]43112609[/sup] - 1, which has 12978189 decimal digits. Can't wait for a 20-million-digit discovery!

3: 2(1) + 1
5: 2(2) + 1; 6(1) - 1
7: 2(3) + 1; 6(1) + 1; 4(1) + 3
11: 2(5) + 1; 6(2) - 1; 10(1) + 1; 9(1) + 2; 3(3) + 2; 4(2) + 3; etc.
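The digit count of 2[sup]43112609[/sup] - 1 is easy to check without writing out the 13-million-digit number itself:

```python
from math import log10, floor

# Digit count of 2^p - 1: floor(p * log10(2)) + 1. Subtracting 1 never
# changes the count, since 2^p is not a power of 10.
p = 43112609
print(floor(p * log10(2)) + 1)  # 12978189
```

The fractional part of p*log10(2) is nowhere near an integer here, so double precision is more than enough.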
[QUOTE=3.14159;223592]Wait.. Isn't every prime p>2 part of an arithmetic progression?[/QUOTE]
Good question, equivalent to Sloane's [url=http://oeis.org/classic/A084704]A084704[/url] being well-defined. I don't have an answer, though integral dx/log x diverges so heuristically there should be no problem. |
[QUOTE=CRGreathouse]Good question, equivalent to Sloane's A084704 being well-defined. I don't have an answer, though integral dx/log x diverges so heuristically there should be no problem.
[/QUOTE] Every odd prime is indeed part of an arithmetic progression: 2n + 1, and for p > 3 also 6n ± 1. It's in fact impossible for an odd prime [B]not[/B] to be in an arithmetic progression. Speaking of primes: a prime-numbered post in the thread: 197. Also the penultimate prime below 200.
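The 6n ± 1 claim is easy to sanity-check in a few lines of Python (the trial-division is_prime is mine and is only meant for small numbers):

```python
# Every prime p > 3 is congruent to 1 or 5 mod 6: any other residue
# would make p divisible by 2 or 3.
def is_prime(n):
    return n > 1 and all(n % d for d in range(2, int(n**0.5) + 1))

print(all(p % 6 in (1, 5) for p in range(5, 10000) if is_prime(p)))  # True
```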
[QUOTE=3.14159;223613]Every odd prime is indeed part of an arithmetic progression, either 2n + 1, or 6n ± 1. It's in fact impossible for an odd prime [B]not[/B] to be in an arithmetic progression.[/QUOTE]
Ah. I assumed you meant something nonobvious: that every odd prime was a member of a finite arithmetic progression of primes (with 3+ terms). Clearly every integer is in the arithmetic progression with common difference 1... |
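The nonobvious reading (every odd prime in a 3-term arithmetic progression of primes) can at least be probed empirically for small primes. A hypothetical brute-force search in Python (function name, d_max cutoff, and is_prime helper are mine):

```python
def is_prime(n):
    """Trial division; fine for small n only."""
    return n > 1 and all(n % d for d in range(2, int(n**0.5) + 1))

def three_term_ap(p, d_max=1000):
    """Find a 3-term arithmetic progression of primes containing the
    odd prime p (p may sit at any of the three positions), or None if
    no common difference up to d_max works."""
    for d in range(2, d_max + 1, 2):       # difference must be even for odd primes
        for start in (p - 2*d, p - d, p):  # p as last, middle, or first term
            terms = (start, start + d, start + 2*d)
            if start > 1 and all(is_prime(t) for t in terms):
                return terms
    return None

print(three_term_ap(5))   # (3, 5, 7)
```

Whether *every* prime admits such a progression is exactly the open-ended question about A084704 above; this only confirms individual cases.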