#1
Jul 2009
19 Posts
Hello,
I didn't really know where to put this question so I figured this board would be the best... Anyway, I am using NewPGen Version 2.82 and Jean Penne's LLR Version 3.7.1 to find prime numbers of the form k*b^n+1 where b = 2 and k is a constant number. I am sieving all n's from 2 to 10,000,000. Its a pretty large k (7 digits) so I'm pretty sure that no one else is testing it right now. After a day or so of sieving I decided to do the LLR tests for the small numbers until it takes about 10 seconds to test per number. Then I will sieve some more and so on. I know that when the FFT length goes up, the time taken to test goes up considerably, especially at the higher FFT lengths. So this brings me to the question. Is there some generic formula to calculate when the FFT length gets bigger so I can more efficiently sieve, test, sieve etc? If you need more info on my question just ask. SAB |
#2
A Sunny Moo
Aug 2007
USA (GMT-5)
186916 Posts
As I understand it, the logic behind FFT length selection is rather complex and is in fact not even determined by LLR itself, but rather by the underlying gwnum floating-point math library. The most foolproof way to determine what FFT will be used for a particular number is to try testing it with LLR; it outputs the FFT length chosen at the beginning of the test, so all you have to do to find the FFT is start the test, not necessarily finish it.
Note that despite the rather large jumps in testing times at the FFT boundaries, they don't actually make a huge difference in terms of optimal sieve depth; over a large n-range, things will average out such that testing time scales proportionately to the square of the n-range, regardless of where the FFT jumps fall. Some people like to calculate optimal sieve depths based on FFT jumps and break off ranges accordingly, but you don't really gain a huge amount of efficiency compared to breaking off at "round" numbers (say, multiples of n=100,000). Most of the big projects around here use the latter strategy since it's easier to manage.

BTW, LLR 3.7.1 is rather outdated by now; 3.8.1 is the latest version and is about 2-4% faster. (It may not sound like much, but it is a noticeable improvement.) Also, srsieve has supplanted NewPGen as the sieving program of choice for k*b^n±c numbers; it comes in three variants (srsieve, sr1sieve, and sr2sieve), which are much faster than NewPGen. For a single k like you're testing, I would use sr1sieve; it takes a NewPGen-format file as input, so you can stop NewPGen wherever you are and pick up right where you left off with sr1sieve. sr1sieve can be found at http://sites.google.com/site/sr1sieve/.
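The quadratic scaling mentioned above gives a quick back-of-the-envelope estimator: time one LLR test at a reference exponent, then scale every surviving candidate by (n/n0)². This is a rough sketch under that assumption alone (the function name is mine, and real timings also step at FFT boundaries rather than growing smoothly):

```python
def estimated_total_llr_seconds(candidates, n0, t0):
    """Estimate total LLR time for the surviving exponents, assuming a
    single timed test at exponent n0 took t0 seconds and that
    per-test time scales roughly as t0 * (n/n0)**2."""
    return sum(t0 * (n / n0) ** 2 for n in candidates)

# Example: one test at n=100,000 took 10 s; estimate three candidates.
est = estimated_total_llr_seconds([100000, 200000, 400000], 100000, 10.0)
print(est)  # 10 + 40 + 160 = 210.0 seconds
```

Doubling the exponent roughly quadruples the per-test time, which is why the high end of a 2-to-10,000,000 range dominates the total work.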
#4
3×59 Posts
Quote:
#5
Jul 2009
19 Posts
Thanks for your help. I downloaded the newer version of LLR and sr1sieve. Sr1sieve is way faster for fixed k! It's hard to tell exactly, but it appears to be about 5x to 10x faster, just judging by the rate at which it crosses off factors...
And the FFT links are helpful as well. SAB
#6
Just call me Henry
"David"
Sep 2007
Cambridge (GMT/BST)
2×3³×109 Posts
I am pretty certain that sr1sieve is multithreaded on Linux with -t; sr2sieve certainly is. There will be a small performance hit when running it multithreaded. It is also possible, on both Linux and Windows, to run multiple instances of sr1sieve sieving different ranges. If you do this, you will want them to output the factors found rather than the resulting sieve files, because it would be difficult to combine the resulting sieve files otherwise.
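Combining the factor output from several instances is easy to script. The sketch below assumes a NewPGen-style sieve file (a header line followed by "k n" pairs) and factor lines that mention each candidate as k*2^n+1; both format assumptions are mine, so check them against your actual files before relying on this:

```python
import re

def merge_factor_files(sieve_lines, factor_lines):
    """Drop from a NewPGen-style sieve file every n whose candidate
    appears (as k*2^n+1) in any sr1sieve factor-output line.

    Assumed formats (verify against your files):
      sieve file:  header line, then 'k n' per line
      factor file: lines containing e.g. '1000003 | 1234567*2^11+1'
    """
    factored = set()
    for line in factor_lines:
        m = re.search(r"\*2\^(\d+)\+1", line)
        if m:
            factored.add(int(m.group(1)))
    header, body = sieve_lines[0], sieve_lines[1:]
    kept = [ln for ln in body if int(ln.split()[1]) not in factored]
    return [header] + kept

sieve = ["1000000000:P:1:2:257", "1234567 10", "1234567 11", "1234567 12"]
factors = ["1000003 | 1234567*2^11+1"]
print(merge_factor_files(sieve, factors))
# ['1000000000:P:1:2:257', '1234567 10', '1234567 12']
```

Run this once over the concatenated factor files from all instances and you get a single cleaned sieve file to feed to LLR.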
#7
Jul 2009
19 Posts
This next question is similar, so I'll keep it in the same thread. Would it be faster to: sieve and then LLR, or sieve, do a probable primality test (like the Miller-Rabin test), and then LLR third?
And if the second were true, what test and program should I use? SAB
#8
Just call me Henry
"David"
Sep 2007
Cambridge (GMT/BST)
2·3³·109 Posts
With version 3.8.1, LLR is just as fast as a PRP (Miller-Rabin) test, so a separate PRP pass before LLR would gain you nothing.
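For k*2^n+1 with odd k < 2^n, the deterministic test in play is Proth's theorem: N is prime iff a^((N-1)/2) ≡ -1 (mod N) for some a with Jacobi symbol (a/N) = -1, so it costs essentially one modular exponentiation, the same as a PRP test. A toy sketch (this is not LLR's implementation, which uses gwnum FFT arithmetic; the witness search range is my own choice):

```python
def jacobi(a, n):
    """Jacobi symbol (a/n) for odd n > 0."""
    a %= n
    result = 1
    while a:
        while a % 2 == 0:
            a //= 2
            if n % 8 in (3, 5):
                result = -result
        a, n = n, a
        if a % 4 == 3 and n % 4 == 3:
            result = -result
        a %= n
    return result if n == 1 else 0

def proth_prime(k, n):
    """Proth's theorem: N = k*2**n + 1 (odd k < 2**n) is prime iff
    a**((N-1)//2) == -1 (mod N) for a witness a with (a/N) = -1."""
    N = k * 2**n + 1
    for a in range(3, 100):  # small witness search; arbitrary bound
        if jacobi(a, N) == -1:
            return pow(a, (N - 1) // 2, N) == N - 1
    return False  # no witness found in range; inconclusive

print(proth_prime(3, 5))  # N = 97, prime -> True
print(proth_prime(5, 2))  # N = 21 = 3*7 -> False
```

Since one such exponentiation decides primality outright, running Miller-Rabin first and this test afterwards would roughly double the work per candidate.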