2009-01-21, 10:58  #1  
May 2007
Kansas; USA
10100001101011_{2} Posts 
Optimal sieving of large n-ranges
Quote:
Ozzy, here is what I would recommend:
1. Begin sieving the entire range.
2. LLR/Phrot test one candidate at 70% of the n=50K-100K range, i.e. n=85K, and make note of the testing time.
3. Sieve until your sieving removal rate is the same as the test in #2.
4. LLR/Phrot test the n=50K-100K range, being sure to remove k's with primes from the entire n=50K-250K sieve file as you go.
5. Delete n=50K-100K from your sieve file and begin sieving the entire rest of the file.
6. LLR/Phrot test one candidate at 70% of the n=100K-150K range, i.e. n=135K, and make note of the testing time.
7. Sieve until your sieve removal rate is the same as the test in #6.
8. LLR/Phrot test the n=100K-150K range, being sure to remove k's with primes from the entire n=100K-250K sieve file as you go.
9. Repeat steps 5 thru 8 for the n=150K-200K and n=200K-250K ranges.

In other words, when sieving, always sieve the entire remaining part of the file and make sure that k's with primes are removed from the entire file. Because of the extremely long testing times of such a huge base, I highly recommend breaking it up into n=50K pieces like this. Normally, I'd just break off one piece, perhaps by doing n=50K-125K followed by n=125K-250K, but here the CPU time savings are significant enough to keep it in smaller pieces.

And finally, from a math/CPU perspective: the most CPU-efficient way to sieve and primality test is to run 3-5 prime tests, sieve the rest a teeny bit, run another 3-5 tests, sieve the rest a bit more, etc. ... you get the point. If there were an automated process that would switch back and forth between prime testing and sieving, that would be excellent. But since we have to have manual intervention, it's up to you how much you want to intervene and break off pieces. I don't know, maybe you'll want to go to n=25K pieces as you get to n>150K. It's totally up to you, but I think this should give you a clear understanding of the general way to optimally sieve a large n-range. 
Gary

Last fiddled with by gd_barnes on 2009-01-21 at 11:05  Reason: admin edit 
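The break-off rule in the steps above can be sketched in a few lines. This is a minimal sketch with assumed timings, not real LLR/sr2sieve output; the quadratic cost model and the helper names (`llr_time`, `should_stop_sieving`) are my own illustration of the rule "sieve until seconds-per-factor equals the test time at 70% of the chunk."

```python
# Sketch of Gary's break-off rule (all timings hypothetical).
# Stop sieving a chunk when removing one candidate by sieving costs as
# much as LLR-testing one candidate at the 70% point of that chunk.

def llr_time(n, t_ref, n_ref):
    """Rough LLR cost model: testing time grows with the square of n."""
    return t_ref * (n / n_ref) ** 2

def should_stop_sieving(secs_per_factor, chunk_lo, chunk_hi, t_ref, n_ref):
    """True once the sieve's seconds-per-factor reaches the test time
    at 70% of the chunk (e.g. n=85K for the 50K-100K chunk)."""
    n_70 = chunk_lo + 0.70 * (chunk_hi - chunk_lo)
    return secs_per_factor >= llr_time(n_70, t_ref, n_ref)

# Example: a test at n=50K takes 1000 s (assumed); the 70% point of the
# 50K-100K chunk is n=85K, where a test takes ~2890 s under this model.
print(should_stop_sieving(3000, 50_000, 100_000, 1000, 50_000))  # True
print(should_stop_sieving(2000, 50_000, 100_000, 1000, 50_000))  # False
```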

2009-01-21, 16:29  #2 
Apr 2008
Antwerp, Belgium
111001_{2} Posts 
Thanks for your explanation.
Maybe it would be interesting to start sieving again after each fft length jump... On this kind of a base, these increases in testing times can be quite painful, to be honest. At one fft length increase around n=49.5K, the testing times increased from about 995s to 1230s... ouch! If only there were some way to find out when they happen...

I'm testing this base on a dual-core machine, so it might be interesting to LLR test on one core and sieve on the other. If done properly, both would keep eliminating pairs at about the same speed. 
2009-01-22, 04:38  #3  
May 2007
Kansas; USA
3×3,449 Posts 
Quote:
Very astute analysis! Technically, I could have said the same thing but was trying to make things easier. If your main goal is to reduce CPU time, you would be correct to sieve, primality test, break off a range, etc. on each fftlen change.

One mathematical note there: if you are breaking off at every fftlen jump, you only need to test a candidate at 50% of the n-range (vs. 70%) between each jump to determine the sieve dropping rate to stop at. That's because the testing times increase in a straight line between the jumps. It's those jumps that give them their exponential increase (i.e. varying with the square of the n-value) over the long run.

To get to the fftlen jump points, you could do it by brute force by running many tests for a few secs. each. LLR will tell you what the fftlen is. An easy way to "traverse" down to where the jump is: let's say the actual jump is at n=10K and that the fftlen jumps from 6K to 7K. You suspect it is somewhere between n=9K and 12K, so: split the difference between where you think it is and test n=10500. Since the fftlen there is 7K, you go down by splitting the difference between n=10500 and 9K, i.e. n=9750. There, the fftlen is 6K, so you go up. Splitting the diff. between n=9750 and 10500, you have n=10125, etc., until you get as close as you want to the actual jump point. Doing it this way, you could nail every fftlen jump to within n=100 for a very large n-range such as yours with about 10-15 mins. of manual effort. To nail it down to the exact n for every jump might require 20-30 mins.

So if you desire almost perfectly optimal CPU time and have time to mess around with this, plus going back and forth between sieving and primality testing, then doing this is definitely the way to go.

Gary 
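The "traverse" described above is a plain bisection. Here is a minimal sketch of it; `fftlen_at` is a stand-in for running LLR for a few seconds and reading the FFT length it reports, and the jump point and FFT sizes in the toy model are assumptions for illustration.

```python
# Bisection sketch of the fftlen-jump search (toy model, not real LLR).

def fftlen_at(n):
    """Stand-in for a few-second LLR run; hypothetical jump at n=10000."""
    return 6144 if n < 10000 else 7168

def find_jump(lo, hi, tol=100):
    """Narrow [lo, hi] until it brackets the fftlen jump within `tol` n."""
    f_lo = fftlen_at(lo)
    while hi - lo > tol:
        mid = (lo + hi) // 2
        if fftlen_at(mid) == f_lo:
            lo = mid   # still on the small FFT: the jump is above mid
        else:
            hi = mid   # already on the big FFT: the jump is at or below mid
    return lo, hi

# Suspecting the jump is between n=9K and n=12K, as in the example:
print(find_jump(9000, 12000))  # a bracket of width <= 100 around n=10000
```

Each call to `fftlen_at` costs only a few seconds of LLR time, so narrowing a 3000-wide window to 100 takes about five tests, which matches the "10-15 mins. for a whole n-range" estimate.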

2009-01-25, 15:36  #4  
Jul 2003
wear a mask
2^{3}·197 Posts 
Quote:
2. LLR/Phrot test the largest candidate in the range and make note of the testing time. You don't actually have to complete the test; it can be estimated from the iteration times.
3. Sieve until your sieving removal rate is one-third of the time in #2.

Last fiddled with by masser on 2009-01-25 at 15:41 
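As a numeric sketch of this variant (all timings assumed, and the rough "an LLR test at exponent n runs about n iterations" model is my own simplification): estimate the largest test's time from its iteration time, then sieve until removing one candidate costs one-third of that.

```python
# Sketch of masser's one-third rule (hypothetical timings).

def estimated_test_time(secs_per_iter, n):
    """Estimate a full test's time from iteration timings (~n iterations)."""
    return secs_per_iter * n

def keep_sieving(secs_per_factor, secs_per_iter, n_max):
    """True while removing a factor by sieving is still cheaper than
    one-third of a test on the largest candidate."""
    return secs_per_factor < estimated_test_time(secs_per_iter, n_max) / 3

# Example: 0.16 s/iteration at n=250K -> ~40000 s per test,
# so the target removal rate is ~13333 s per factor.
print(keep_sieving(10_000, 0.16, 250_000))  # True: keep sieving
print(keep_sieving(15_000, 0.16, 250_000))  # False: time to break off
```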

2009-01-25, 15:37  #5  
Jul 2003
wear a mask
628_{16} Posts 
Quote:


2009-01-25, 18:03  #6 
"Curtis"
Feb 2005
Riverside, CA
4747_{10} Posts 
Gary's analysis does not produce an optimal sieve, if by optimal you mean the sieve depth that results in the shortest project length.
For a large n-range, such as the 50K-250K that Mr Ozzy mentioned, a much deeper sieve pays dividends. Try this experiment: sieve to Gary's suggested depth. Before breaking off a piece for primality work, set it to sieve an additional few T (a range with 200 or so factors). Using the -v switch, note the expected number of factors and the expected time to completion. Now, remove the chunk you plan to LLR/Phrot. Repeat the above sieve setup, again just to note the expected number of factors and expected time to completion. The difference between these times and expected factors is the factor cost of continuing to sieve the range you are considering breaking off. If you divide those numbers, you will almost certainly find that the factor cost is lower than the LLR time, thus continuing to sieve with the small piece included is wise.

I have found through many such tests that breaking off a small piece is most efficient when the sieve removes factors at just over half the speed (double the time!) of a primality test on the smallest candidate. This assumes you plan to test the entire file eventually; with this project, one should sieve less deep to correct for the speedup entailed in finding a prime. You could estimate the probability of finding a prime in your entire sieve and correct for this reduction in project length, or just sieve some arbitrary fraction less deep. I have not done the math to correct for this effect yet, so I can't provide more detail.

This improvement in optimal-depth calculation only really matters for sieves with large n-ranges and VERY long primality tests. 40,000 seconds per test at the large end of the range certainly qualifies! For sieves with longest tests in the 1000-1500 sec range, the project time saved by using this method is not significant. Also, "significant" is relative; I do not claim this thinking is necessary, merely that it is more optimal than Gary's method. 
Pointing out flaws in my logic is welcomed, as I use this method for a very large sieve that has run over many dozens of CPU-months. 
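The factor-cost experiment above reduces to one division. A minimal sketch, with all sieve-estimate numbers invented for illustration (in practice they would come from the sieve program's -v output, run once with the chunk in the file and once with it removed):

```python
# Sketch of the marginal factor-cost test (all figures hypothetical).

def factor_cost(extra_secs, extra_factors):
    """Seconds of sieving spent per extra factor found."""
    return extra_secs / extra_factors

# Assumed -v estimates for the same additional few T of sieving:
# with the chunk still in the file: 200 factors, 1,500,000 s
# with the chunk removed:           160 factors, 1,400,000 s
cost_with    = factor_cost(1_500_000, 200)
cost_without = factor_cost(1_400_000, 160)

# Marginal cost of keeping the chunk in the sieve: the extra time
# divided by the extra factors it buys.
marginal = factor_cost(1_500_000 - 1_400_000, 200 - 160)

llr_time = 5000  # assumed seconds per LLR test on the chunk
# If each extra factor costs less than one LLR test, keep sieving
# with the chunk included.
print(marginal, marginal < llr_time)  # 2500.0 True
```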
2009-01-25, 18:56  #7 
Apr 2008
Antwerp, Belgium
3×19 Posts 
Just a remark, but it might matter a lot.
So far I only have access to 32-bit machines. If I recall correctly, sieving benefits a lot more from 64-bit than LLR or Phrot does, or am I wrong? This might tilt the scale more toward breaking off sooner again.

Also keep in mind that every k will be removed from the sieve once a prime has been found. This happened before when I sieved from 10K to 50K (took quite a while) and found a prime for one of the k's at n=10468. At that time I had 20K left, so I wasted almost 5% of my sieving time. This is why I would like to start PRP testing as quickly as possible (in very small chunks). If I happen to find a prime near 50K (which is quite likely, since my last prime was at n=32421), I will have wasted 10% of the time sieving... 
2009-01-25, 20:27  #8  
Just call me Henry
"David"
Sep 2007
Cambridge (GMT/BST)
3^{3}·7·31 Posts 
Quote:
Do the CPUs have virtualization technology? See http://mersenneforum.org/showthread.php?t=10991 from post 11. 

2009-01-26, 07:15  #9  
Apr 2008
Antwerp, Belgium
111001_{2} Posts 
Quote:


2009-01-26, 08:40  #10  
"Curtis"
Feb 2005
Riverside, CA
47×101 Posts 
Quote:
Second, sieve time scales with the square root of the number of k's, so a 10-k sieve is roughly 6% slower than a 9-k sieve. Consider how long you spend in total sieving. Take 6% of that, and that is in effect the "wasted" time if you find a prime at the very outset of LLR/Phrot work. Since you may not find a prime at the outset, it is hard to consider the effort wasted ahead of time, you know? If you can model the expected number of primes in a particular block, you can adjust the sieve depth. It's not a very big adjustment for a small number of k's like your effort, as the chance of a prime is rather small.

Finally, the optimal depth is not a sensitive calculation: you can oversieve or undersieve by 20% and hardly change the total project length. If you feel better sieving less, sieve less. My method suggests sieving ~twice as deep as LLR time before breaking off a piece, so mildly undersieving would be perhaps 50% or 60% longer than the LLR time for the smallest candidate.

Curtis 
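The square-root scaling above is a one-liner to check. A minimal sketch, assuming only the stated scaling law (the exact value for 10 vs. 9 k's works out to about 5.4%, i.e. "roughly 6%"):

```python
# Relative sieve slowdown from carrying extra k's, assuming sieve time
# scales with the square root of the number of k's in the file.
import math

def relative_slowdown(k_before, k_after):
    """Fractional extra sieve time for k_before k's vs. k_after k's."""
    return math.sqrt(k_before / k_after) - 1

print(f"{relative_slowdown(10, 9):.1%}")  # about 5-6%
```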

2009-01-26, 19:25  #11 
Jan 2005
479 Posts 
* warning* obsolete post ahead :
(Bugger... I should read the entire thread before posting...)

Don't forget that finding a prime will reduce the time needed to LLR the rest of the broken-off file. Thus you'll need to sieve a little bit less deep than Curtis suggests. (But anyway, it's all human intervention, and time-consuming, so any guess is a good guess, as long as it is in the right ballpark :) )

Last fiddled with by michaf on 2009-01-26 at 19:26  Reason: added warning 