2012-09-21, 11:15  #122  
Just call me Henry
"David"
Sep 2007
Liverpool (GMT/BST)
6031_{10} Posts 
2012-09-21, 12:47  #123 
"Mark"
Apr 2003
Between here and the
2×59^{2} Posts 
What are you trying to accomplish? IMO, skipping sieving will cost you a significant amount of time.

2012-09-21, 17:53  #124  
"Gary"
May 2007
Overland Park, KS
5×17×139 Posts 
I have to agree with David (henryzz) and Mark here. I do not suggest running the script for anything other than n=1 to 2500 (or whatever depth you determine is best to start a new base), for two reasons:

1. The script is not designed to handle specific k's remaining. It would need a significant redesign to accomplish that.
2. Sieving the k's remaining at n=2500 and testing the resultant sieve file with LLR/PFGW/PRPnet with the stop-on-prime option set is much more efficient.

Last fiddled with by gd_barnes on 2012-09-21 at 17:54 

2012-09-21, 18:24  #125 
Aug 2012
25_{10} Posts 
I don't want to skip sieving, I want to chunk it.
Run the new-base script to n = 1500. Sieve with srsieve to n = 3000, then with sr2sieve to the optimal depth for n = 3000, and test the resultant candidates. For the k's that remain, sieve with srsieve to n = 6250, sieve with sr2sieve to the optimal depth for n = 6250, and test the resultant candidates. For those that remain, sieve to n = 12500... and for those that remain, sieve to n = 25000.

I think by writing it out that way, I figured out what I missed (I didn't sleep last night). I can just run srsieve on the remaining candidates like normal (I haven't had to do this yet): take the old set of remaining k's (the pl_remain output from the script), remove all which were primed, and then use that as the input to the next round with srsieve.

Depending on the ck and total time, I might not use that many chunks, but even two seems better than my current method, where one of my machines is running sr2sieve from n = 1000 to n = 25000. Sorry for the confusion. 
2012-09-21, 19:27  #126 
Jun 2009
2^{2}·5^{2}·7 Posts 
As far as I know (and have experienced), the most effective way is to sieve as many k's and as large an n-range as you are going to test (except for really huge numbers of k's).
So I (and probably everybody else) would recommend using one big sieve file for n=1000 to 25000. If you have many k's: sieve to the optimal depth for, say, n=5000; test to n=5000; remove all primed k's from the sieve file (quick and easy with srfile -d); continue sieving to the optimal depth for n=10000; test to n=10000; remove k's; and so on...

EDIT: I just realized that's probably what you meant. But I'm not sure whether you were planning to start new sieve files each time, i.e. have only n=1000 to 5000 in the first sieve file, n=5000 to 10000 in the second, and so on. It's more efficient to start with the whole range and take out k's that are primed.

Last fiddled with by PuzzlePeter on 2012-09-21 at 19:31 
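The remove-primed-k's step described above (what `srfile -d` does) can be sketched in a few lines of Python. This is a minimal illustration, not the real tool: the NewPGen-style header and `k n` pair format are assumptions for the example.

```python
def remove_primed(lines, primed_ks):
    """Keep the header and drop every k/n pair whose k already produced a prime."""
    header, *pairs = lines
    kept = [header]
    for line in pairs:
        k, n = map(int, line.split())
        if k not in primed_ks:
            kept.append(line)
    return kept

sieve = [
    "1000000:P:1:10:257",  # NewPGen-style header line (format assumed)
    "14 1001",
    "14 1017",
    "22 1002",
    "38 1010",
]
# k=14 produced a prime during testing, so all of its candidates are dropped
print(remove_primed(sieve, {14}))
```

The point is that removing a primed k is cheap text filtering, which is why keeping one big sieve file and pruning it beats restarting the sieve for each chunk.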
2012-09-21, 19:38  #127 
"Mark"
Apr 2003
Between here and the
2·59^{2} Posts 
I understand now.
This is what you should consider doing:

1) Take all k and sieve from n to N, where N = the max N you intend to test as part of your overall reservation. Sieve to 1e6 or some value of p > max N.
2) Use sr2sieve to sieve to the optimal rate for k*b^m+/-1, where n < m < N.
3) Run "srfile -k factors.txt -w sr_xyz.pfgw" to remove k/n that have a factor.
4) When sieving is done, run pfgw with number_primes (or llr with its corresponding option) for the range of n to m.
5) Run "srfile -d pfgw.log" or "srfile -d pfgw-prime.log -w sr_xyz.pfgw" (again, assuming use of pfgw). This will eliminate sequences from your sieve file for which you found a prime in step 4.
6) Set n = m.
7) Repeat from step 2.

This will reduce the number of steps you need to do to complete the range. Will you sieve some k to a far higher n than you need? Yes, but increasing the size of the range of n isn't that costly.

Last fiddled with by rogue on 2012-09-21 at 19:38. Reason: PuzzlePeter beat me to it 
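The loop in those steps can be sketched as follows. The choice of each m is left open in the post; here each pass doubles the tested range (an assumption, matching the earlier 3000/6250/12500/25000 plan), capped at max N:

```python
# Sketch of steps 2-7 above: each pass tests n..m, removes primed k's
# from the sieve file, then sets n = m and repeats until max N is reached.
def passes(n, max_n):
    out = []
    while n < max_n:
        m = min(2 * n, max_n)  # doubling per pass is an assumption
        out.append((n, m))     # test n..m, then srfile -d the primes found
        n = m
    return out

print(passes(1000, 25000))
# → [(1000, 2000), (2000, 4000), (4000, 8000), (8000, 16000), (16000, 25000)]
```

Five test passes cover n=1000 to 25000, while the sieve file itself is built once over the whole range.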
2012-09-21, 20:05  #128 
"Gary"
May 2007
Overland Park, KS
10111000100111_{2} Posts 
Adding some details to what Peter and Mark said, here is what I do:
Run the script to n=2500; sieve all remaining k's for n=2500-25K to the optimal depth for the range of n=2500-10K; test to n=10K; remove the primed k's and the n=2500-10K range from the big sieve file; sieve n=10K-25K to optimal depth; and finally test n=10K-25K.

CPU-wise, you might be able to make a case for breaking off more pieces, but IMHO it's too much hassle to do so.

Last fiddled with by gd_barnes on 2012-09-21 at 20:11 
2012-09-28, 19:12  #129 
Aug 2012
5^{2} Posts 
Got it. Thanks everyone.
When running sr2sieve, I'm seeing removal rates oscillate quite a bit. Over some time period (scrolling up a number of pages in my terminal window), I see it swing from 6 seconds per factor up to 14 seconds per factor and back down again, pretty often. Optimally, I would want to stop when the removal rate is ~10 seconds per factor, but it seems like I shouldn't stop the first time it hits the optimal rate, but some time after. Is there any guidance here? 
2012-09-28, 20:10  #130  
"Mark"
Apr 2003
Between here and the
1B32_{16} Posts 
2012-12-29, 18:08  #131  
"Curtis"
Feb 2005
Riverside, CA
2·3^{2}·313 Posts 
Assume sieve time scales with the square root of the n-range; that is, sieving 100k to 500k would take sqrt(2) times as long as sieving 100k to 300k, while sieving 300k to 500k would take the same time as sieving 100k to 300k. Let's say there is a 60% chance of a prime in 100k-300k.

Compare sieving 100-500k at once versus 100-300k followed by 300-500k:
- 60% of the time, we find a prime in 100-300k, and the extra effort of 300-500k is wasted. This extra effort was 40% of the time taken to sieve 100-300k.
- 40% of the time, we don't find a prime in 100-300k, and having a file to 500k saves 60% of the time taken to sieve 100-300k versus starting a new sieve for 300-500k.

So, if we consider sieving to the point where a file has a 60% chance of prime, we are ambivalent between sieving to there or sieving twice as deep. This would seem to suggest the optimal sieve range is somewhere between those two points, that is, at a point higher than a 60% chance of prime. However, it also illustrates that it hardly matters: we spend so little time sieving versus testing, and the efficiency curve is VERY broad around the optimal decisions, both for n-range and p-depth.

My intuition on this is that a file that produces 1 expected prime (that is, a 63% chance of prime) is optimal. I think my logic shows 60% is on the low side of optimal, but I lack the reasoning presently to demonstrate that 63% is optimal.

Curtis

Last fiddled with by VBCurtis on 2012-12-29 at 18:12 
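The break-even arithmetic above can be checked directly. Taking the cost of sieving 100k-300k as 1 unit and using the sqrt scaling stated in the post:

```python
from math import sqrt

# Unit cost = sieving 100k-300k; extending to 500k multiplies cost by sqrt(2).
extra  = sqrt(2) - 1   # effort wasted if a prime appears in 100k-300k (~41%)
saving = 2 - sqrt(2)   # fresh 300k-500k sieve avoided if no prime found (~59%)
p = 0.60               # chance of a prime in 100k-300k

expected_waste  = p * extra
expected_saving = (1 - p) * saving
print(expected_waste, expected_saving)  # both fall between 0.23 and 0.25
```

The two expectations are within about 0.015 units of each other, which is the "ambivalent" point the post describes (the exact fractions are 41%/59% rather than the rounded 40%/60%).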

2013-08-11, 19:38  #132 
Quasi Admin Thing
May 2005
2×491 Posts 
Simple question:
Is srsieve version 1.0.5 working correctly with regard to removing algebraic factors? I'm asking because I'm currently in the process of sieving 4 k's to p=1P, but I'll have to start from scratch if too many n's have been removed. So can someone elaborate and tell me whether or not this version of srsieve is working correctly?

Regards
KEP 