#122
Just call me Henry
"David"
Sep 2007
Liverpool (GMT/BST)
6031₁₀ Posts
Quote:
#123
"Mark"
Apr 2003
Between here and the
2×59² Posts
What are you trying to accomplish? IMO, skipping sieving will cost you a significant amount of time.
#124
"Gary"
May 2007
Overland Park, KS
5×17×139 Posts
Quote:
Quote:
I have to agree with David (henryzz) and Mark here. I do not suggest running the script for anything other than n=1 to 2500, or whatever depth you determine is best to start a new base, for two reasons:

1. The script is not designed to handle specific k's remaining. It would need a significant redesign to accomplish that.

2. Sieving the k's remaining at n=2500 and testing the resultant sieve file with LLR/PFGW/PRPnet with the stop-on-prime option turned on is much more efficient.

Last fiddled with by gd_barnes on 2012-09-21 at 17:54
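To make point 2 concrete, here is a minimal sketch of that sieve-then-test pass. Everything in it is illustrative: the sequences are made up, the srsieve flag spellings are from memory of its usage text, and number_primes is PFGW's ABC-header directive, so check each against the tools' own help before relying on it.

Code:
# Sieve the k's still remaining at n=2500 across the next n-range
# (example sequences; -n/-N bound n, -P is the sieve limit).
srsieve -n 2500 -N 25000 -P 1e6 "1234*567^n+1" "2468*567^n+1"

# Test with PFGW; a number_primes directive in the sieve file's ABC
# header stops each k at its first prime, e.g.:
#   ABC $a*567^$b+1 // {number_primes,$a,1}
pfgw -l sr_567.pfgw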
#125
Aug 2012
25₁₀ Posts
I don't want to skip sieving; I want to chunk it.

Run the new-base script to n = 1500. Sieve with srsieve to n = 3000, then with sr2sieve to the optimal depth for n = 3000, and test the resultant candidates. For the k's that remain, sieve with srsieve to n = 6250, then with sr2sieve to the optimal depth for n = 6250, and test the resultant candidates. For those that remain, sieve to n = 12500... For those that remain, sieve to n = 25000.

I think by writing it out that way, I figured out what I missed (I didn't sleep last night). I can just run srsieve on the remaining candidates like normal (I haven't yet had to do this): take the old set of remaining k's (the pl_remain output from the script), remove all that were primed, and use that as the input to the next round with srsieve.

Depending on the ck and total time, I might not use that many chunks, but even two seems better than my current method, where one of my machines is running sr2sieve from n = 1000 to n = 25000.

Sorry for the confusion.
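A sketch of that bookkeeping step, with hypothetical file names (pl_remain.txt from the new-bases script, and a one-k-per-line list of primed k's); the command itself is ordinary POSIX grep:

Code:
# pl_remain.txt: k's remaining from the new-bases script.
# primed.txt: one line for each k that has since produced a prime.
grep -v -x -f primed.txt pl_remain.txt > next_round.txt
# next_round.txt then supplies the sequences for the next srsieve run.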
#126
Jun 2009
2²·5²·7 Posts
As far as I know (and have experienced), the most effective way is to sieve as many k's and as large an n-range as you are going to test (except for really huge numbers of k's).

So I (and probably everybody else) would recommend using one big sieve file for n=1000 to 25000. If you have many k's, sieve to the optimal depth for, say, n=5000, test to n=5000, remove all primed k's from the sieve file (quick and easy with srfile -d), continue sieving to the optimal depth for n=10000, test to n=10000, remove k's, and so on...

EDIT: I just realized that's probably what you meant. But I'm not sure if you were planning to start new sieve files each time, i.e. have only n=1000 to 5000 in the first sieve file, n=5000 to 10000 in the second, and so on. It's more efficient to start with the whole range and take out k's that are primed.

Last fiddled with by Puzzle-Peter on 2012-09-21 at 19:31
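In command form, the remove-primed-k's step is roughly the line below; the argument order follows the srfile usage quoted in the next post, and primes.log stands in for whatever log your testing pass actually produced:

Code:
# Drop every k that produced a prime from the big sieve file and
# write a fresh PFGW-format file (file names are placeholders).
srfile -d primes.log -w sr_567.pfgw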
#127
"Mark"
Apr 2003
Between here and the
2·59² Posts
I understand now.
This is what you should consider doing:

1) Take all k and sieve from n to N, where N = the max N you intend to test as part of your overall reservation. Sieve to p=1e6 or some value of p > max N.
2) Use sr2sieve to sieve to the optimal removal rate for k*b^m+/-1 where n < m < N.
3) Run "srfile -k factors.txt -w sr_xyz.pfgw" to remove k/n that have a factor.
4) When sieving is done, run pfgw with number_primes (or llr with its corresponding option) for the range of n to m.
5) Run "srfile -d pfgw.log" or "srfile -d pfgw-prime.log -w sr_xyz.pfgw" (again, assuming use of pfgw). This will eliminate sequences from your sieve file for which you found a prime in step 4.
6) Set n = m.
7) Repeat from step 2.

This will reduce the number of steps you need to do to complete the range. Will you sieve some k to a far higher n than you need? Yes, but increasing the size of the range of n isn't that costly.

Last fiddled with by rogue on 2012-09-21 at 19:38 Reason: puzzle-peter beat me to it
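Strung together, the loop might look like the sketch below, assuming PFGW, base 567, and two made-up k's. The srsieve/sr2sieve flags and the sr_567.prp file name are assumptions from memory; only the srfile lines are quoted from the steps above.

Code:
# 1) srsieve over the full n-range, sieving past p = max N
srsieve -n 2500 -N 25000 -P 1e6 "1234*567^n+1" "2468*567^n+1"
# 2) sr2sieve to the optimal removal rate (input file name assumed)
sr2sieve -i sr_567.prp -P 1e12
# 3) strip terms with known factors
srfile -k factors.txt -w sr_567.pfgw
# 4) test n..m with PFGW (number_primes set in the ABC header)
pfgw -l sr_567.pfgw
# 5) drop the primed sequences; then 6)-7): set n = m, loop back to 2)
srfile -d pfgw-prime.log -w sr_567.pfgw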
#128
"Gary"
May 2007
Overland Park, KS
10111000100111₂ Posts
Adding some details to what Peter and Mark said, here is what I do:
Run the script to n=2500, sieve all remaining k's for n=2500-25K to the optimal depth for the range of n=2500-10K, test to n=10K, remove the k's primed and the range of n=2500-10K from the big sieve file, sieve n=10K-25K to optimal depth, and finally test n=10K-25K.

CPU-wise, you might be able to make a case for breaking off more pieces, but IMHO it's too much hassle to do so.

Last fiddled with by gd_barnes on 2012-09-21 at 20:11
#129
Aug 2012
5² Posts
Got it. Thanks everyone.
When running sr2sieve, I'm seeing the removal rate oscillate quite a bit. Over some time period (scrolling back a number of pages in my terminal window), it swings from 6 seconds per factor up to 14 seconds per factor and back again pretty often. Ideally I would stop at ~10 seconds per factor, but it seems like I shouldn't stop the first time it touches the optimal rate, only some time after. Is there any guidance here?
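For what it's worth, the usual rule of thumb (general sieving folklore, not something stated in this thread) is to smooth the noise before comparing against the cost of a test: keep sieving while

    t_factor (averaged over a window of recent factors) < t_test(median remaining n)

i.e. while the windowed seconds-per-factor stays below the PRP-test time of a candidate near the median n left in the file. Averaging over the last few dozen factors irons out 6-to-14-second swings like the ones described above.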
#130
"Mark"
Apr 2003
Between here and the
1B32₁₆ Posts
Quote:
#131
"Curtis"
Feb 2005
Riverside, CA
2·3²·313 Posts
Quote:
Assume sieve time scales with the square root of the n-range; that is, sieving 100k to 500k would take sqrt(2) times as long as sieving 100k to 300k, while sieving 300k to 500k would take the same time as sieving 100k to 300k.

Let's say there is a 60% chance of a prime in 100k-300k. Compare sieving 100-500k at once versus 100-300k followed by 300-500k:

60% of the time, we find a prime in 100-300, and the extra effort of 300-500 is wasted. This extra effort was 40% of the time taken to sieve 100-300.

40% of the time, we don't find a prime in 100-300, and having a file to 500k saves 60% of the time taken to sieve 100-300 versus starting a new sieve for 300-500.

So, if we consider sieving to the point where a file has a 60% chance of a prime, we are indifferent between sieving to there or sieving twice as deep. This would seem to suggest the optimal sieve range is somewhere between those two points, that is, at a point higher than a 60% chance of a prime. However, it also illustrates that it hardly matters: we spend so little time sieving versus testing, and the efficiency curve is VERY broad around the optimal decisions, both for n-range and p-depth.

My intuition on this is that a file that produces 1 expected prime (that is, a 63% chance of a prime) is optimal. I think my logic shows 60% is on the low side of optimal, but I lack the reasoning presently to demonstrate that 63% is optimal.

-Curtis

Last fiddled with by VBCurtis on 2012-12-29 at 18:12
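For reference, the 63% figure is 1 - 1/e. A one-line sketch, assuming primes among the file's candidates behave like a Poisson process with expected count lambda:

    P(at least one prime) = 1 - e^(-lambda), and lambda = 1 gives 1 - 1/e ≈ 0.632

So "1 expected prime" and "a 63% chance of at least one prime" are the same statement under that model.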
#132
Quasi Admin Thing
May 2005
2×491 Posts
Simple question:

Is srsieve version 1.0.5 working correctly with respect to removing algebraic factors?

I'm asking because I'm currently in the process of sieving 4 k's to p=1P, but I'll have to start from scratch if too many n's have been removed. So can someone elaborate and tell me whether or not this version of srsieve is working correctly?

Regards

KEP
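For readers unfamiliar with the term, this is the kind of algebraic factorization srsieve is expected to strip (a standard identity, not a claim about what version 1.0.5 actually does). Whenever k*b^n is a perfect square, k*b^n - 1 splits as a difference of squares:

    4*9^n - 1 = (2*3^n)^2 - 1 = (2*3^n - 1)*(2*3^n + 1)

so every term of such a sequence factors algebraically, and those n can be removed from the sieve.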