2009-06-12, 05:13  #1 
Feb 2007
211 Posts 
Improving Sieving by 18%.
I decided to help with PSP sieving: http://www.mersenneforum.org/showthread.php?t=2666
When I downloaded the sieve file ("SieveComb" / Sob.dat) I realized that the n values had not been truncated, so I truncated everything from n=1 to n=6M, since all the k's have been PRP'd up to at least n=6M. The resulting file was 14% lighter and 18% faster with sr2sieve. So here it is, the new (unofficial) sieve file covering n=6M to 50M for all the k's. (The file is RAR'd; use WinRAR or WinZip to extract it.) http://www.sendspace.com/file/iwjqzl Now when you start sr2sieve you have to make one small change on your command prompt. Code:
Before (example): sr2sieve -s -p 70115500e9 -P 70123500e9
Now: sr2sieve -i sr_2.abcd -p 70115500e9 -P 70123500e9
Thanks, Cipher 
2009-06-12, 05:30  #2 
Apr 2003
2^{2}×193 Posts 
I have not tried your file so far, but there is at least one problem with it. The lower bound for sieving PSP is at 1.5M!!! As our second-pass testing for all k has only reached the 1.5M level, all factors above that are still very important.

2009-06-15, 14:00  #3 
Dec 2004
299_{10} Posts 
Yes, we have contemplated this before...
Reducing to 6M is actually too much; at most the reduction would be to 1.5M, as Lars said. Please remove your file. Second, the calculated increase should be [(41×10^6)/(50×10^6)]^0.5 ≈ 0.91, i.e. about 9%, not 18%, which is a little weird. I think you might have removed a little too much. Also, if the efficiency really were that different, we should crop the top end instead: bring it back to, say, 1.5M < n < 30-40M. But with all the effort we have invested, shrinking the top end is not a good idea; we would probably never go back and do the 40-50M we missed. 
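As a sanity check on the figure above: if sieve throughput scales roughly with the square root of the remaining n-range width (the scaling law this post's formula implies, taken here as an assumption), the expected gain can be computed directly from the 41M and 50M widths used in the post:

```python
from math import sqrt

# Assumption: sieve speed ~ sqrt(width of the n-range still in the file).
old_width = 50e6   # full file, n up to 50M
new_width = 41e6   # remaining width after truncation (value from the post)

gain = 1 - sqrt(new_width / old_width)
print(f"expected gain: {gain:.1%}")   # about 9.4%, not 18%
```

This matches the "about 9%" figure quoted above, which is why an 18% measured speedup looked suspicious.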
2009-06-16, 09:03  #4 
Apr 2003
772_{10} Posts 
Some information about the next steps:
I have at home the most recent dat files (all known factors removed), one starting at 991 and one starting at 1.5M. I will start a test run under identical conditions to see the real speed difference. 
2009-06-16, 10:34  #5  
Apr 2008
Oslo, Norway
7·31 Posts 
Also, I'm planning on finding a prime this summer... Will that help? 

2009-06-16, 15:51  #6 
Aug 2002
1000001101_{2} Posts 

2009-06-16, 20:05  #7 
Mar 2006
1011110_{2} Posts 

2009-06-16, 21:21  #8 
Apr 2008
Oslo, Norway
7·31 Posts 

2009-06-17, 03:12  #9 
Dec 2004
13·23 Posts 
Joe and I spent God only knows how many weeks and CPU hours perfecting that dat at 991 < n < 50M, and found that a 50M dat was most efficient.
Joe, perhaps you can remember and elaborate, but the selection of a 50M dat was not haphazard. The discussion of whether it should be shortened or lengthened is for another debate... Lars's speed test would be a good one to start with; when Joe and I did our testing we used the old sieve client that predated JJsieve (a significant speed improvement). Perhaps the sieve speed is more directly related to the n-range, but I doubt it. In any case, please continue with the current official dat; the mess created by not using it takes more CPU hours and manpower to clean up than one might expect, and certainly more than the user would save. I apologize for not having more time to invest in the testing; I'm working around 60 hours a week right now. 
2009-06-18, 14:52  #10 
Apr 2003
2^{2}×193 Posts 
The test run has finished. The gain was even smaller than expected.
The file with n > 1.5M needed 151354 seconds; the file with n = 991 needed 152360 seconds. This means a gain of 0.66% from changing to the shorter file. I expect a possible error of around +/-0.2%, because the machine was in normal use for 3 hours during the test. So the real gain should be between ~0.45% and ~0.85%. Last fiddled with by ltd on 2009-06-18 at 14:52 
2009-07-01, 13:34  #11 
Apr 2008
Oslo, Norway
217_{10} Posts 
Please forgive me if what I write here is totally moronic, but aren't all factors interesting with regard to having more complete data on which candidates are not prime? LLR tests only say "this is not prime", while sieving says "this candidate is divisible by that factor and is not prime", right?
In my mind, this is a good reason not to cut down the sieve span. 