2012-02-07, 12:18   #26
jrk
Quote:
Originally Posted by Mini-Geek
Could someone remind me why it was recommended that we start at 7M instead of somewhere lower? IIRC (and if I didn't have other factors confusing the issue, such as CPU sharing), when I had nearly finished up to 26M I started sieving from 6M to 7M and saw a greatly improved rate reported, roughly 0.2 sec/rel dropping to 0.12 sec/rel, so it would seem to me that sieving the lower end more would be better.
My results differ from yours. Here are the timings from the trial I did:

Code:
total yield: 1475, q=7001003 (0.06635 sec/rel) 
total yield: 1448, q=9001001 (0.06850 sec/rel) 
total yield: 1431, q=11001007 (0.07258 sec/rel) 
total yield: 1949, q=13001029 (0.06811 sec/rel) 
total yield: 1498, q=15001001 (0.07250 sec/rel) 
total yield: 1253, q=17001007 (0.07443 sec/rel) 
total yield: 1148, q=19001011 (0.07987 sec/rel) 
total yield: 1490, q=21001021 (0.08335 sec/rel) 
total yield: 1281, q=23001007 (0.07738 sec/rel) 
total yield: 1617, q=25001029 (0.08253 sec/rel) 
total yield: 987, q=27001003 (0.08689 sec/rel)
That's not quite as drastic a change over the range as the 0.12 to 0.2 you reported. This was done on a Core 2 Duo @ 3.4 GHz using the 64-bit Linux gnfs-lasieve4I14e binary.
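
If you want to tabulate spot checks like these yourself, a quick Python sketch along these lines works. It only assumes the "total yield: N, q=Q (T sec/rel)" line format shown above; the SAMPLE text is just a placeholder for your own siever output, so substitute your real test lines:

Code:
import re

# Placeholder data in the same format as the test output quoted above.
SAMPLE = """\
total yield: 1475, q=7001003 (0.06635 sec/rel)
total yield: 1448, q=9001001 (0.06850 sec/rel)
total yield: 987, q=27001003 (0.08689 sec/rel)
"""

LINE_RE = re.compile(
    r"total yield:\s*(\d+),\s*q=(\d+)\s*\(([\d.]+)\s*sec/rel\)")

def parse(text):
    """Yield (q, relations, sec_per_rel) tuples from lasieve test output."""
    for m in LINE_RE.finditer(text):
        rels, q, spr = int(m.group(1)), int(m.group(2)), float(m.group(3))
        yield q, rels, spr

# Print a small table of yield and speed per special-q sample,
# sorted by q, so the trend over the range is easy to see.
print(f"{'q':>10} {'rels':>6} {'sec/rel':>9} {'rel/sec':>8}")
for q, rels, spr in sorted(parse(SAMPLE)):
    print(f"{q:>10} {rels:>6} {spr:>9.5f} {1.0/spr:>8.2f}")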

Re: the efficiency of sieving smaller Q... You must also consider that you will encounter a greater rate of duplication overall when you start sieving at smaller Q, and this will reduce the effective speed gain. (A relation is reported once for every special-q in the sieved range that divides its norm, so extending the range downward means more relations get found more than once.) I can provide real data to show this, but you can test it for yourself with the data you have now.
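
For example, a rough Python sketch like the following counts duplicates across a batch of siever output files by keying each relation on its a,b pair. It assumes the usual "a,b:..." text format for relation lines and handles plain or gzipped files; adjust the parsing if your files differ, and pass your own file names on the command line:

Code:
import gzip
import sys

def ab_key(line):
    """Return the 'a,b' prefix of a relation line, or None for blanks/comments."""
    line = line.strip()
    if not line or line.startswith("#"):
        return None
    return line.split(":", 1)[0]   # e.g. "-123456,789"

def count_duplicates(paths):
    """Count total relations and how many repeat an already-seen (a,b) pair."""
    seen, total, dups = set(), 0, 0
    for path in paths:
        opener = gzip.open if path.endswith(".gz") else open
        with opener(path, "rt") as fh:
            for line in fh:
                key = ab_key(line)
                if key is None:
                    continue
                total += 1
                if key in seen:
                    dups += 1
                else:
                    seen.add(key)
    return total, dups

if __name__ == "__main__":
    total, dups = count_duplicates(sys.argv[1:])
    if total:
        print(f"{total} relations, {dups} duplicates ({100.0 * dups / total:.2f}%)")

Run it separately over the relations you sieved at low Q and at higher Q and compare the duplicate percentages.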
