2008-03-26, 13:45  #1 
Nov 2003
2^{2}·5·373 Posts 
Lattice Optimization
I enclose some interesting data and an observation.
My lattice siever does not process all special-q's. After reduction, if the lattice is too skewed or if the reduced coefficients are too big, I do not process that special-q. "Too skewed" means a condition number of 5 or higher. "Too big" means one of the reduced coefficients exceeds 2^19.

Here is some data on the results of this optimization. I ran the siever on some special-q's near 200 million for 2,776+. In a fixed amount of time (4200 seconds):

The code that rejects special-q's looked at 897 special-q's, rejected 439, and processed the remaining 457. The yield was 2262 total relations (.53 relations/second).

The code that accepts all special-q's looked at 416 special-q's and processed all of them. The yield was 1113 total relations (.265 relations/second).

This difference is striking. Note also that the average time to process each special-q is higher when all of them get accepted: 9.2 seconds/q when rejecting, 10.1 seconds/q when accepting.

I'd really like to hear comments from others who have run the Franke or msieve code. Do they see similar behavior?
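The rejection test described above can be sketched in a few lines. This is a minimal illustration, not the author's actual code: the 2^19 coefficient cutoff and the condition-number cutoff of 5 come from the post, while the function name, the basis representation, and the particular skewness estimate (|a|·|b|/|det|, discussed later in the thread) are assumptions.

```python
import math

def reject_special_q(a, b, max_coeff=1 << 19, max_cond=5.0):
    """Decide whether to skip a special-q after lattice reduction.

    a and b are the two reduced lattice basis vectors, as (x, y)
    pairs of integers.  Hypothetical helper; cutoffs from the post.
    """
    # Reject if any reduced coefficient is too large: large
    # coefficients mean large norms, which are unlikely to be smooth.
    if any(abs(c) > max_coeff for c in (*a, *b)):
        return True
    # Estimate the skewness as |a|*|b| / |det|: this is 1 for an
    # orthogonal basis and grows as the basis becomes more skewed.
    det = a[0] * b[1] - a[1] * b[0]
    if det == 0:
        return True  # degenerate basis
    cond = math.hypot(*a) * math.hypot(*b) / abs(det)
    # "Too skewed" means a condition number of 5 or higher.
    return cond >= max_cond
```

With a filter like this in front of the sieving loop, a q whose reduced basis is badly skewed is skipped before any sieving work is spent on it.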
2008-03-26, 15:54  #2  
Tribal Bullet
Oct 2004
3^{3}×131 Posts 
Quote:
If nobody else responds by tomorrow I can try some tests. 

2008-03-31, 12:21  #3  
Nov 2003
1110100100100_{2} Posts 
Quote:
might be interested in an optimization with such a dramatic impact. However, everyone seems too busy just blindly running code written by others to take an interest in the actual algorithms... It's pathetic. 

2008-03-31, 12:30  #4  
Jun 2003
The Texas Hill Country
2101_{8} Posts 
Quote:
Patience, Grasshopper. 

2008-03-31, 12:59  #5  
Aug 2004
2·5·13 Posts 
Quote:
I run my own lattice siever, not Franke's or msieve. I'm currently spending what little time I have for this hobby on trying to find a particularly annoying bug in my square root code. When I start sieving again I may well be interested in investigating optimisations like this, since I'm certain my siever has plenty of scope for improvement.

Chris 

2008-03-31, 13:39  #6 
(loop (#_fork))
Feb 2006
Cambridge, England
13×491 Posts 
It's a nice optimisation, which for some reason I thought was already in the Kleinjung code (since the paper http://www.math.uni-bonn.de/people/thor/cof.ps includes the line 'Approximately 18% of the special Q have a highly skewed lattice and were discarded'); but looking at the source code for the ggnfs-distributed version of the siever indicates that the trick isn't in the version of 2004 June 13. That's unsurprising, since the Kleinjung paper was produced for SHARCS 2006, a couple of years after that version appeared.
I have a rather newer version of the Kleinjung siever on a machine at home, which outputs the reduced ideals for everywhere that it sieves; I'm pretty sure even that doesn't throw away skewed lattices. I'll have a look at what the relation count per prime ideal looks like as a function of the condition number of the lattice.

Last fiddled with by fivemack on 2008-03-31 at 13:47 
2008-03-31, 14:30  #7  
Nov 2003
7460_{10} Posts 
Quote:
Consider a square (or rectangular) lattice of area A. The number of lattice points in the interior is just A, plus or minus a small error proportional to the perimeter (i.e. the error is O(sqrt(A))).

Now skew this lattice by an affine transform whose condition number is C. This skewed lattice now contains far fewer lattice points than the original. Indeed, as C gets large, the number of interior lattice points drops to zero. To process such a lattice we must incur the same overhead as any other lattice, but since there are few potential lattice points, there are few potential successful relations. Note also that the skewed lattice has the same area as the original. For those of you who don't follow this, I suggest drawing a picture of a long and skinny parallelogram on a piece of graph paper...

My code does the following:

Look at the largest reduced coefficient. If it is too large, discard this q. (A very large coefficient in the reduced lattice means that the norms will be large and hence unlikely to be smooth.) I use 2^19 as the cutoff.

Estimate the condition number. If it is greater than 5, discard this q. 
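The graph-paper picture can be checked numerically. The sketch below is my own illustration, not the poster's code: it counts the integer points strictly inside parallelograms that all have the same area (determinant 400) but are increasingly long and thin.

```python
def interior_lattice_points(u, v):
    """Count integer points strictly inside the parallelogram with
    corners 0, u, v, u+v, where u and v are integer (x, y) vectors."""
    det = u[0] * v[1] - u[1] * v[0]
    xs = (0, u[0], v[0], u[0] + v[0])
    ys = (0, u[1], v[1], u[1] + v[1])
    count = 0
    # Scan the bounding box; express each point as alpha*u + beta*v
    # (Cramer's rule) and keep it if both coordinates lie in (0, 1).
    for x in range(min(xs), max(xs) + 1):
        for y in range(min(ys), max(ys) + 1):
            alpha = (x * v[1] - y * v[0]) / det
            beta = (u[0] * y - u[1] * x) / det
            if 0 < alpha < 1 and 0 < beta < 1:
                count += 1
    return count

# All three regions below have area 400, but the thinner the region,
# the fewer interior lattice points it holds:
#   20 x 20 square -> 361 points
#   200 x 2 strip  -> 199 points
#   400 x 1 sliver ->   0 points
```

The fixed per-region overhead is the same in each case, which is why sieving the sliver is wasted effort.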

2008-03-31, 14:45  #8 
Aug 2004
2·5·13 Posts 

2008-03-31, 15:07  #9  
Nov 2003
2^{2}·5·373 Posts 
Quote:
divided by the sqrt of the determinant. One could also simply compute the acute angle via a dot product and an acos, and reject if the angle is too acute. 
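Both estimates fit in a few lines. A sketch under my own assumptions (the function name is mine, and I divide by |det| of the basis matrix, which matches "sqrt of the determinant" if the determinant meant is that of the Gram matrix): the ratio |a|·|b|/|det| equals 1/sin(theta), so thresholding the ratio and thresholding the angle are equivalent tests.

```python
import math

def skew_estimates(a, b):
    """Two cheap skewness measures for a 2-D lattice basis a, b.

    Returns (ratio, angle):
      - ratio: |a|*|b| / |det(a, b)|, which is 1 for an orthogonal
        basis and equals 1/sin(theta) in general;
      - angle: the angle theta between a and b, computed via a dot
        product and acos; one would reject the q if it is too acute.
    """
    det = a[0] * b[1] - a[1] * b[0]
    na, nb = math.hypot(*a), math.hypot(*b)
    ratio = na * nb / abs(det)
    angle = math.acos((a[0] * b[0] + a[1] * b[1]) / (na * nb))
    return ratio, angle
```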

2008-03-31, 15:37  #10 
"Ben"
Feb 2007
110101101110_{2} Posts 
I am swimming in water that is too deep for me, but I'll comment:
If q's are discarded, then one would need to sieve to higher q in order to collect a fixed number of relations, because yield decreases as q increases, correct? This seems like it would reduce the benefit of discarding q's (you are cherry-picking q's, but the pool you are cherry-picking from is not as good overall). Do you have a feel for whether the necessarily increased q range will have a significant impact: still a net benefit, but not quite as much? Completely irrelevant? Something else? 
2008-03-31, 15:57  #11  
Nov 2003
2^{2}×5×373 Posts 
Quote:
larger 'good' q's are still better than smaller 'bad' ones. How much better is sensitive to factor base size, large prime bounds, and sieve area per q. Quantifying it theoretically seems out of the question, because the difference only shows up in the o(1) term in the exponent. 
