#1
"Bob Silverman"
Nov 2003
North of Boston
1110100100100₂ Posts
Hello Everyone,
The following numerical example illustrates the questions I have about the stopping rule for when to stop doing a Euclidean reduction. Start with

  9876543     0
  1001001     1

and get, in succession,

   867534    -9
  1001001     1

   867534    -9
   133467    10

    66732   -69
   133467    10

    66732   -69
        3   148

The stopping rule I use is: as long as the row reduction reduces the L2 norm of the row being changed, accept the change. If the reduction would increase the L2 norm, do not continue. By this criterion, the procedure produces the last matrix above. Note, however, that the 2nd-to-last matrix is much more orthogonal than the last.

So which is better (for NFS): the smaller one or the more orthogonal one? Is there a stopping criterion that considers both the coefficient size and the orthogonality? How does one trade off coefficient size against orthogonality? Or does orthogonality not matter?

It does seem to matter, for the following reason: the row transformations are equivalent to multiplying the matrix by a unimodular matrix, i.e. they leave the determinant invariant. The matrix represents an affine transform from a square grid to a parallelogram. Even though the Jacobian of the transform is invariant, the more skewed the parallelogram is, the fewer lattice points it will contain.

Ideally, we would like the final 4 coefficients to be very close to the square root of the original a11 coefficient, and the matrix to be nearly orthogonal (or, equivalently, its condition number to be as small as possible).

Is there a better algorithm than the Euclidean one for achieving this goal? Gram-Schmidt does NOT achieve it: it drives one of the rows (or columns) to be very small, at the expense of the other. It produces a basis containing the shortest possible vector, and that is not always desirable (i.e. it yields highly skewed regions).

Bob
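For concreteness, here is a minimal Python sketch of the reduction and stopping rule described above. It is an illustrative reconstruction only (the function names are invented for this post, and it assumes the first coordinates stay positive, as in the example); it reproduces the four matrices shown:

```python
def norm2(v):
    """Squared L2 norm of a row vector."""
    return v[0] * v[0] + v[1] * v[1]

def euclid_reduce(r1, r2):
    """Alternately reduce one row by an integer multiple of the other,
    using the quotient of the first coordinates (classical Euclid),
    and stop as soon as a step would increase that row's L2 norm."""
    rows = [r1, r2]
    i = 0                              # index of the row being changed
    while rows[1 - i][0] != 0:
        a, b = rows[i], rows[1 - i]
        q = a[0] // b[0]
        new = [a[0] - q * b[0], a[1] - q * b[1]]
        if norm2(new) >= norm2(a):     # stopping rule: norm no longer shrinks
            break
        rows[i] = new
        print(rows)                    # show each intermediate matrix
        i = 1 - i                      # switch to reducing the other row
    return rows

euclid_reduce([9876543, 0], [1001001, 1])
# Prints the four successive matrices from the example, ending with
# [[66732, -69], [3, 148]].
```

A different acceptance test at the marked line — one that also penalizes skewness rather than looking only at the norm of the changed row — is exactly the open question above.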
#2
Mar 2003
79 Posts
Hi Bob,
On a related note: have you seen "Continued Fractions and Lattice Sieving" by Franke and Kleinjung? At a glance, it seems to discuss the techniques they use for finding a reduced basis for a given lattice. Other papers from the same conference can be found here: http://www.ruhr-uni-bochum.de/itsc/tanja/SHARCS/

-Don
#3
"Bob Silverman"
Nov 2003
North of Boston
2²×5×373 Posts
Quote:
time ago.