mersenneforum.org (https://www.mersenneforum.org/index.php)
-   Factoring (https://www.mersenneforum.org/forumdisplay.php?f=19)
-   -   Lattice Reduction (https://www.mersenneforum.org/showthread.php?t=4447)

R.D. Silverman 2005-08-03 12:17

Lattice Reduction
 
Hello Everyone,

The following numerical example illustrates the questions I have about
when to stop a Euclidean reduction.

Start with

9876543 0
1001001 1

And get, in succession

867534 -9
1001001 1

867534 -9
133467 10

66732 -69
133467 10

66732 -69
3 148


The stopping rule I use: as long as the row operation reduces
the L2 norm of the row being changed, accept the change. If it would increase the L2 norm, stop.

By this criterion, the procedure produces the last matrix above.
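
In code, this is just the Euclidean algorithm applied to the first-column entries, carrying the second column along, with the norm test applied at each step. A minimal Python sketch (helper names are only illustrative); it prints the four successive matrices shown above:

[CODE]
# A minimal sketch of the procedure above: ordinary Euclidean quotients on
# the first-column entries (assumed positive, as in the example), carrying
# the second column along, with the L2-norm stopping rule applied to the
# row about to be changed.

def norm2(r):
    return r[0] * r[0] + r[1] * r[1]

def euclid_reduce(r1, r2):
    rows = [list(r1), list(r2)]
    i = 0                                     # index of the row to change next
    while rows[1 - i][0] != 0:                # guard against a zero divisor
        a, b = rows[i], rows[1 - i]
        q = a[0] // b[0]                      # Euclidean quotient, column 1
        new = [a[0] - q * b[0], a[1] - q * b[1]]
        if norm2(new) >= norm2(a):            # the change would not shrink the row,
            break                             # so stop, per the rule above
        rows[i] = new
        print(rows[0], rows[1])
        i = 1 - i                             # alternate which row is changed

euclid_reduce([9876543, 0], [1001001, 1])
[/CODE]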

Note, however, that the 2nd-to-last matrix is much more orthogonal
than the last.

So which is better (for NFS): the smaller basis or the more orthogonal one?
Is there a stopping criterion that considers both the coefficient size and
the orthogonality? How does one trade off coefficient size against orthogonality?
Or does orthogonality not matter?

It does seem to matter, for the following reason: the row transformations
are equivalent to multiplying the matrix by a unimodular matrix, i.e. they leave
the determinant invariant. The matrix represents an affine transform
from a square grid to a parallelogram. Even though the Jacobian of the
transform is invariant, the more skewed the parallelogram is, the fewer
lattice points it will contain.
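
A small Python check on the matrices listed above (the invariance holds exactly):

[CODE]
# A quick check of the determinant claim, on the matrices from the example
# above.  Each row operation is unimodular (integer entries, determinant 1),
# so every matrix in the chain should have the same determinant.

steps = [
    [[9876543,   0], [1001001,   1]],
    [[ 867534,  -9], [1001001,   1]],
    [[ 867534,  -9], [ 133467,  10]],
    [[  66732, -69], [ 133467,  10]],
    [[  66732, -69], [      3, 148]],
]

for m in steps:
    det = m[0][0] * m[1][1] - m[0][1] * m[1][0]
    print(m, det)

# Every determinant is 9876543; its square root, about 3142.7, is the
# coefficient size one would ideally like to see in the reduced rows.
[/CODE]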

Ideally, we would like the final 4 coefficients
to be very close to the square root of the original
a11 coefficient, and the matrix to be nearly orthogonal
(or equivalently, the condition number to be as small as possible).

Is there a better algorithm than the Euclidean one for achieving this
desired goal? Gram-Schmidt does NOT achieve it: it drives one of the rows (or columns)
to be very small, at the expense of the other. It produces a basis
containing the shortest possible vector, and that is not always desirable
(i.e. it yields highly skewed regions).
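
For comparison, the classical Lagrange-Gauss reduction treats the two rows symmetrically: each step subtracts the nearest-integer multiple of the shorter row from the longer one, and it stops with both rows as short as the lattice permits and with the angle between them between 60 and 120 degrees. It does, however, put the shortest lattice vector into the basis, so by itself it does not escape the skewness objection above. A minimal Python sketch (function names are mine, and the two input rows are assumed linearly independent):

[CODE]
# A minimal sketch of Lagrange-Gauss reduction of two rows.  Unlike the
# one-sided rule above, each step reduces the longer row against the
# shorter one by the nearest integer multiple, swapping roles as needed.
# It returns a basis of the same lattice with |cos(angle)| <= 1/2, i.e.
# the angle between the rows lies between 60 and 120 degrees -- but the
# shorter returned vector is a shortest vector of the lattice.

def norm2(r):
    return r[0] * r[0] + r[1] * r[1]

def gauss_reduce(b1, b2):
    b1, b2 = list(b1), list(b2)
    while True:
        if norm2(b1) > norm2(b2):
            b1, b2 = b2, b1                  # keep b1 as the shorter row
        dot, n = b1[0] * b2[0] + b1[1] * b2[1], norm2(b1)
        q = (2 * dot + n) // (2 * n)         # nearest integer to dot/n, exactly
        if q == 0:
            return b1, b2                    # nothing left to subtract: reduced
        b2 = [b2[0] - q * b1[0], b2[1] - q * b1[1]]

print(gauss_reduce([9876543, 0], [1001001, 1]))
[/CODE]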




Bob

dleclair 2005-08-03 13:18

Hi Bob,

On a related note, have you seen "[URL=http://www.ruhr-uni-bochum.de/itsc/tanja/SHARCS/talks/FrankeKleinjung.pdf]Continued Fractions and Lattice Sieving[/URL]" by Franke and Kleinjung?

At a glance, it seems to discuss the techniques they use for finding a reduced basis for a given lattice.

Other papers from the same conference can be found here:

[URL=http://www.ruhr-uni-bochum.de/itsc/tanja/SHARCS/]http://www.ruhr-uni-bochum.de/itsc/tanja/SHARCS/[/URL]

-Don

R.D. Silverman 2005-08-03 13:55

[QUOTE=dleclair]On a related note, have you seen "[URL=http://www.ruhr-uni-bochum.de/itsc/tanja/SHARCS/talks/FrankeKleinjung.pdf]Continued Fractions and Lattice Sieving[/URL]" by Franke and Kleinjung?[/QUOTE]

Yes, I am only too well aware of this paper. I received a pre-print a long
time ago.

