[QUOTE=R.D. Silverman;234085]I would first need to write a paper on the algorithm. What I am discussing
is totally different from what you do. The technique I propose here would sieve over lattice points where the norm of q(x) is a priori divisible by the special_q, making it much more likely to be smooth with respect to the factor base. It could easily add 20 to 30 digits in the same amount of time.[/QUOTE] Has this idea been forgotten, or did it turn out to be impractical?
More likely nobody has had time to think about it.
The link to Landquist's paper doesn't seem to work anymore (citeseer has it [url="http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.120.1172&rep=rep1&type=pdf"]here[/url]), but I've toyed with some ideas. The asymptotic size of the sieve values that would appear using the algorithm described there is larger than what ordinary QS would manage, but you can say the same thing about MPQS, and it's an order of magnitude faster than basic QS.

First, Landquist's equations create a degree-4 polynomial with a leading coefficient of 1, and choosing some other polynomial with a larger leading coefficient would probably make the other coefficients smaller. Further, there's no thought given to a GNFS-like polynomial selection process that selects skewed polynomials with much-better-than-random root scores, and that could push the asymptotic size of sieve values down into the range where a lattice sieve can achieve good yield for a little while. Finally, there's no penalty for switching to a different degree-4 polynomial every so often, meaning that sieving can always use the smallest special-q for maximum relation yield.

Would all those tweaks combine to make a QS variant that's better than SIQS for large problems? I have no idea; the only thing I'm sure of is that it would take a lot of concentrated work to flesh out the algorithm to the point where even paper estimates of its speed can be made.
[QUOTE=jasonp;274706]The link to Landquist's paper doesn't seem to work anymore (citeseer has it [URL="http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.120.1172&rep=rep1&type=pdf"]here[/URL]),[/QUOTE]
That's not what I was asking about. The idea that I quoted was to use the same old quadratic polynomials which have been used for the last three decades, but only evaluate them at points where the values are known to be multiples of some large prime. It should be easy to adapt existing QS code to do this (at least easier than implementing quartic polynomials).
[QUOTE=Random Poster;274448]Has this idea been forgotten, or did it turn out to be impractical?[/QUOTE]
Neither. It's just not worth pursuing IMO. It may speed QS but won't be competitive with NFS for larger numbers.
[QUOTE=Random Poster;274904]That's not what I was asking about. The idea that I quoted was to use the same old quadratic polynomials which have been used for the last three decades, but only evaluate them at points where the values are known to be multiples of some large prime. It should be easy to adapt existing QS code to do this (at least easier than implementing quartic polynomials).[/QUOTE]
There is a pervasive misconception about how QS works, in that there is only ever one 'QS polynomial'. What we call 'multiple polynomials' is actually a formalism for evaluating the one QS polynomial only at the locations described by an arithmetic progression, and that progression is chosen so that all the values are divisible by a large prime (or, more commonly, a group of smaller primes). So you won't buy anything new by taking an ordinary MPQS implementation and sampling it at points divisible by more primes; the MP in MPQS already does the job of the more primes.

Plus, there already is a very early implementation of QS that uses 'special-Q' in one dimension, instead of what we call 'multiple polynomials'. It was invented by Davis and Holdridge in the mid-1980s; we don't use it today because MPQS makes sieve values a little smaller on average. To get special-Q in two dimensions would require recasting the QS polynomial as a polynomial in two variables, as Bob points out.
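A small sketch of the point above, using toy numbers chosen by hand (the composite N and the coefficient a are hypothetical illustrations, not values from any real factorization): with an MPQS polynomial q(x) = ax^2 + 2bx + c satisfying b^2 - ac = N, every value (ax+b)^2 - N is divisible by a by construction, so the arithmetic progression of sample points already delivers values divisible by a chosen prime.

```python
# Toy illustration of why MPQS polynomials already sample points
# whose values are divisible by chosen primes.  With
#   q(x) = a*x^2 + 2*b*x + c,   b^2 - a*c = N,
# we get a*q(x) = (a*x + b)^2 - N, so every sampled value is a multiple of a.

N = 10403          # toy composite, 101 * 103 (hypothetical example)
a = 7              # a prime with N a quadratic residue mod a
b = next(t for t in range(a) if (t * t - N) % a == 0)   # b^2 ≡ N (mod a)
c = (b * b - N) // a                                    # exact division

for x in range(10):
    value = (a * x + b) ** 2 - N        # what "sampling at that point" gives
    q = a * x * x + 2 * b * x + c       # the sieve value MPQS actually uses
    assert value == a * q               # divisible by a, already divided out
```

The sieve value q(x) is the sampled value with the guaranteed factor a already removed, which is exactly why sampling at extra points divisible by more primes buys nothing new.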
[QUOTE=R.D. Silverman;274907]Neither. It's just not worth pursuing IMO. It may speed QS but won't be competitive with NFS for larger numbers.[/QUOTE]
You don't think speeding up QS is a worthwhile goal? I think that size range of numbers is pretty important.
I'd like to give this idea a go, but there is one point I do not understand yet.
Surely we need congruences X^2 == q(x,y) (mod n), and the r.h.s. values are the smooth q(x,y) found by sieving / trial division. But where do we get the X^2 from? I assume this has something to do with the condition on the discriminant and properties of binary quadratic forms?
Come on guys, lend me a hand please. I'm just a computer scientist and far too old to learn the theory of quadratic forms. In return I will write you a nice piece of free software :smile:
Let's repeat: [QUOTE=R.D. Silverman;234097] Construct a polynomial q(x,y) = Ax^2 + 2Bxy + Cy^2 whose discriminant is a multiple of N. Set x = 1 (say) and solve q(x,y) = 0 mod q. There will be two roots, r1, r2. Find reduced lattices for [q r1][0 1] and [q r2][0 1]. This yields two sets of vectors V1, V2, and W1, W2. Now lattice sieve as in NFS. This will give a whole slew of points (e,f) such that (x,y) = eV1 + f V2 will have q(x,y) divisible by q. We now hope that q(x,y)/q is smooth with respect to the factor base. Repeat for the W1, W2 vectors. Now try a new special q. REPEAT.[/QUOTE] Actually now I have two questions:
1. How do we construct a binary quadratic form q(x,y) = Ax^2 + 2Bxy + Cy^2 with discriminant D = (2*B)^2 - 4AC = 4*(B^2 - AC) = 4kN for some natural number k?
2. Obviously q(x,y) is one half of the congruence. Where do we get the square X^2 such that X^2 == q(x,y) (mod N) from?
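The mechanical steps of the quoted recipe can be walked through with toy numbers (the composite N, the choice k = 1, and the special-q prime below are hypothetical illustrations; the convenient form q(x,y) = x^2 - kNy^2 is just one easy way to get discriminant 4kN, and the root-finding and lattice reduction are done naively rather than as a real implementation would):

```python
# Toy walk-through of the special-q lattice steps quoted above.
# Form: q(x,y) = x^2 - k*N*y^2, discriminant (2*0)^2 - 4*1*(-kN) = 4kN.

N, k = 10403, 1                    # toy composite 101 * 103, k chosen as 1
kN = k * N
sq = 13                            # a small "special-q" prime (hypothetical)

def q_form(x, y):
    return x * x - kN * y * y

# Roots of q(1, y) ≡ 0 (mod sq).  Brute force suffices at toy size;
# a real implementation would use a modular square root.
roots = [r for r in range(sq) if q_form(1, r) % sq == 0]

def gauss_reduce(u, v):
    """Lagrange/Gauss reduction of a 2-D lattice basis to short vectors."""
    while True:
        if u[0]**2 + u[1]**2 > v[0]**2 + v[1]**2:
            u, v = v, u                     # keep u the shorter vector
        m = round((u[0]*v[0] + u[1]*v[1]) / (u[0]**2 + u[1]**2))
        if m == 0:
            return u, v
        v = (v[0] - m*u[0], v[1] - m*u[1])  # shorten v against u

for r in roots:
    # Points (x, y) with y ≡ r*x (mod sq) form a lattice spanned
    # by (1, r) and (0, sq); reduce it and sieve over small (e, f).
    V1, V2 = gauss_reduce((1, r), (0, sq))
    for e in range(-3, 4):
        for f in range(-3, 4):
            x, y = e*V1[0] + f*V2[0], e*V1[1] + f*V2[1]
            assert q_form(x, y) % sq == 0   # norm divisible by special-q
```

Every (e,f) point visited has q(x,y) divisible by the special-q for free, so only the cofactor q(x,y)/sq has to be smooth over the factor base, which is the whole point of the proposal.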
One way is as Landquist describes in section 2.2 of his paper: express N in some base m close to N^1/3, then square that expression and collect coefficients by power of m. Then parametrize those coefficients in terms of two other variables u,v with gcd(u,v)=1, leading to a monic polynomial of degree 4. This is the step I don't quite follow, in that the choice of parametrization looks really arbitrary. Nevertheless, with a good parametrization you can lattice sieve over this polynomial, finding (u,v) that factor into small primes. That's the RHS of the congruence, the LHS plugs the same (u,v) into the decomposition of N to get what you call X.
This is analogous to the polynomial selection phase of the number field sieve. The big, maybe crucial difference is that NFS chooses two polynomials which have to be sieved simultaneously, whereas this method sieves over only one polynomial. Here there is also no penalty for choosing a different m and sieving over that; the relations you find for different choices of m can all be combined in the postprocessing phase. Finally, the bad news: the average size of the quartic polynomial's values is a lot bigger than N^1/2 unless N has a special form where the decomposition into powers of m has unusually small coefficients. That would probably make the sieving unusably slow compared to SIQS, but maybe not.
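The first step of that construction is easy to check by hand (toy N again; the (u,v) parametrization that Landquist applies afterwards to make the quartic monic is deliberately not shown, since that is the step whose choice looks arbitrary): write N in base m ≈ N^(1/3), square the base-m expression, and collect coefficients by power of m, giving a quartic in m that is ≡ 0 (mod N).

```python
# First step of the Landquist-style construction described above.
# Toy composite only; the later u,v parametrization step is omitted.

N = 10403                          # hypothetical toy composite, 101 * 103
m = round(N ** (1 / 3))            # base close to the cube root of N

digits = []                        # N = a2*m^2 + a1*m + a0 in base m
t = N
while t:
    digits.append(t % m)
    t //= m
assert sum(d * m**i for i, d in enumerate(digits)) == N

# Square the base-m expression of N: convolve the digit vector with itself.
b = [0] * (2 * len(digits) - 1)
for i, ai in enumerate(digits):
    for j, aj in enumerate(digits):
        b[i + j] += ai * aj

# b4*m^4 + b3*m^3 + ... + b0 equals N^2, hence ≡ 0 (mod N): the raw
# material for a degree-4 sieve polynomial.
assert sum(bi * m**i for i, bi in enumerate(b)) == N * N
```

The coefficients b_i are of size roughly m^2 ≈ N^(2/3) before any reduction, which is where the "bigger than N^(1/2)" size problem mentioned above comes from.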
Is this not a formulation of 6.1.7 Zhang's Special Quadratic Sieve in C&P?
[url]http://thales.doa.fmph.uniba.sk/macaj/skola/teoriapoli/primes.pdf[/url]
[QUOTE=jasonp;481861]One way is as Landquist describes in section 2.2 of his paper: express N in some base m close to N^1/3, then square that expression and collect coefficients by power of m. Then parametrize those coefficients in terms of two other variables u,v with gcd(u,v)=1, leading to a monic polynomial of degree 4. ... [/QUOTE]
Thanks Jason. I looked into that paper but didn't consider it much, because I am quite convinced that RDS meant what he wrote, namely that it could be done with a bivariate polynomial of degree 2, q(x,y) = Ax^2 + 2Bxy + Cy^2.

Edit: Or do we finally get a degree-2 polynomial from Zhang's method? I need to have a closer look at Zhang's approach...

@henry: Yes, that's about it.