Quote:
Originally Posted by Chris Card
From a quick scan through, the algorithm seems to depend on first calculating the product of all the norms. Does an implementation have to make use of special tricks, like use of FFT for the multiplication, in order to be practical?

I seriously thought about implementing this, but instrumenting my siever code showed that an enormous pile of sieve values survives to the trial factoring stage only to get weeded out (many thousands of sieve values for every surviving relation). I just have a hard time believing that it would be more efficient to skip the trial factoring (which takes only about 24% of the runtime in my implementation once the factorization job is big enough) and instead do FFTs on numbers tens of millions of digits in size. Maybe that would change if you looked for three large primes instead of two.
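For reference, the batch smoothness test being discussed (Bernstein's product-tree / remainder-tree method) can be sketched as below. This is a minimal illustration, not anyone's actual siever code: the function names are mine, and a practical implementation would need FFT-based multiplication for the huge products rather than relying on generic bigint arithmetic.

```python
from math import gcd, prod

def product_tree(values):
    # Bottom level is the input list; each higher level pairs up
    # adjacent products until a single root product remains.
    tree = [list(values)]
    while len(tree[-1]) > 1:
        prev = tree[-1]
        tree.append([prod(prev[i:i + 2]) for i in range(0, len(prev), 2)])
    return tree

def remainder_tree(x, tree):
    # Push x mod (node) down the product tree, so each leaf n_i
    # receives x mod n_i without ever reducing x by n_i directly.
    rems = [x % tree[-1][0]]
    for level in reversed(tree[:-1]):
        rems = [rems[i // 2] % v for i, v in enumerate(level)]
    return rems

def batch_smooth_parts(primes, candidates):
    # P is the product of the whole factor base -- this is the
    # "product of all the norms"-scale object the thread is about.
    P = product_tree(primes)[-1][0]
    tree = product_tree(candidates)
    rems = remainder_tree(P, tree)
    out = []
    for n, z in zip(candidates, rems):
        # Square z mod n roughly log2(log2(n)) times so that any
        # prime power dividing n is picked up by the gcd.
        e = max(1, (n.bit_length() - 1).bit_length())
        for _ in range(e):
            z = z * z % n
        out.append(gcd(z, n))  # smooth part of n over the prime set
    return out
```

For example, with factor base {2, 3, 5, 7}, the candidates 30 and 49 come out fully smooth, 77 has smooth part 7, and 13 has smooth part 1. The point of contention above is scale: `P` and the candidate products are the objects that reach tens of millions of digits once real sieve output is fed in.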
I get the same feeling about Bernstein's other similar idea, that you can take all of your unfactored relations and find the core set of them with repeated small factors, without actually performing any factorizations. If you need 500 million relations, and have to sift through thousands of times that number, you'd end up having to perform FFTs on a dataset that won't even fit on a hard drive.
Somebody tell me if I have the scale wrong here.
jasonp