2005-06-25, 01:31   #4
jasonp
Tribal Bullet
Oct 2004

Quote:
Originally Posted by Chris Card
From a quick scan through, the algorithm seems to depend on first calculating the product of all the norms. Does an implementation have to make use of special tricks, like use of FFT for the multiplication, in order to be practical?
I seriously thought about implementing this, but instrumenting my siever code showed that an enormous pile of sieve values survives to the trial factoring stage only to get weeded out (many thousands of sieve values for every surviving relation). I just have a hard time believing that it would be more efficient to skip the trial factoring (which only takes about 2-4% of the runtime in my implementation once the factorization job is big enough) and instead do FFTs on numbers tens of millions of digits in size. Maybe that would change if you looked for three large primes instead of two.
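
For anyone who hasn't seen the trick, here is a rough sketch (Python, nothing to do with my siever code) of the product-tree / remainder-tree batch smoothness test that the idea rests on. The function names and the toy inputs are just for illustration, not from any real implementation:

[code]
from math import gcd, prod

def product_tree(values):
    # levels[0] holds the leaves, levels[-1] the single root product
    levels = [list(values)]
    while len(levels[-1]) > 1:
        prev = levels[-1]
        levels.append([prod(prev[i:i + 2]) for i in range(0, len(prev), 2)])
    return levels

def smooth_parts(norms, primes):
    # For each norm, return its largest divisor built only from `primes`.
    # A norm is smooth over the factor base exactly when that divisor
    # equals the norm itself.
    levels = product_tree(norms)
    big = prod(primes)                    # the huge product of factor base primes
    rems = [big % levels[-1][0]]          # remainder tree, pushed from root to leaves
    for level in reversed(levels[:-1]):
        rems = [rems[i // 2] % v for i, v in enumerate(level)]
    out = []
    for n, r in zip(norms, rems):
        e = n.bit_length().bit_length()   # 2^e >= log2(n), covers repeated prime factors
        s = r % n
        for _ in range(e):
            s = s * s % n
        out.append(gcd(s, n))
    return out

# toy check: 210 = 2*3*5*7 is smooth, 211 is prime, 176 = 16*11 has smooth part 16
print(smooth_parts([210, 211, 2**4 * 11], [2, 3, 5, 7]))   # -> [210, 1, 16]
[/code]

The products at the roots of both trees are where the FFT multiplication comes in: the tree structure makes the total work quasi-linear in the size of the batch instead of quadratic, but only if the giant multiplications are fast.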

I get the same feeling about Bernstein's other, similar idea: that you can take all of your unfactored relations and, without actually performing any factorizations, find the core set of them with repeated small factors. If you need 500 million relations and have to sift through thousands of times that number, you'd end up having to perform FFTs on a dataset that won't even fit on a hard drive.
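
To put rough numbers on that (the 500 million is from above; the candidates-per-relation ratio and the bytes per candidate are guesses of mine, so treat the result as an order of magnitude only):

[code]
relations_needed   = 500_000_000   # stated above
candidates_per_rel = 2_000         # "thousands of times that number" -- assumed value
bytes_per_cand     = 50            # rough size of one stored unfactored norm -- assumed

total = relations_needed * candidates_per_rel * bytes_per_cand
print(f"{total / 1e12:.0f} TB of candidate data")   # ~50 TB
[/code]

Even with those fairly tame guesses you're at tens of terabytes, well past anything a single disk holds.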

Somebody tell me if I have the scale wrong here.

jasonp
