mersenneforum.org

-   -   DJB paper "Factoring into coprimes in essentially linear time" (https://www.mersenneforum.org/showthread.php?t=4246)

Chris Card 2005-06-24 16:51

DJB paper "Factoring into coprimes in essentially linear time"
 
Hi All,

I just came across this paper by Bernstein in the "Journal of Algorithms".

Here's the link:
[url]http://www.sciencedirect.com/science?_ob=MImg&_imagekey=B6WH3-4DPC584-1-1&_cdi=6839&_user=10&_orig=browse&_coverDate=01%2F01%2F2005&_sk=999459998&view=c&wchp=dGLbVtb-zSkWA&md5=798329d20dc438eeb8face2ba1d09c94&ie=/sdarticle.pdf[/url]

Can anyone tell me if this algorithm is practical for use in an NFS siever for doing the final factorisation of the candidate smooth norms?

From a quick scan through, the algorithm seems to depend on first calculating the product of all the norms. Does an implementation have to make use of special tricks, like use of FFT for the multiplication, in order to be practical?

Chris

xilman 2005-06-24 19:58

[QUOTE=Chris Card]From a quick scan through, the algorithm seems to depend on first calculating the product of all the norms. Does an implementation have to make use of special tricks, like use of FFT for the multiplication, in order to be practical?[/QUOTE]

Got it in one.

Dan's algorithm has been kicking around for some time now. It works well if memory is free, or nearly so, and if you have a [b]lot[/b] of numbers for which you want the small factors. The latter is certainly the case in NFS.
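
For concreteness, here's a minimal sketch of the product-tree / remainder-tree idea the paper builds on. This is my own illustration rather than anything lifted from the paper: plain Python integers stand in for a serious bignum library with fast (FFT) multiplication, and the factor base and norms are made-up toy values.

[CODE]
# Sketch of Bernstein-style batch smoothness detection via product
# and remainder trees.  Toy inputs, plain Python integers throughout.
from math import gcd, prod

def product_tree(xs):
    """Levels of a balanced product tree: leaves first, root last."""
    tree = [list(xs)]
    while len(tree[-1]) > 1:
        level = tree[-1]
        tree.append([prod(level[i:i + 2]) for i in range(0, len(level), 2)])
    return tree

def remainders(z, xs):
    """z mod x for every x in xs, via a remainder tree over xs."""
    tree = product_tree(xs)
    rems = [z % tree[-1][0]]                   # reduce z by the root first
    for level in reversed(tree[:-1]):          # then push remainders down
        rems = [rems[i // 2] % x for i, x in enumerate(level)]
    return rems

def smooth_parts(primes, norms):
    """Largest divisor of each norm made up only of the given primes."""
    z = product_tree(primes)[-1][0]            # product of the factor base
    parts = []
    for n, r in zip(norms, remainders(z, norms)):
        e = 1 << n.bit_length().bit_length()   # 2^e >= lg n
        parts.append(gcd(n, pow(r, e, n)))     # captures p^k | n since k <= lg n
    return parts

primes = [2, 3, 5, 7, 11, 13]
norms = [2**5 * 3 * 7**2, 17 * 19, 5 * 11**3]
print(smooth_parts(primes, norms))             # -> [4704, 1, 6655]
[/CODE]

The point of the balanced trees is that all the work lands in a handful of multiplications and divisions of huge numbers, which is only a win if those are done with asymptotically fast (FFT-based) arithmetic.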

Paul

Chris Card 2005-06-24 20:08

[QUOTE=xilman]Got it in one.[/QUOTE]
Damn! That means I'm actually going to have to understand that stuff now ...
Is it used in any current NFS implementations?

[QUOTE=xilman]Dan's algorithm has been kicking around for some time now.[/QUOTE]
Yes, I noticed that the paper was submitted in 1998, but only just got published!

Chris

jasonp 2005-06-25 01:31

[QUOTE=Chris Card]
From a quick scan through, the algorithm seems to depend on first calculating the product of all the norms. Does an implementation have to make use of special tricks, like use of FFT for the multiplication, in order to be practical?
[/QUOTE]
I seriously thought about implementing this, but instrumenting my siever code showed that an enormous pile of sieve values survive to the trial factoring stage only to get weeded out (many thousands of sieve values for every surviving relation). I just have a hard time believing that it would be more efficient to skip the trial factoring (which only takes about 2-4% of the runtime in my implementation once the factorization is big enough) and instead do FFTs on numbers tens of millions of digits in size. Maybe that would change if you looked for three large primes instead of two.

I get the same feeling about Bernstein's other similar idea, that you can take all of your unfactored relations and find the core set of them with repeated small factors, without actually performing any factorizations. If you need 500 million relations, and have to sift through thousands of times that number, you'd end up having to perform FFTs on a dataset that won't even fit on a hard drive.
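
Back of the envelope, with made-up per-candidate figures:

[CODE]
# Rough scale check; candidates-per-relation and bytes-per-candidate are guesses.
relations_needed = 500_000_000        # relations wanted for the matrix
candidates_per_rel = 2_000            # "thousands" of sieve survivors per kept relation
bytes_per_candidate = 30              # a few dozen digits of unfactored norm each
total_bytes = relations_needed * candidates_per_rel * bytes_per_candidate
print(total_bytes / 10**12, "TB")     # ~30 TB of candidates to multiply together
[/CODE]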

Somebody tell me if I have the scale wrong here.

jasonp

xilman 2005-06-25 09:22

[QUOTE=jasonp]
I get the same feeling about Bernstein's other similar idea, that you can take all of your unfactored relations and find the core set of them with repeated small factors, without actually performing any factorizations. If you need 500 million relations, and have to sift through thousands of times that number, you'd end up having to perform FFTs on a dataset that won't even fit on a hard drive.

Somebody tell me if I have the scale wrong here.[/QUOTE]

You've got it slightly wrong, I believe. As I understand it, the product tree of small primes should be about the same size as the product of the unfactored relations. However, with a reasonably sized factor base, the numbers are unlikely to fit in RAM. That was my conclusion when I investigated the algorithm, including a couple of noddy implementations, a few years ago.
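
For a rough sense of scale (the factor-base bound here is my assumption, purely for illustration):

[CODE]
# Rough size of the product tree over the factor-base primes.
from math import log, log2

B = 2**30                       # assumed factor-base bound
num_primes = B / log(B)         # ~pi(B) by the prime number theorem: ~5.2e7 primes
root_bits = B / log(2)          # sum of log2(p) for p <= B is ~B/ln 2 (Chebyshev)
depth = log2(num_primes)        # levels in a balanced product tree
print(root_bits / 8 / 2**20, "MB at the root")                # ~185 MB
print(depth * root_bits / 8 / 2**30, "GB for the whole tree") # ~4.6 GB; each level is root-sized
[/CODE]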

When I told Dan that his algorithm seemed to be correct but required infeasible amounts of memory, he replied that the product tree, or portions thereof, could reside on disk without changing the asymptotic behaviour. Again correct, but now the proportionality constant grows substantially.

The algorithm may yet prove practical but without a working implementation I remain doubtful.


Paul

ewmayer 2005-06-25 18:44

This idea of completely ignoring memory access time when estimating algorithmic complexity seems somewhat bogus to me. Reading and writing memory takes CPU time, so if memory usage winds up being the bottleneck of the algorithm in question and the memory usage is not linear, well, then neither is the algorithm. If I understand Bernstein properly (the memory estimates are buried pretty deep amongst all the neat-o newfangled acronyms), the resulting complexity estimate for his algorithm would still be subquadratic (specifically n*log n), but not linear as he claims. Am I reading things correctly?

xilman 2005-06-25 19:41

[QUOTE=ewmayer]... newfangled acronyms) the resulting complexity estimate for his algorithm would still be subquadratic (specifically n*log n), but not linear as he claims. Am I reading things correctly?[/QUOTE]
Yes and no. If memory is slowed down by a constant amount, the asymptotics remain the same. If memory access scales as a function of n then it indeed must be taken into account.

Assuming memory access time of O(1), I believe that Dan's claim is true: factoring in essentially linear time.
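
For reference, the usual cost accounting for the product and remainder trees at the core of the paper, in that unit-cost memory model, looks like this:

[CODE]
% Product tree + remainder tree over inputs totalling n bits, using
% FFT-based multiplication with M(n) = O(n \log n \log\log n):
%
%   T(n) = O\bigl( M(n) \log n \bigr)
%        = n (\log n)^{2 + o(1)}
%        = n^{1 + o(1)}.
%
% "Essentially linear" means linear up to polylogarithmic factors,
% not literally O(n).
[/CODE]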


Paul

