20050624, 16:51  #1 
Aug 2004
2^{2}·3·11 Posts 
DJB paper "Factoring into coprimes in essentially linear time"
Hi All,
I just came across this paper by Bernstein in the Journal of Algorithms. Here's the link: http://www.sciencedirect.com/science.../sdarticle.pdf Can anyone tell me whether this algorithm is practical for use in an NFS siever, for the final factorisation of the candidate smooth norms? From a quick scan, the algorithm seems to depend on first computing the product of all the norms. Does an implementation have to use special tricks, such as FFT-based multiplication, in order to be practical? Chris 
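For reference, here is a minimal toy sketch of the product-tree / remainder-tree idea as I understand it. This is my own code, not from the paper, and it omits the refinements (and the FFT-based big-integer multiplication) a serious implementation would need:

```python
from math import gcd, prod

def product_tree(xs):
    """Bottom-up product tree: tree[0] is the leaf level, tree[-1] the root."""
    tree = [list(xs)]
    while len(tree[-1]) > 1:
        level = tree[-1]
        tree.append([prod(level[i:i + 2]) for i in range(0, len(level), 2)])
    return tree

def remainders(n, xs):
    """Compute n mod x for every x in xs via a remainder tree."""
    tree = product_tree(xs)
    rems = [n % tree[-1][0]]
    for level in reversed(tree[:-1]):
        # each parent's remainder reduces both of its children
        rems = [rems[i // 2] % x for i, x in enumerate(level)]
    return rems

def smooth_parts(primes, norms):
    """Largest divisor of each norm built only from the given primes."""
    P = prod(primes)
    out = []
    for n, r in zip(norms, remainders(P, norms)):
        g = gcd(r, n)          # product of the distinct primes dividing n
        s = 1
        while g > 1:           # pull out repeated prime factors
            s *= g
            n //= g
            g = gcd(g, n)
        out.append(s)
    return out
```

A candidate norm is smooth exactly when its smooth part equals the norm itself; one pass over the batch answers that for every candidate at once, which is why the method wants many numbers at a time.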
20050624, 19:58  #2  
Bamboozled!
May 2003
Down not across
3·13·293 Posts 
Dan's algorithm has been kicking around for some time now. It works well if memory is free, or nearly so, and if you have a lot of numbers for which you want the small factors. The latter is certainly the case in NFS. Paul 

20050624, 20:08  #3  
Aug 2004
2^{2}×3×11 Posts 
Is it used in any current NFS implementations?
Chris 

20050625, 01:31  #4  
Tribal Bullet
Oct 2004
5·709 Posts 
I get the same feeling about Bernstein's other similar idea, that you can take all of your unfactored relations and find the core set of them with repeated small factors, without actually performing any factorizations. If you need 500 million relations, and have to sift through thousands of times that number, you'd end up having to perform FFTs on a dataset that won't even fit on a hard drive. Somebody tell me if I have the scale wrong here. jasonp Last fiddled with by jasonp on 20050625 at 01:42 
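For what it's worth, a back-of-envelope on that scale. The per-candidate byte count is my own guess, not a number from the thread; only the arithmetic should be trusted:

```python
# All three inputs are assumptions for illustration, taken loosely from
# the "500 million relations" / "thousands of times that number" figures.
relations_needed = 500_000_000        # relations actually kept
candidates_per_relation = 1_000       # sieve reports per kept relation
bytes_per_candidate = 32              # guess: a pair of ~128-bit norms

dataset_bytes = relations_needed * candidates_per_relation * bytes_per_candidate
print(f"{dataset_bytes / 10**12:,.0f} TB")  # prints "16 TB"
```

Even with these conservative assumptions the dataset runs to tens of terabytes, well beyond a 2005-era hard drive, before counting the product-tree workspace on top of it.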

20050625, 09:22  #5  
Bamboozled!
May 2003
Down not across
3×13×293 Posts 
When I told Dan that his algorithm seemed to be correct but required infeasible amounts of memory, he replied that the product tree, or portions thereof, could reside on disk without changing the asymptotic behaviour. Again correct, but now the proportionality constant grows substantially. The algorithm may yet prove practical, but without a working implementation I remain doubtful. Paul 

20050625, 18:44  #6 
∂^{2}ω=0
Sep 2002
Repรบblica de California
2DDB_{16} Posts 
This idea of completely ignoring memory access time when estimating algorithmic complexity seems somewhat bogus to me. Reading and writing memory takes CPU time, so if memory usage winds up being the bottleneck of the algorithm in question and the memory usage is not linear, well, then neither is the algorithm. If I understand Bernstein properly (the memory estimates are buried pretty deep amongst all the neato newfangled acronyms) the resulting complexity estimate for his algorithm would still be subquadratic (specifically n*log n), but not linear as he claims. Am I reading things correctly?
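A quick toy measurement supporting the memory point (my own experiment, not from the paper): in a product tree each level stores roughly the same total number of bits as the leaves, so keeping the whole tree costs about a log(n) factor more memory than the input itself.

```python
import random

def total_bits(xs):
    return sum(x.bit_length() for x in xs)

random.seed(1)
leaves = [random.getrandbits(64) | 1 for _ in range(1024)]

tree_bits, level = 0, leaves
while len(level) > 1:
    tree_bits += total_bits(level)
    level = [level[i] * level[i + 1] for i in range(0, len(level), 2)]
tree_bits += total_bits(level)  # the root

# 1024 leaves -> 11 levels, and each level is only slightly smaller
# than the one below it, so the whole tree is ~11x the input size.
print(round(tree_bits / total_bits(leaves), 1))
```

So the tree over n inputs occupies Theta(n log n) bits, which is exactly the nonlinear memory footprint in question.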

20050625, 19:41  #7  
Bamboozled!
May 2003
Down not across
3·13·293 Posts 
Assuming memory access time of O(1), I believe that Dan's claim is true: factoring in essentially linear time. Paul 
