#1 · Aug 2004 · 7×19 Posts
Hi All,

I just came across this paper by Bernstein in the "Journal of Algorithms". Here's the link: http://www.sciencedirect.com/science.../sdarticle.pdf

Can anyone tell me whether this algorithm is practical for use in an NFS siever, for doing the final factorisation of the candidate smooth norms? From a quick scan through, the algorithm seems to depend on first calculating the product of all the norms. Does an implementation have to make use of special tricks, like FFT-based multiplication, in order to be practical?

Chris
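For concreteness, here is a minimal, unoptimized Python sketch of the product-tree/remainder-tree structure the paper is built on (function names are my own, and the quadratic-time batch loop is mine; Bernstein's linear-time claim additionally relies on FFT-based multiplication throughout, which plain Python integers do not guarantee):

```python
from math import gcd, prod

def product_tree(xs):
    """Levels of pairwise products; levels[-1][0] is the product of all xs."""
    levels = [list(xs)]
    while len(levels[-1]) > 1:
        prev = levels[-1]
        levels.append([prod(prev[i:i + 2]) for i in range(0, len(prev), 2)])
    return levels

def batch_remainder(P, norms):
    """P mod n for every n in norms, via a remainder tree over the norms."""
    levels = product_tree(norms)
    rems = [P % levels[-1][0]]
    for level in levels[-2::-1]:
        rems = [rems[i // 2] % x for i, x in enumerate(level)]
    return rems

def smooth_parts(primes, norms):
    """Smooth part of each norm over the prime set; a norm is smooth
    exactly when its smooth part equals the norm itself."""
    P = product_tree(primes)[-1][0]   # one big product of the factor base
    out = []
    for n, r in zip(norms, batch_remainder(P, norms)):
        # Square r mod n until the exponent 2^k exceeds log2(n), so any
        # prime power p^e dividing n (with e <= log2 n) divides r^(2^k).
        k = 1
        while k < n.bit_length():
            r = r * r % n
            k *= 2
        out.append(gcd(r, n))         # equals n precisely when n is smooth
    return out
```

For example, `smooth_parts([2, 3, 5, 7], [210, 48, 22, 13])` returns `[210, 48, 2, 1]`: the first two norms are 7-smooth, the third has smooth part 2, and the last is coprime to the factor base.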
#2 · Bamboozled! · May 2003 · Down not across · 2D88₁₆ Posts
Dan's algorithm has been kicking around for some time now. It works well if memory is free, or nearly so, and if you have a lot of numbers for which you want the small factors. The latter is certainly the case in NFS.

Paul
#3 · Aug 2004 · 7·19 Posts
Is it used in any current NFS implementations?

Chris
#4 · Tribal Bullet · Oct 2004 · 3²×5×79 Posts
I get the same feeling about Bernstein's other similar idea, that you can take all of your unfactored relations and find the core set of them with repeated small factors, without actually performing any factorizations. If you need 500 million relations, and have to sift through thousands of times that number, you'd end up having to perform FFTs on a dataset that won't even fit on a hard drive. Somebody tell me if I have the scale wrong here.

jasonp

Last fiddled with by jasonp on 2005-06-25 at 01:42
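A quick back-of-envelope check of that scale (the oversieve factor and norm size below are illustrative guesses of mine, not figures from the thread):

```python
# Rough size of the data fed to a batch product over "thousands of times"
# the 500 million needed relations. Both constants are assumptions.
relations_needed = 500_000_000
oversieve_factor = 2_000      # "thousands of times that number"
bits_per_norm = 200           # a plausible size for an NFS norm

total_bits = relations_needed * oversieve_factor * bits_per_norm
total_bytes = total_bits // 8
print(f"raw input to the product tree: {total_bytes / 1e12:.0f} TB")  # 25 TB
```

And that is just the leaves: an FFT-based multiply needs working space several times its operand size, and the product tree has tens of levels of comparable total size, so the real footprint is a sizeable multiple of this.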
#5 · Bamboozled! · May 2003 · Down not across · 2³·31·47 Posts
When I told Dan that his algorithm seemed to be correct but required infeasible amounts of memory, he replied that the product tree, or portions thereof, could reside on disk without changing the asymptotic behaviour. Again correct, but now the proportionality constant grows substantially. The algorithm may yet prove practical, but without a working implementation I remain doubtful.

Paul
#6 · ∂2ω=0 · Sep 2002 · República de California · 5×2,351 Posts
This idea of completely ignoring memory access time when estimating algorithmic complexity seems somewhat bogus to me. Reading and writing memory takes CPU time, so if memory access winds up being the bottleneck of the algorithm in question and the memory usage is not linear, then neither is the algorithm. If I understand Bernstein correctly (the memory estimates are buried pretty deep amongst all the neat-o newfangled acronyms), the resulting complexity estimate for his algorithm would still be subquadratic (specifically n·log n), but not linear as he claims. Am I reading things correctly?
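One way to see where an n·log n memory-traffic figure comes from (a sketch of mine, not a calculation from the paper): every level of the product tree carries roughly the same total number of bits as the inputs, and there are about log2(n) levels, so merely writing the tree out already touches n·log n data.

```python
# Demonstrate that each product-tree level holds ~the same total bit count,
# across 1 + log2(1024) = 11 levels for 1024 random 64-bit inputs.
import math
import random

def product_tree(xs):
    """Levels of pairwise products; levels[-1][0] is the product of all xs."""
    levels = [list(xs)]
    while len(levels[-1]) > 1:
        prev = levels[-1]
        levels.append([math.prod(prev[i:i + 2]) for i in range(0, len(prev), 2)])
    return levels

random.seed(1)
xs = [random.getrandbits(64) | 1 for _ in range(1024)]
levels = product_tree(xs)

for depth, level in enumerate(levels):
    print(depth, sum(x.bit_length() for x in level))  # ~65,000 bits per level
```

Each of the 11 levels totals roughly 64,000 bits (products can only shed about one bit per multiplication), which is the "every level costs as much as the input" behaviour behind the extra log factor.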
#7 · Bamboozled! · May 2003 · Down not across · 2³·31·47 Posts
Assuming a memory access time of O(1), I believe that Dan's claim is true: factoring in essentially linear time.

Paul