Can anyone help me?
Hi friends,
Filtering the sieve matrix is one of the important steps. Can anyone please tell me the actual algorithm used in GGNFS, or point me to any recent literature on the subject? I know only the methods of Pomerance-Smith and S. Cavallar.
[QUOTE=shaileshsp29]Hi friends,
Filtering the sieve matrix is one of the important steps. Can anyone please tell me the actual algorithm used in GGNFS, or point me to any recent literature on the subject? I know only the methods of Pomerance-Smith and S. Cavallar.[/QUOTE] The CWI GNFS suite includes the clique algorithm implemented by S. Cavallar. I have also implemented filtering code (15 years ago!) based upon the intelligent Gaussian elimination ideas of A. Odlyzko. It was written in Fortran (!!), and is *not* competitive with the CWI code in terms of the quality of the final matrix.

I use the CWI code. While it has a few quirks and bugs, it produces excellent results (but it needs quite a bit of manual coaxing to work well).

If you do a web search on Stefania's name, I am sure her paper will turn up: Strategies in Filtering in the Number Field Sieve, CWI Report MAS-R0012.
[QUOTE=shaileshsp29]Can anyone please tell me the actual algorithm used in GGNFS[/QUOTE]
GGNFS uses a simple greedy algorithm. It produces fine matrices for small factorizations, but is not suitable for large-scale ones.
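For concreteness, here is a toy sketch of the kind of greedy 2-way merging a simple filter performs. This is not the actual GGNFS code; the function name and representation are made up for illustration (each relation is just the set of prime/ideal indices appearing to an odd power, i.e. one GF(2) row of the matrix):

```python
from collections import Counter

def greedy_two_way_merges(relations):
    """Greedy 2-way merging: whenever a prime appears in exactly two
    relations, replace the pair by their symmetric difference (their
    sum mod 2), which eliminates that prime's column entirely."""
    rels = [set(r) for r in relations]
    changed = True
    while changed:
        changed = False
        counts = Counter(p for r in rels for p in r)
        for p, c in counts.items():
            if c == 2:
                a, b = [r for r in rels if p in r]
                merged = a ^ b                 # prime p cancels out
                rels = [r for r in rels if p not in r]
                if merged:                     # drop empty rows
                    rels.append(merged)
                changed = True
                break                          # counts are now stale
    return rels
```

A real filter would additionally bound the weight of each merged row and stop merging when the fill-in no longer pays for the eliminated column; merging unconditionally, as above, is exactly what makes the simple greedy approach unsuitable for large problems.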
To R.D. Silverman:
Thanks for your quick response. As mentioned in my post, I am aware of S. Cavallar's approach, and also of Pomerance and Smith's approach, which is a modification of Odlyzko's structured Gaussian elimination. But S. Cavallar's thesis is at least five years old, and the Pomerance-Smith and Odlyzko papers are 15 years old. I was wondering whether any paper has been published after 2000.
[QUOTE=shaileshsp29]To R.D. Silverman:
Thanks for your quick response. As mentioned in my post, I am aware of S. Cavallar's approach, and also of Pomerance and Smith's approach, which is a modification of Odlyzko's structured Gaussian elimination. But S. Cavallar's thesis is at least five years old, and the Pomerance-Smith and Odlyzko papers are 15 years old. I was wondering whether any paper has been published after 2000.[/QUOTE] AFAIK, no.
I believe the filtering was done on a single high-end machine (please correct me if this is wrong). But in the future the sieve-output matrix is expected to grow beyond 20 GB, and a single high-end machine will no longer suffice. Nor can we simply use many machines: communication would become the bottleneck, at least for the Pomerance-Smith and Cavallar approaches run on multiple machines!
[QUOTE=shaileshsp29]I believe the filtering was done on a single high-end machine (please correct me if this is wrong). But in the future the sieve-output matrix is expected to grow beyond 20 GB, and a single high-end machine will no longer suffice. Nor can we simply use many machines: communication would become the bottleneck, at least for the Pomerance-Smith and Cavallar approaches run on multiple machines![/QUOTE]
CPU time really isn't a problem in filtering. The problem is memory. The CWI code is not very memory efficient (but it could be with some work; I don't have the time). A single 64-bit machine with a lot of memory (16 to 32 GB) will suffice for even the largest matrices in the near future. I do not see multiple processors being required for filtering for quite some time.
[QUOTE=R.D. Silverman]CPU time really isn't a problem in filtering. The problem is memory. The CWI code is not very memory efficient (but it could be with some work; I don't have the time). A single 64-bit machine with a lot of memory (16 to 32 GB) will suffice for even the largest matrices in the near future. I do not see multiple processors being required for filtering for quite some time.[/QUOTE]Don Leclair and I worked a little to reduce the memory requirements of the CWI filtering process. Our concern was its egregious demands for duplicate and singleton removal, especially singleton removal. Don wrote a duplicate remover; I wrote a quick-and-dirty singleton remover. My code certainly needs more work, because at present it looks only at primes and not prime ideals. Nonetheless, it does a useful job.

Richard Wackerbarth and I have brainstormed a few ideas about parallel filtering, but we've not tried implementing any of them yet. Given our excessive workloads at the moment, I doubt we'll get much further in the near future.

Paul
[QUOTE=shaileshsp29]To R.D. Silverman:
Thanks for your quick response. As mentioned in my post, I am aware of S. Cavallar's approach, and also of Pomerance and Smith's approach, which is a modification of Odlyzko's structured Gaussian elimination. But S. Cavallar's thesis is at least five years old, and the Pomerance-Smith and Odlyzko papers are 15 years old. I was wondering whether any paper has been published after 2000.[/QUOTE] There was another recent paper on sparse Gaussian elimination - it's been a while and I can't find it online now - that mentioned Cavallar's work in passing. Instead of ordinary NFS filtering they suggested Gaussian elimination with approximate Markowitz pivoting. For the record, I thought up the same method independently while building an early version of what would become msieve, and it doesn't work very well at all.

My understanding of Cavallar's algorithm is that it actually [I]is[/I] Gaussian elimination, except that the clique-processing part only figures into how the initial matrix is built. I'm experimenting with Gaussian elimination (after the clique phase) using a variant of full Markowitz pivoting, which can be implemented efficiently with a discrete priority queue. Both Odlyzko's method and Cavallar's merge phase sound like approximations to that, and I'm hoping that simpler code which doesn't approximate can still produce a decent matrix. It seems to work well, though obviously I haven't thrown big problems at it. The (undocumented) source code is in the current version of msieve.

jasonp

PS: Haven't read it carefully, but for block Lanczos there's [url]http://www.math.uu.nl/people/bisselin/flesch.pdf[/url]
Thanks, jasonp!
One more query:
After removing all the singletons, we need to throw out excess rows if there are more than the k extra relations we intend to keep. There are two strategies:
1. Pomerance and Smith throw out the heaviest relations.
2. S. Cavallar builds cliques of relations that are 2-way mergeable, weighs them with some heuristic, and throws out cliques in a suitable priority order.
My question: is the second approach always better than the first, or is it just an alternative? Has anyone compared the two experimentally?
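For reference, the clique construction in the second strategy can be sketched as a union-find over relations joined by primes of frequency two. This is a simplified reading of Cavallar's definition, with a made-up representation (relations as sets of prime identifiers):

```python
def build_cliques(relations):
    """Group relations into 'cliques' in Cavallar's sense: two
    relations are joined whenever they are the only two containing
    some prime (a forced 2-way merge); a clique is a connected
    component of that graph."""
    parent = list(range(len(relations)))

    def find(x):                 # union-find with path compression
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    occurrences = {}
    for i, rel in enumerate(relations):
        for p in rel:
            occurrences.setdefault(p, []).append(i)
    for idxs in occurrences.values():
        if len(idxs) == 2:       # frequency-2 prime joins the pair
            parent[find(idxs[0])] = find(idxs[1])

    cliques = {}
    for i in range(len(relations)):
        cliques.setdefault(find(i), []).append(i)
    return list(cliques.values())
```

Each clique would then be scored by some heuristic weight and whole cliques discarded heaviest-first until only the intended excess remains. The rationale for deleting a whole clique at once is that removing any one of its relations would turn its frequency-2 partners into singletons anyway, so the rest of the clique would eventually be removed regardless.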