#34
Dec 2010
Monticello
70316 Posts
jason, thank you...including for the estimate that a souped-up i7 would take a year to do a 50-60M matrix using block Lanczos.

What would you suggest I substitute for Pollard and Diamond on the number theory side? Or should I go to the library and run the photocopier? (I can get to the UVA library, but there's significant travel involved, especially if I'm going to spend weeks of study on it in 1-hour segments. I've also got a Kindle, and I do expect to spend a little money.) Number theory is taught to grad students from SOMETHING I can get, isn't it?

Do we have a good thread explaining QS, SNFS, and GNFS (ideally stripped of all the algorithmic optimisations), so I understand the matrix being fed into the Lanczos algorithm? (And sorry if I can't spell 100%; that pronunciation indicates I'm a bumpkin.)

I'm just about to be full of relatively simple questions about the Montgomery paper from properly working through it (like: GF(2) is the integers modulo 2, equipped with the usual addition and multiplication operations, which simplify to boolean operations, right?). Should I ask them here or make a new math thread?

Last fiddled with by Christenson on 2011-03-19 at 14:40
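(For the GF(2) question: yes — addition mod 2 is XOR and multiplication mod 2 is AND, and the reason this matters for block Lanczos is that 64 GF(2) elements packed into one machine word can all be added or multiplied with a single bitwise instruction. A minimal illustrative sketch, not taken from the Montgomery paper:)

```python
# GF(2) = {0, 1} with arithmetic mod 2:
#   addition mod 2       is bitwise XOR
#   multiplication mod 2 is bitwise AND
def gf2_add(a, b):
    return a ^ b  # (a + b) mod 2, componentwise on every bit

def gf2_mul(a, b):
    return a & b  # (a * b) mod 2, componentwise on every bit

# The payoff for block Lanczos: packing 64 GF(2) elements into one
# 64-bit word means a single XOR/AND operates on 64 field elements
# at once, componentwise.
x = 0b1011
y = 0b0110
print(bin(gf2_add(x, y)))  # componentwise sum over GF(2): 0b1101
print(bin(gf2_mul(x, y)))  # componentwise product over GF(2): 0b10
```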
#35
Just call me Henry
"David"
Sep 2007
Cambridge (GMT/BST)
2×3³×109 Posts
What do people reckon to Intel's Thunderbolt interface? What performance will 10 Gbit/s (in both directions) produce in comparison with InfiniBand? It sounds to me like home users with 2-3 quads might be able to use this technology usefully and cost-effectively.
#36
(loop (#_fork))
Feb 2006
Cambridge, England
6423₁₀ Posts
I don't believe that Thunderbolt is switchable, so it would be connecting precisely two computers at 10 Gbps; and I don't think anyone is producing an interface that does networking-over-Thunderbolt yet, though it's probably possible.
#37
Nov 2003
16444₈ Posts
Quote:
#38
(loop (#_fork))
Feb 2006
Cambridge, England
6423₁₀ Posts
I agree that Lidl & Niederreiter is a good book, but I'm not sure it's a relevant book if your particular interest is factorisation; it's much more about the algebraic structure of GF(p^n) (that is, about polynomials over finite fields), and whilst chapter six would be absolutely essential if you're thinking about the Wiedemann algorithm, I'm not sure there's all that much in the Lanczos work for which L&N is handy.
#39
Dec 2010
Monticello
5×359 Posts
Actually, I think L&N looks good enough for me to get a copy and work through parts of it, even if I need something more... now, if I can get work to stop having panics due to a total lack of planning, I might have the energy to sit down and order this stuff!
#40
Nov 2010
2·5² Posts
Some more resources on GNFS and algebraic number theory. The best and most complete book about GNFS I have met is "The development of the number field sieve". It is actually a compilation of somewhat dated articles, but if you want to get the idea of what's behind GNFS, you must read at least the part by Buhler, Lenstra and Pomerance. It is the only (almost) complete and rigorous description of GNFS in the general case I know of. Although some of the statements there lack proofs (especially about the order A), it still has all the information you need to understand what is happening during the algorithm.

As for algebraic number theory books, I'd recommend reading something by Pohst or Cohen about the computational aspects (google for "computational algebraic number theory"). They give you a sufficient minimum of theoretical properties you'd want to know to program algorithms in algebraic fields, such as how algebraic numbers and ideals can be represented in computers, how to operate with them, etc.

Speaking of Lanczos code being hard to distribute: this is a known fact for those playing with Lanczos-like iterative algorithms over the reals as well. It is due to the large number of so-called global communications the algorithm involves. Its competitor, the Wiedemann algorithm, has a minimum of them and can be distributed far more efficiently, but has to redo all the matrix multiplications to build the final solution.

As it turned out, we (at Moscow State University) implemented our own version of the parallel block Lanczos algorithm (the Montgomery version) in a manner quite similar to what Jason and Greg did. Our implementation was less platform-optimised but had a pretty nice optimisation for global communications, which I shared with Jason. The current version of the algorithm (e.g. from msieve) can be proved to speed up the computation by a factor of \sqrt{P}, where P is the number of processors in the grid; this estimate accounts for both computation and communication times. All the other known implementations of parallel block Lanczos, by contrast, show almost no speed-up in the communication time as P grows.
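For intuition about where those global communications come from: each block Lanczos iteration needs inner products of the form VᵀW between N×64 block vectors over GF(2). With the rows of V and W distributed across processors, each such product is a global reduction (every processor XOR-combines its partial 64×64 bit matrix with everyone else's), whereas the sparse matrix-vector product is mostly local. A toy single-process sketch with bit-packed rows — illustrative only, not msieve's or MSU's actual code:

```python
from functools import reduce

# A block vector is a list of N rows; each row is a 64-bit int,
# i.e. 64 GF(2) entries packed into one machine word.

def inner_product(V, W):
    """Compute the 64x64 GF(2) matrix V^T W as 64 packed rows.
    Entry (i, j) = sum over rows r of V[r]_i * W[r]_j (mod 2)."""
    out = [0] * 64
    for v, w in zip(V, W):
        for i in range(64):
            if (v >> i) & 1:   # bit i of this V-row is 1, so
                out[i] ^= w    # the whole W-row adds into output row i (XOR = GF(2) add)
    return out

# In a parallel run, each processor computes inner_product() on its own
# slice of rows, then the 64 words are XOR-combined across all P
# processors -- this all-reduce is the "global communication" above.
def combine(partials):
    return [reduce(lambda a, b: a ^ b, col) for col in zip(*partials)]
```

Because the reduced object is only 64 words regardless of N, the cost is dominated by latency and by how often the reduction happens, which is why reducing the number of global communications (rather than their size) is what pays off.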
Similar Threads

| Thread | Thread Starter | Forum | Replies | Last Post |
|---|---|---|---|---|
| Msieve with MPI block Lanczos | jasonp | Msieve | 104 | 2013-06-17 11:28 |
| Initial vector in Lanczos | Robert Holmes | Miscellaneous Math | 4 | 2010-09-11 01:34 |
| Lanczos error | Andi47 | Msieve | 7 | 2009-01-11 19:33 |
| Msieve Lanczos scalability | Jeff Gilchrist | Msieve | 1 | 2009-01-02 09:32 |
| how would you distribute the $100k prize? | ixfd64 | Lounge | 26 | 2006-01-20 02:33 |