- **Factoring**
(*https://www.mersenneforum.org/forumdisplay.php?f=19*)

  - **NFS with 5 and 6 large primes**
(*https://www.mersenneforum.org/showthread.php?t=9691*)

I realized last week that the arbitrary-precision library needed by msieve's algebraic square root code could be put to other uses; in particular, it can be used to implement Bernstein's batch factoring algorithms to speed up the sieving stage. As a proof of concept I modified msieve's line siever to use three rational and/or algebraic large primes, but to defer actually trying to factor bi- and tri-composites until a few hundred thousand of them have accumulated. Once that happens, batch factoring isolates the ~2.5% of relations whose cofactors actually need processing.
As an example, I used a recent C135 factored by Hallstein using GNFS. His run used 28-bit large primes, and my mods used the product of all primes < 2^26 to perform batch factoring. Sieve reports with remaining cofactors smaller than 2^81 get batched and submitted to Bernstein's algorithm, and bi- and tri-composites that do not contain at least one factor below 2^26 are aborted.

The results are really encouraging. The current code finds twice as many relations and only takes 1.5x longer to do so, for a net speedup of 25% in the time per relation. This is because the vast majority of tri-composites need no explicit factoring at all, and only 1% of the tri-composites that need factoring actually need to be split into three primes (the input would need all three large primes < 2^26, which is extremely rare). The speedup approaches 50% as the norms increase with larger b values, when it becomes feasible to use three large primes on both sides. The extra time needed seems to be split evenly between the batch factoring and the factoring of a much larger number of sieve reports.

This isn't going to make a line siever competitive with a good lattice siever, but the same batch factoring techniques can be used with a lattice siever and could conceivably gain the same kinds of speedups for large jobs.

jasonp |
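The batch step described above can be sketched with product and remainder trees: multiply the batched cofactors together pairwise, reduce the precomputed product P of all primes below the bound down the tree, then extract each cofactor's smooth part with a few modular squarings and a gcd. A minimal Python sketch follows; the toy bound of 100 and the sample cofactors are illustrative stand-ins, not msieve's actual parameters.

```python
from math import gcd, prod

def product_tree(leaves):
    """Bottom-up product tree over the batched cofactors."""
    tree = [list(leaves)]
    while len(tree[-1]) > 1:
        layer = tree[-1]
        tree.append([prod(layer[i:i + 2]) for i in range(0, len(layer), 2)])
    return tree

def remainder_tree(P, tree):
    """Push P mod (subtree product) down the tree; returns P mod each leaf."""
    rems = [P % tree[-1][0]]
    for layer in reversed(tree[:-1]):
        rems = [rems[i // 2] % x for i, x in enumerate(layer)]
    return rems

def smooth_part(n, z):
    """Largest divisor of n built from primes dividing P, given z = P mod n.
    Bernstein's trick: gcd(z^(2^e) mod n, n) with 2^e exceeding n."""
    if z == 0:
        return n            # n is entirely smooth
    return gcd(pow(z, 1 << n.bit_length(), n), n)

# Toy run: P = product of primes below 100 stands in for the product of all
# primes below 2^26 (roughly 2^26/ln 2 bits ~ 12 MB, by the PNT estimate).
P = prod(p for p in range(2, 100) if all(p % q for q in range(2, p)))
cofactors = [7 * 101, 101 * 103, 97 * 89]
zs = remainder_tree(P, product_tree(cofactors))
# Keep only cofactors with at least one factor below the bound; the rest
# would be aborted, mirroring the filtering described in the post.
survivors = [n for n, z in zip(cofactors, zs) if smooth_part(n, z) > 1]
```

Here 7·101 survives (smooth part 7), 97·89 survives (entirely smooth), and 101·103 is discarded. The prime product is computed once up front; only the cofactor tree is rebuilt per batch.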

Very cool!
Is this what you're referring to: [URL]http://cr.yp.to/papers/sf-20000807.pdf[/URL]? |

[QUOTE=jasonp;119809]I realized last week that the arbitrary-precision library needed by msieve's algebraic square root code could be put to other uses... batch factoring isolates the ~2.5% of relations whose cofactors actually need processing.[/QUOTE]
I considered this at one time for my lattice siever. However, I concluded that memory requirements were too great. Furthermore, my code spends so little time splitting the cofactors that the speed increase offered by batch factoring did not seem worth it. Optimizing something that takes less than 1% of the run-time is generally not productive. BTW, I split my cofactors with a 'tiny' QS implementation fine-tuned for 63-bit cofactors. Extending this to, say, 93 bits would be easy, and the increase in time would not be great. |

[QUOTE=R.D. Silverman;119896]I considered this at one time for my lattice siever. However, I concluded that memory requirements were too great...[/QUOTE]
When using three large primes, the number of cofactors to split goes up by about 50x, and the majority of those are tri-composites that would require ECM or QS. The tiny QS code that I have can do about 100-200 factorizations of 85-bit numbers per second, and this is 10-20x slower than my SQUFOF code (the SQUFOF is quite fast and the QS isn't that great). So if dealing with composite cofactors takes 1% of the time now, then it ends up taking 90% of the time when sieve values can have three large primes (i.e. 1000 x 1% = 1000%). This matches my experience with 3LP QS; dealing with so many larger cofactors is a giant pain. Perhaps the picture is different for very large sieving problems, but I'm somewhat doubtful about that.

bsquared, the algorithm is from page 18 of [url]http://cr.yp.to/talks/2004.07.07/slides.pdf[/url]. Bernstein wants to use it to factor entire sieve values instead of just the parts containing large primes, but I believe in incremental changes :) I'm still getting a handle on the memory use needed; you don't need to batch very many relations in order to get most of the asymptotic speedup (100k is plenty). Basically it looks like sieving speed can double if you can spare 100-150MB of memory. Dumping the batched relations to disk is also an option, and the dump files can be combined and moved to a high-memory machine for batch factoring if necessary. However, it's kind of wasteful of disk space when only 2% of what you dump will end up being useful. |
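For the SQUFOF side of that QS-vs-SQUFOF comparison, a compact rendering of Shanks's square-form factorization is below. This is a textbook version, not jasonp's code; the multiplier list and iteration bound are conventional heuristic choices, and the 64-bit cofactor-size limit of C implementations does not apply to Python's big integers.

```python
from math import gcd, isqrt

def squfof(N, multipliers=(1, 3, 5, 7, 11)):
    """Shanks's square-form factorization for odd composite N.
    Returns a nontrivial factor, or None if every multiplier fails."""
    s = isqrt(N)
    if s * s == N:
        return s
    for k in multipliers:
        D = k * N
        P0 = isqrt(D)
        Q0 = D - P0 * P0
        if Q0 == 0:
            continue                    # k*N is a perfect square
        P, Qprev, Q = P0, 1, Q0
        r, found = 0, False
        # forward cycle: walk the continued-fraction expansion of sqrt(D)
        for i in range(2, 4 * isqrt(2 * P0) + 16):
            b = (P0 + P) // Q
            Pnext = b * Q - P
            Qprev, Q = Q, Qprev + b * (P - Pnext)
            P = Pnext
            r = isqrt(Q)
            if i % 2 == 0 and r * r == Q:   # square form on an even step
                found = True
                break
        if not found:
            continue
        # reverse cycle: from the square form back to the symmetry point
        b = (P0 - P) // r
        P = b * r + P
        Qprev, Q = r, (D - P * P) // r
        while True:
            b = (P0 + P) // Q
            Pnext = b * Q - P
            if Pnext == P:              # symmetry point reached
                break
            Qprev, Q = Q, Qprev + b * (P - Pnext)
            P = Pnext
        g = gcd(N, P)
        if 1 < g < N:
            return g
    return None
```

For example, `squfof(11111)` returns 41 (11111 = 41 x 271). The appeal for cofactor splitting is that the iteration uses only values near 2*sqrt(D), so a C implementation stays in single-word arithmetic for cofactors up to roughly 62 bits.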

One part of the PhD I'm working on is optimizing ECM, P-1 and some other factoring algorithms (maybe P+1; Pollard rho is most likely useless) for NFS with more than two large primes on one side.
Peter's new idea for the P+/-1 stage 2 looks very attractive for the job, as the asymptotic complexity drops from O(d (log d)^2) to O(d log d), where d is the degree of the polynomial we evaluate, and perhaps more importantly the implied constant drops by rather a lot. E.g., for a c200 with B2=10^9, the old code took 4.0 seconds; the new code takes 1.0 second. I'm hopeful that a properly optimized implementation operating on, say, 96- or 128-bit moduli would be quite useful for cofactoring. However, at the moment even the GMP-based implementation in GMP-ECM isn't 100% complete, so the small-modulus version will take a while yet.

Alex |
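For context on what that stage 2 accelerates: P-1 finds a prime factor p of N when p-1 is B1-smooth apart from at most one extra prime up to B2. A naive stage 2 that is linear in the number of primes below B2 (the baseline the O(d log d) polynomial evaluation improves on) can be sketched as follows. The bounds and the 2027 x 3803 example are contrived for illustration, not taken from GMP-ECM.

```python
from math import gcd

def primes_up_to(n):
    """Sieve of Eratosthenes."""
    sieve = bytearray([1]) * (n + 1)
    sieve[:2] = b"\x00\x00"
    for i in range(2, int(n ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i::i] = bytearray(len(sieve[i * i::i]))
    return [i for i, v in enumerate(sieve) if v]

def p_minus_1(N, B1, B2):
    """Pollard P-1: stage 1 to bound B1, then a naive stage 2 that
    accumulates (a^q - 1) over the primes q in (B1, B2]."""
    a = 2
    for p in primes_up_to(B1):
        pe = p
        while pe * p <= B1:     # raise each prime to its largest power <= B1
            pe *= p
        a = pow(a, pe, N)
    g = gcd(a - 1, N)
    if 1 < g < N:
        return g                # stage 1 already found a factor
    acc = 1
    for q in primes_up_to(B2):  # stage 2: one modexp per prime in (B1, B2]
        if q > B1:
            acc = acc * (pow(a, q, N) - 1) % N
    g = gcd(acc, N)
    return g if 1 < g < N else None

# 2027 - 1 = 2 * 1013: stage 1 (B1=10) misses it, stage 2 (B2=1500) hits 1013.
print(p_minus_1(2027 * 3803, 10, 1500))  # -> 2027
```

The fast stage 2 replaces that per-prime loop by collecting the candidate residues as roots of a degree-d polynomial and evaluating it at many points at once, which is where the O(d (log d)^2) vs O(d log d) comparison comes from.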

