2010-04-14, 15:11   #6
gd_barnes
 
 
May 2007
Kansas; USA

2^5·317 Posts

Quote:
Originally Posted by Mini-Geek
I was playing around with group 1 (not reserving it), and I have a very useful (though maybe obvious) tip for anybody planning on doing this: when starting the sieve, make srsieve quiet by setting -m to the same value as -P. Otherwise, once srsieve passes -m's value (default 100K), it slows down tremendously trying to print every one of the millions of factors it finds.

Also, sr2sieve is impractical for very large k's (e.g. everything except group 1) and for large groups of k's (e.g. any of these groups unless you split them up). Even in group 1, you'd definitely want to spend the half hour generating the Legendre tables only once: run "sr2sieve -c -i sr_63.abcd" a single time and it will save them to sr2cache.bin, where all subsequent sr2sieve runs will automatically look. Even with just the first 1000 k's, it took 120 MB of RAM to run, so it might not be practical, depending on how it behaves with all 10000 k's and your RAM limits. I'm not sure how well sr2sieve -x (no Legendre lookup) would perform.
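To make the startup tip above concrete, a quiet sieve launch would look something like the line below. The n-range, sieve depth, and sequences are placeholders rather than recommendations, and the exact flag spellings and number formats should be double-checked against srsieve -h:

  srsieve -n 1000 -N 25000 -P 1e9 -m 1e9 "5*2^n-1" "7*2^n-1"

With -m raised to match -P, srsieve never crosses the threshold at which it starts echoing each factor to the screen, so the early flood of small factors costs nothing extra.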
Agreed on every front here. If doing 10,000 k's at once for k's of this size, you have to use srsieve, or sr2sieve without the Legendre symbols, i.e. with the -x switch. I believe sr2sieve also has an inherent limit on how much memory it can allocate, although I could be wrong; I seem to recall hitting a memalloc error even though the machine had far more memory free than the error implied was available. Regardless, even the first 10,000 k's would likely eat 5-10 GB of memory or more and take days or even weeks to build the symbol tables. For such a low n-range, it isn't worth it.
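For the -x route, the command shape I have in mind is just the normal file-based run with the lookup disabled; the p-range here is a placeholder:

  sr2sieve -x -i sr_63.abcd -p 1e9 -P 1e12

Skipping the Legendre tables keeps startup time and memory small; the trade-off is that sr2sieve can no longer discard the primes that the quadratic-residue tests would have eliminated, so it has to grind through more candidates per range.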

If anyone would care to post timings for srsieve vs. sr2sieve with the -x option, that would be helpful.
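For anyone gathering those numbers, the simplest apples-to-apples check is to push the same p-range through both programs and time it, e.g. with the shell's time builtin (file name, sequences, and ranges are placeholders again):

  time srsieve -n 1000 -N 25000 -p 1e9 -P 2e9 -m 2e9 "5*2^n-1" "7*2^n-1"
  time sr2sieve -x -i sr_63.abcd -p 1e9 -P 2e9

As I recall, both programs also print periodic status lines with a rate while running, which may be the easier figure to compare.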


Gary