Recommendations for cutting down input
For C197_149_70, there's enough over-sieving that I end up with a 45.5M matrix of density 68. So I should probably cut down the input file somewhat.
I'm starting with the trivial steps: removing duplicates, and removing lines containing a prime that occurs only once on its side. Obviously msieve does those stages itself - I'm just trying to get the 150GB file down to a size where I can keep a reasonable number of copies of it on my regrettably finite (500G) fast SSD. Has anyone already done the experiments on whether it's better to remove the largest primes that appear only twice, or just the largest primes, or to pick seven hundred million lines at random after duplicate removal?
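For what it's worth, the duplicate and singleton passes can be done in a streaming two-pass sketch like the one below. This is not msieve's filter; it assumes a CADO-NFS-style text format ("a,b:rat_primes:alg_primes" with comma-separated hex primes per side), and a real filter would iterate pass 2, since dropping a relation can create new singletons:

```python
# Hedged sketch: streaming duplicate + singleton removal for an NFS
# relations file. ASSUMES the CADO-NFS text format "a,b:rat:alg" with
# comma-separated hex primes per side; adjust parsing for msieve output.
from collections import Counter

def parse_sides(line):
    """Split one relation line into (a,b key, rational primes, algebraic primes)."""
    ab, rat, alg = line.rstrip("\n").split(":")
    return ab, rat.split(","), alg.split(",")

def filter_relations(in_path, out_path):
    seen = set()                       # a,b hashes, for duplicate removal
    counts = [Counter(), Counter()]    # per-side prime occurrence counts
    # Pass 1: count how often each prime occurs on each side (dups skipped).
    with open(in_path) as f:
        for line in f:
            if line.startswith("#") or not line.strip():
                continue
            ab, rat, alg = parse_sides(line)
            if ab in seen:
                continue
            seen.add(ab)
            counts[0].update(rat)
            counts[1].update(alg)
    seen.clear()
    kept = 0
    # Pass 2: drop duplicates and any relation containing a prime that
    # occurs exactly once on its side (it can never pair up in the matrix).
    # A production filter repeats this pass until no singletons remain.
    with open(in_path) as f, open(out_path, "w") as out:
        for line in f:
            if line.startswith("#") or not line.strip():
                continue
            ab, rat, alg = parse_sides(line)
            if ab in seen:
                continue
            seen.add(ab)
            if any(counts[0][p] == 1 for p in rat) or \
               any(counts[1][p] == 1 for p in alg):
                continue
            out.write(line)
            kept += 1
    return kept
```

Two passes keep memory down to the prime-count tables rather than the 150GB of relation text, which is the point of doing this before the serious filtering.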
I don't know sensible ways to go next, but:
[code]
Full set     45994203  68.58  (with td=200)
p<1E0000000  42380410  88.13  (with td=200)  796421275r / 529369930i
p<180000000  42370147  90.17  (with td=134)  646740456r / 440359514i
[/code]
I'm starting the linear algebra with the last set; it'll take a couple of months. If anyone has good ideas for other experiments to do with these relations, I'd be interested to hear them.
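The large-prime cutoff used for the last two sets can be sketched as a simple predicate over each relation's primes. Again this assumes the "a,b:rat:alg" hex format (the bounds in the table, like 180000000, are decimal):

```python
# Hedged sketch of a large-prime cutoff: keep a relation only if every
# prime on either side is below the bound. ASSUMES CADO-NFS-style lines
# "a,b:rat:alg" with hex primes; msieve's format differs.
def below_bound(line, bound):
    ab, rat, alg = line.rstrip("\n").split(":")
    primes = [int(p, 16) for p in rat.split(",") + alg.split(",") if p]
    return all(p < bound for p in primes)

def cut_down(in_path, out_path, bound):
    """Stream the relations file, keeping only relations below the bound."""
    kept = 0
    with open(in_path) as f, open(out_path, "w") as out:
        for line in f:
            if line.startswith("#") or not line.strip():
                continue
            if below_bound(line, bound):
                out.write(line)
                kept += 1
    return kept
```

Tightening the bound trades relations for a smaller input, which is why the matrix density creeps up (68.58 to 90.17) as the set shrinks.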