2021-05-19, 19:18   #31
VBCurtis ("Curtis", Riverside, CA)
Your comparison of 3.5 hrs per 5M rels shows that any oversieving past 190M is inefficient. This surprises me; I would have expected 195M or 200M to be the "sweet spot", where the sieve time spent would roughly equal the LA time saved.

But even 195M vs 190M saves only 2 hr 20 min of LA time at the cost of ~3.5 hr of sieve time.
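That break-even arithmetic can be sketched in a few lines of Python. The only inputs are the figures quoted above (~3.5 hr of sieving per 5M relations, ~2 hr 20 min of LA saved by 195M vs 190M); the function name is mine, not from any tool:

```python
# Marginal cost/benefit of oversieving, using the figures quoted above:
# sieving costs ~3.5 hours per extra 5M relations, and going from 190M
# to 195M relations saves about 2 hours 20 minutes of linear algebra.

SIEVE_HOURS_PER_5M = 3.5                  # quoted sieve cost per 5M rels
LA_HOURS_SAVED_190_TO_195 = 2 + 20 / 60   # ~2h20m LA saved by 195M vs 190M

def net_hours(extra_rels_m, la_hours_saved):
    """Net wall-clock change from oversieving by extra_rels_m million
    relations on a single machine; positive means time lost overall."""
    sieve_cost = SIEVE_HOURS_PER_5M * (extra_rels_m / 5)
    return sieve_cost - la_hours_saved

# 195M vs 190M: pay ~3.5h of sieving to save ~2.33h of LA -> net loss.
loss = net_hours(5, LA_HOURS_SAVED_190_TO_195)
print(f"net hours lost by sieving to 195M: {loss:.2f}")
```

On a single machine the sieve cost exceeds the LA saving by a bit over an hour, which is why 190M looks like the stopping point here.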

If these runs all used default density, I wonder whether the 96 or the 85 is the density msieve controls directly. I don't have msieve logs handy to look it up myself, so I'll edit this message later today to correct myself about which of the densities in your logs is the one msieve sets. If the default is 96 at this size, I doubt raising TD will do much; try TD 100, 104, or 110 on the 195M relation set to see if you can save another hour of LA time.

Your data also helps someone like Ed, who uses a large farm of ~20 machines to sieve but just one for LA. Your data from 190M to 220M can help him choose how much to sieve.
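The multi-machine case shifts the arithmetic: sieve wall-clock time divides across the farm while LA stays on one box. A hedged sketch of that effect (the 3.5 hr/5M figure is from this thread; the 20-machine count is just Ed's rough setup):

```python
# With N machines sieving in parallel but only one doing LA, the
# wall-clock cost of an extra 5M relations is divided by N, so the
# break-even point moves toward more oversieving.

SIEVE_HOURS_PER_5M = 3.5  # single-machine figure quoted in this thread

def wallclock_sieve_cost(extra_rels_m, n_machines):
    """Wall-clock hours to sieve extra_rels_m million extra relations
    when the work is spread evenly across n_machines."""
    return SIEVE_HOURS_PER_5M * (extra_rels_m / 5) / n_machines

# One machine: 5M extra relations costs 3.5 hours of wall-clock time.
print(wallclock_sieve_cost(5, 1))    # 3.5
# A farm of ~20 machines: the same 5M costs about 10.5 minutes of
# wall-clock time, so oversieving well past 190M is nearly free.
print(wallclock_sieve_cost(5, 20))   # 0.175
```

For a 20-machine farm, every hour of LA saved costs only minutes of wall-clock sieving, so sieving toward 210M or 220M can easily pay off there even though it loses time on a single machine.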

I think I'll make the params file target 190M; as you say, filtering more than once isn't a big deal if that turns out not to be enough for a C161 or C162. I'll add a note recommending 200M when sieving on multiple machines.

What CPU architecture was used for these tests? Older CPUs take relatively longer on the LA section, so 195M might prove to be 3.5 hr faster there because the entire LA portion takes 50% longer. Sieving is less sensitive to CPU architecture than LA. My personal experience is with the Sandy Bridge, Ivy Bridge, and Haswell generations of Xeon, all rather old by now.
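The 50% figure is what makes the numbers line up: scaling the ~2 hr 20 min LA saving by 1.5 gives exactly 3.5 hours, matching the sieve cost, so on such a CPU 195M would be the break-even point. A quick check (the 1.5x slowdown is the hypothetical from the paragraph above, not a measured value):

```python
# On an older CPU where LA takes 50% longer, the LA time saved by
# oversieving scales by the same 1.5x factor.
LA_HOURS_SAVED = 2 + 20 / 60   # ~2h20m saved going 190M -> 195M
SLOWDOWN = 1.5                 # hypothetical: LA takes 50% longer

scaled_saving = LA_HOURS_SAVED * SLOWDOWN
print(round(scaled_saving, 2))  # 3.5, matching the ~3.5h sieve cost
```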