2021-05-17, 12:14   #22
VBCurtis ("Curtis", Riverside, CA; member since Feb 2005)

Having more available relations allows filtering to work harder, generating a smaller matrix. Setting target density higher explicitly tells filtering to work harder, but even holding target density fixed you'll get a smaller matrix from more relations.

The catch is that it usually takes more time to gather those extra relations than one saves in the linear-algebra (LA) phase. There's a productive amount of oversieving: when one is right at the cusp of building a matrix, a small number of extra relations has a fairly strong effect on matrix size, but diminishing returns set in rather quickly.

My previous GGNFS/msieve jobs around C165 have produced matrices around 9M in size, so this job was rather strongly oversieved. We should cut rels_wanted to 210M for this file and see what matrix comes out.

If you are interested in seeing the effect of the extra relations, zcat all your 220M relations out to a single file, and run msieve's filtering on that file to see what size matrix comes out. Then restrict the number of relations msieve uses (via a filtering flag; see msieve's -h option list) to 215M, 210M, and 205M, and let us know what size matrices pop out. I suggest a target density (TD) of 100 or so for msieve, but you might enjoy trying 100/110/120 on the full dataset to see how target_density affects matrix size.
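For reference, here's roughly what that experiment looks like on the command line. This is a sketch: the filenames are illustrative (spairs/ for the CADO relation files, msieve.ini holding the number, msieve.fb the polynomial), and I'm relying on the target_density and filter_maxrels options listed under -nc1 in msieve's -h output.

[CODE]
# Merge all gzipped relation files into one flat file (paths illustrative)
zcat spairs/*.gz > msieve.dat

# Filtering only (-nc1) on the full ~220M relations, target density 100
msieve -i msieve.ini -nf msieve.fb -s msieve.dat -nc1 "target_density=100"

# Same run, capped at the first 215M relations in the data file;
# repeat with 210000000 and 205000000 to fill out the comparison
msieve -i msieve.ini -nf msieve.fb -s msieve.dat \
       -nc1 "target_density=100,filter_maxrels=215000000"
[/CODE]

The matrix dimensions show up in msieve.log near the end of each filtering run.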

tasks.qmin can be changed down to 10M. I target a qmax-to-qmin ratio of 7 to 8; your final Q of 66.5M suggests the chosen qmin of 17M was a little high. Smaller Q sieves faster but yields more duplicate relations, so changing tasks.qmin to 10M should make jobs run a little faster, because Q=10-17M will produce more relations (and faster) than Q=59-66M. Since this job ran faster than expected (based on your ~155-digit experience) and final Q is smaller than I expected, you may have found a lucky poly for this job.
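Putting the two suggested changes together, the edits to the CADO-NFS parameter file would look something like this (a sketch; I'm assuming rels_wanted sits under tasks.sieve as in recent CADO-NFS versions, with everything else left as you had it):

[CODE]
# CADO-NFS params snippet (illustrative; other settings unchanged)
tasks.qmin = 10000000                 # was 17000000; aim for qmax/qmin around 7-8
tasks.sieve.rels_wanted = 210000000   # was ~220M; trims the unproductive oversieving
[/CODE]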

Thanks for your data report!