2020-08-04, 02:19   #110

Originally Posted by VBCurtis
That's odd, and means I likely don't understand what lambda does. I thought it was a finer control on mfb, specifically that mfb0 would be lambda0 * lpb0. But 31 * 1.88 is 58.28, meaning mfb0 of 58 is a tighter restriction and that lambda0 isn't doing anything in this list of settings.
Decided to finally try and figure out what's actually going on...
If I'm understanding the documentation correctly, lambda applies to the *approximation* of the size of the cofactor, based on the estimates of the logs of the norm and the prime factors found in sieving. The cofactor isn't actually calculated unless its approximated log (base 2) is smaller than lambda*lpb; if the cofactor then turns out to be larger than 2^mfb it's thrown out.
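If that reading is right, the two-stage test can be sketched like this (a minimal Python sketch of my understanding; the function names and exact inequalities are my assumptions, not CADO-NFS's actual code):

```python
def worth_cofactoring(approx_log2_cofactor, lam, lpb):
    # Sieve-time filter: the cofactor itself is never computed unless
    # its *estimated* log2 (from the crude log approximations used in
    # sieving) falls below lambda * lpb.
    return approx_log2_cofactor < lam * lpb

def passes_mfb(cofactor, mfb):
    # Exact test, applied only after the cofactor is actually computed:
    # anything larger than 2^mfb is thrown out.
    return cofactor <= 2 ** mfb
```

With lpb0 = 31 and lambda0 = 1.88 (the settings in the quote), the sieve-time threshold is 1.88 * 31 = 58.28, so a cofactor whose estimated size is 59 bits never even reaches the exact mfb test.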

The parameter documentation tells us that:
    # In case lambda0 or lambda1 are not given (or have value 0),
    # a default value is computed from mfb0 or mfb1. Since the sieving
    # parameters are optimized without lambda0/lambda1, it is better to
    # leave them unspecified.
A further trawl through the documentation reveals that if lambda is left unspecified, a default of mfb/lpb + 0.1 is used. The "fudge factor" of 0.1 is there to account for errors in the log approximation used in sieving, so that cofactors smaller than 2^mfb don't get thrown out accidentally. The fact that the correction is as large as 0.1 (roughly 3 bits when lpb is around 31) suggests the log estimate is rather crude, but I guess that's what's needed to make the sieving fast.
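Plugging in the lpb0 = 31, mfb0 = 58 pair from the quoted settings (my arithmetic, not anything from the documentation):

```python
lpb0, mfb0 = 31, 58                    # values from the quoted settings
default_lambda0 = mfb0 / lpb0 + 0.1    # documented default when lambda0 is unset
# default_lambda0 ~ 1.971, so the default sieve threshold
# default_lambda0 * lpb0 = mfb0 + 0.1 * lpb0 ~ 61.1 bits -- about
# 3 bits of slack above mfb0, absorbing log-approximation error.
explicit_lambda0 = 1.88                # the value actually set in the job
# explicit threshold: 1.88 * 31 = 58.28, barely above mfb0 = 58,
# so almost none of the default slack is left.
```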

Having lambda*lpb < mfb, as was the case for some of the jobs in this thread and for your parameter files for small numbers, means that we throw out potential relations below our mfb bound. On the other hand, we don't waste much time computing cofactors that turn out to be bigger than 2^mfb. The job ends up looking like one with a slightly smaller mfb, plus a few relations that break that bound. I trust that this actually did turn out to be the optimal approach for small numbers, but it seems this might no longer be the case at ~c180.
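To put numbers on the lambda*lpb < mfb case (these are hypothetical small-number values, chosen only for illustration):

```python
lpb0, mfb0 = 28, 56         # hypothetical small-number settings
lambda0 = 1.9               # explicit lambda with lambda0 * lpb0 < mfb0
threshold = lambda0 * lpb0  # 53.2-bit sieve-time cutoff
# Cofactors whose estimated log2 falls in (53.2, 56] are discarded at
# sieve time even though they would pass the exact mfb test, so the job
# behaves like one with mfb0 ~ 53 plus a few relations above that.
unused_range = mfb0 - threshold  # ~2.8 bits of the mfb window never reached
```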
Perhaps the proportion of CPU-time spent *calculating* the cofactors becomes smaller as N gets bigger, so the wasted time spent calculating cofactors larger than 2^mfb becomes less of an issue? Would be helpful if someone with a better understanding of NFS could weigh in here.

In summary, I don't think lambda should be thought of as a fine-tuning of mfb; it's more of a quirk made necessary by the way sieving works.

I have a c180 (1186...) in polynomial selection. I'll try mfb=59; tweaking the default lambdas can wait.

Last fiddled with by charybdis on 2020-08-04 at 02:20