2019-12-02, 23:22   #7
R.D. Silverman
Quote:
Originally Posted by VBCurtis
For those curious, but not curious enough to git:

Poly select used some different parameters, notably incr of 110880 (similar to tests Gimarel has run with msieve) and admax (that is, c6 max) of 2e12.
CADO-specific params: P 20 million, nq 1296. I bet they'd get still better performance with nq of 7776 and admax around 3e11, for the same search time.
sizeopt-effort was set to 20; I've not seen this set in any previous params file.

sieve params:
factor base bounds of 1.8e9 and 2.1e9.
LP bounds of 36/37, mfb 72 and 111 (exactly 2x and 3x LP, so 3LP on one side)
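Going back to the poly select numbers for a moment, here is my reading of incr/admax, sketched out (the arithmetic below is mine, not from the post):

Code:
# My reading of incr/admax: polynomial selection walks the leading
# coefficient (c6, this being degree 6) over multiples of incr, up
# to admax.  incr = 110880 = 2^5 * 3^2 * 5 * 7 * 11, so every
# candidate c6 is divisible by many small primes, which helps the
# polynomial's root properties.
incr, admax = 110880, int(2e12)
num_candidates = admax // incr
print(num_candidates)  # about 1.8e7 leading coefficients to try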
As for mfb: is that a bound on cofactors, i.e., try to factor a cofactor only if it is below these bounds?
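If that reading is right, the gate after sieving would be roughly the sketch below (my interpretation only; the function is mine, not CADO's):

Code:
# Sketch of my reading of mfb (in bits): after the sieve divides out
# the factor-base primes, keep the remaining cofactor only if it is
# small enough to split into large primes under the LP bound.
def cofactor_survives(cofactor: int, mfb: int) -> bool:
    """mfb = max cofactor size in bits (72 or 111 above)."""
    return cofactor.bit_length() <= mfb

# With lpb 36 / mfb 72 the rational cofactor is at most two large
# primes; with lpb 37 / mfb 111 the algebraic side allows three.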

Quote:
lambda values specified at 2.0 and 3.0, respectively
Sieve thresholds? I.e., 2x the log of the largest factor-base element and 3x it, respectively? I presume the rational side gets the 2.0 and the algebraic side the 3.0?
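Sketching that reading of lambda (the names are mine; note that lambda*lpb reproduces the quoted mfb values, 2.0*36 = 72 and 3.0*37 = 111):

Code:
# Sketch of my reading of lambda: a sieve location is reported when
# the part of log|norm| not accounted for by sieved primes is below
# lambda * lpb bits.
def report(lognorm_bits: float, sieved_bits: float,
           lam: float, lpb: int) -> bool:
    return lognorm_bits - sieved_bits <= lam * lpb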

Quote:
Q from 800 million up.
CADO-specific: ncurves0 of *one hundred*; the typical previous value was 25 or so.
ncurves1 of 35; the typical previous value was 15 or so.
What were B1/B2? I presume 100 curves were tried at one B1/B2 and then 35 more at a higher B1/B2?
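Either way, the loop presumably has this shape (a sketch only; ecm_one_curve is a hypothetical stand-in, and B1/B2 are exactly the unknowns I am asking about):

Code:
from typing import Optional

def ecm_one_curve(n: int, B1: int, B2: int) -> Optional[int]:
    """Hypothetical stand-in for one ECM curve with stage-1 bound B1
    and stage-2 bound B2; returns a nontrivial factor or None."""
    raise NotImplementedError

def try_cofactor(n: int, ncurves: int, B1: int, B2: int) -> Optional[int]:
    # Whether 100 and 35 are per-side curve counts or two sequential
    # passes is exactly the question above.
    for _ in range(ncurves):
        f = ecm_one_curve(n, B1, B2)
        if f is not None:
            return f
    return None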

Quote:
tasks.A = 32; this is a new setting not in my ~Sep'19 git version of CADO. This "A" setting, combined with much higher ncurves, appears to be where the new speed is found.
It would be nice to know the meaning.
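My guess, and it is only a guess: A is log2 of the sieve area per special-q, generalizing the old I parameter (whose area is 2^(2I-1)), so A = 32 would sit halfway between I = 16 and I = 17. In numbers:

Code:
# Guess: if A = log2(sieve area per special-q), tasks.A = 32 means
# 2**32 sieve locations.  The old I parameter gave area 2**(2*I - 1),
# so I = 16 -> 2**31 and I = 17 -> 2**33; A = 32 is the half step.
area_A = 2 ** 32
area_I16 = 2 ** (2 * 16 - 1)
print(area_A // area_I16)  # 2: twice the I = 16 sieve area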

Also, what was the sieve area per Q? Was it constant, or larger for smaller Q? Did they consider trying Q smaller than 800M? How many Q in total? What was the average yield per Q?