2020-10-18, 23:35  #1 
Apr 2020
345_{8} Posts 
NFS cofactorization with 3LP
When sieving with 2 large primes, one tends to find that the optimal value of mfb is slightly less than 2*lpb. Presumably this is because a composite cofactor close to 2^(2*lpb) is unlikely to split into two primes that are both smaller than 2^lpb. For example, for lpb=32 the optimal value of mfb with CADO-NFS appears to be 60.
Similarly, when sieving with 3 large primes, the optimal value of mfb is slightly less than 3*lpb. However, what happens if we get a cofactor that is just below 2^(2*lpb)? Such a cofactor is still unlikely to contribute to a relation: as in the 2LP case, it is probably not the product of two primes both smaller than 2^lpb, and it is also too small to be the product of three large primes. So again it should make sense to set a double large prime bound smaller than 2^(2*lpb). Neither GGNFS nor CADO appears to have an option to set a double large prime bound separately from mfb when using 3LP. Does anyone here know what is actually done? Is a value computed automatically, perhaps from the mfb given? 
2020-10-18, 23:56  #2 
"Curtis"
Feb 2005
Riverside, CA
1001001111100_{2} Posts 
I've wondered about this also, though my curiosity was about cofactors slightly bigger than 2^(2*lpb) but still too small to possibly split into 3 large primes.

2020-10-19, 00:05  #3  
Apr 2020
229 Posts 
Code:
nd = mpz_get_d (n);
B = (double) scs.lim;
kB = B * B;
for (klpb = lpb; klpb < s; klpb += lpb, kB *= B)
  {
    /* invariant: klpb = k * lpb, kB = B^(k+1) */
    if (nd < kB) /* L^k < n < B^(k+1) */
      return 0;
  }

2020-10-19, 00:33  #4 
Apr 2020
229 Posts 
Should have realised that I could just search in a relations file to find an answer to my question.
Here's one from a job with lim1=175M, lpb1=32, mfb1=90:
Code:
178218979181,5545:2,b,fb,445,3b21,3c47b,166eeb,b38e5f,c2091937:3,3,3,5,71,89,265,9e3f5,c5a6b,451d0b,1c9c589,45458a5,f085ca3b,fb78b42f
That's two large 32-bit prime factors on the algebraic side, so the cofactor wasn't far off 2^64. It sure looks like CADO is just using a bound of 2^(2*lpb). It feels like there could be a decent speedup from setting the 2LP bound separately in this situation, and I'd be surprised if no one had thought of this before, so is there some reason why this wouldn't actually work in practice? 
2020-10-19, 02:36  #5 
"Curtis"
Feb 2005
Riverside, CA
2^{2}×7×13^{2} Posts 
Well, standard practice for a 2LP side is to use an mfb of 2*lpb0 or 2*lpb1. We're the weird ones using very nonstandard settings for our params files. Take a look at the factory params files for, say, C130-C145 to see what I mean.
So, it would not surprise me if nobody has thought to do what you're suggesting. Edit: I don't speak code well enough to answer my own question: could you just modify that formula above to exclude, say, 64-bit cofactors when lpb = 32? Just that one change should provide a fair bit of speedup. Last fiddled with by VBCurtis on 2020-10-19 at 02:38 
2020-10-19, 18:59  #6  
Apr 2020
229 Posts 
Of course, I wouldn't like having to recompile every time I change the bounds, so if this experiment isn't a complete failure I'll get in contact with the authors. Last fiddled with by charybdis on 2020-10-19 at 19:01 

2020-11-03, 12:30  #7 
Apr 2020
345_{8} Posts 
Bounds of 32/60/90 on the algebraic side are quite clearly worse than 32/64/90, by something like 10-15% depending on whether or not you take matrix size into account.
I'll try 32/62/90 on the next job. 
2020-11-19, 20:50  #8 
Apr 2020
229_{10} Posts 
32/62/90 performs fairly similarly to 32/64/90; it's maybe a couple of percent slower, but it's hard to tell for sure. There is a slight reduction in relations needed, but the lower yield means more sieving has to be done at higher Q values, where the sieving is slower. There isn't any noticeable improvement in matrix size.
In hindsight, I shouldn't have been expecting the algebraic side to behave too similarly to the rational side: the larger norms mean that more of the relations have two large primes close to the bound on the algebraic side than on the rational side. I could try 32/63/90, but it's hard to imagine it'll be more than a percent or two faster than 32/64/90, so I'll just stick with the defaults from now on. 
2020-12-07, 12:45  #9 
Tribal Bullet
Oct 2004
110111010000_{2} Posts 
Try asking the developers on the cado-nfs mailing list; they are very responsive, and I know they have thought pretty deeply about cofactorization strategies.
For extra credit: given a rational and algebraic pair of numbers to factorize, determine as fast as possible if it is worth the effort to try factorizing both. 
2020-12-07, 15:39  #10  
Apr 2020
229 Posts 
One useful thing that we do seem to have discovered is that it can be optimal to set the mfb bounds asymmetrically (algebraic higher than rational) even when both lpbs are the same; Curtis has run some tests in the c100-c120 range that show this to be the case. Last fiddled with by charybdis on 2020-12-07 at 15:40 

2020-12-08, 15:11  #11 
Apr 2020
229 Posts 
I had another idea: it turns out you can get a tiny speedup by ignoring cofactors right at the bottom of the 3LP range, say the bottom 2 bits of it. It looks like the improvement won't be any more than 1%, so I'm not sure it's worth bothering the developers about this.
