Perhaps the example below helps clarify my observations/questions.
I've run it with varying small-prime bounds (but always the same tolerance term, 2*int(log(pmax))) and varying factor base sizes. In my program the ideal factor base size seems to be a little above 2000, giving a number of factorizations very close to, and sometimes slightly smaller than, the size of the factor base.
Each polynomial gives rise to 10 plus or minus 3 trials, and I believe the success rate is something like 85-90% (I have no idea how many candidates I'm missing!). What strikes me is the much larger percentage of matches in the first case, and this is a recurring observation.
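For reference, here is a minimal sketch of the kind of bookkeeping I mean, under my own assumptions about how the counts are tallied (the function names and the single-large-prime test are mine, not necessarily what anyone else's code does):

```python
import math

def tolerance(pmax):
    # the tolerance term from above: 2*int(log(pmax)), natural log
    return 2 * int(math.log(pmax))

def classify_report(x_value, factor_base, smoothness_bound, partials):
    """Trial-divide a sieve survivor and classify it: a full smooth,
    a single-large-prime relation, a match (two partials sharing the
    same large prime, which combine into one relation), or a reject.
    A sketch under my own assumptions, not anyone's exact code."""
    n = abs(x_value)
    for p in factor_base:
        while n % p == 0:
            n //= p
    if n == 1:
        return "smooth"
    # cofactor with no prime factor <= B and below B^2 must be prime
    if n < smoothness_bound * smoothness_bound:
        if n in partials:
            return "match"
        partials.add(n)
        return "large_prime"
    return "reject"
```

With a toy factor base [2, 3, 5, 7] and bound 10, the value 120 classifies as "smooth", 26 leaves the large prime 13, and 39 then matches it.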
N=27656492312230575123821463511979706363619541782727
smoothness bound: 45000
factor base: about 2400 primes
smallest prime sieved on: 37
multiplier 3:
1793 polynomials
1311 smooths
large prime factorizations: 13132
matches 1774
number of relations: 785
multiplier 1:
1575 polynomials
1216 smooths
large prime factorizations: 14943
matches: 1277
number of relations: 170
This is really a different topic, but I don't get duplicate polynomials. I generate the a = q*q coefficients by picking a number at random in a small neighborhood of the ideal size and looking for the next larger prime q
with q & 3 == 3 (i.e. q ≡ 3 mod 4).
I keep a list of the q's and check for duplicates (I've tried hashing, too). For some reason this approach seems to be faster than stepping outwards from the ideal size! I'm sure there are better ways, but I haven't seen any discussion of this in the literature.
