Quote:
Originally Posted by akruppa
Is resieving worthwhile for very small primes? It seems to me that those would cause a large number of sieve locations to be investigated while having a low probability of hitting a location marked as smooth (a small prime dividing an integer does not increase the probability of that integer being smooth by much, unlike larger primes). Maybe only resieve with primes between some lower and upper bound?

When the number to be factored is very small (say under 100 bits), the trial factoring of sieve values takes much longer than the sieving. Resieving is a win as long as the number of sieve values per sieve block that need trial factoring is large. You can skip the smaller factor base primes entirely during sieving and manually divide by all of them during trial factoring (after choosing a more generous trial factoring cutoff log value). Bob's QS implementation sieves with the powers of these small primes; mine doesn't sieve with them at all. Each approach has its advantages.
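Here's a minimal Python sketch of the skip-small-primes idea. All names are invented for illustration, and a real siever works with additive log approximations over polynomial values rather than dividing the values directly; this just shows the shape of the technique: sieve only with the larger primes, loosen the cutoff by the bits the small primes could have contributed, then divide survivors by the whole factor base.

```python
import math

# Toy sketch (invented names): sieve a block of candidate values while
# skipping factor-base primes below small_bound, then trial-factor the
# survivors with the full factor base.
def sieve_block(values, factor_base, small_bound):
    logs = [0.0] * len(values)
    # Sieve only with factor-base primes at or above small_bound.
    for p in factor_base:
        if p < small_bound:
            continue
        for i, v in enumerate(values):
            if v % p == 0:
                logs[i] += math.log2(p)
    # Generous cutoff: the skipped small primes can contribute at most
    # roughly this many bits per value, plus some fuzz for prime powers.
    slack = sum(math.log2(p) for p in factor_base if p < small_bound) + 0.5
    smooth = []
    for i, v in enumerate(values):
        if logs[i] < math.log2(v) - slack:
            continue  # too few sieve hits; never trial factored
        # Trial factor with *all* factor-base primes, including the
        # small ones that were never sieved with.
        n = v
        for p in factor_base:
            while n % p == 0:
                n //= p
        if n == 1:
            smooth.append(v)
    return smooth
```

With factor base {2, 3, 5, 7, 11, 13} and small_bound 5, a value like 143 = 11·13 still survives the loosened cutoff, while 595 = 5·7·17 is rejected before any trial division happens.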
When the input becomes larger and the number of sieve values needing trial factoring per sieve block goes down, I've found it more efficient to skip resieving entirely. Instead you can first fill a set of hashtables with the location, log value and prime of every sieve update, then use the hashtables both to compute the log of each sieve value and to speed up trial factoring. This does mean that your sieve blocks have to be comparatively tiny, and that your siever will use a lot more memory than it strictly needs, but it's a good compromise between sieving fast and trial factoring fast.
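A toy sketch of that hashtable idea, again with invented names rather than anything from an actual implementation: every sieve update is recorded as (location → prime) in a table, so the table does double duty. Summing the logs of a location's primes tells you whether it clears the cutoff, and the stored primes make the subsequent trial factoring nearly free, since you only divide by primes already known to hit that location.

```python
import math
from collections import defaultdict

# Toy sketch (invented names): record every sieve update in a table
# keyed by location, then reuse the table for both the smoothness
# test and the trial factoring.
def sieve_with_hashtable(values, factor_base, threshold_bits):
    hits = defaultdict(list)  # location -> primes hitting that location
    for p in factor_base:
        for i, v in enumerate(values):
            if v % p == 0:
                hits[i].append(p)
    smooth = []
    for i, primes in hits.items():
        # The summed logs play the role of the sieve's log array.
        if sum(math.log2(p) for p in primes) < threshold_bits:
            continue
        # Trial factor using only the primes known to divide values[i].
        n = values[i]
        for p in primes:
            while n % p == 0:
                n //= p
        if n == 1:
            smooth.append(values[i])
    return smooth
```

The memory cost is visible even in the sketch: the table stores one entry per sieve update, which is why the sieve blocks have to stay small.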
Once you try factoring ~60 digit composites you'll start seeing runtime differences from these techniques. Below about 50 digits everything will finish essentially instantly.
jasonp