The following is a bug in calc_exp

Code:

if (len >= 50) {
    giant x;
    /* Recurse on the lower half of the range */
    calc_exp (pm1data, k, b, n, c, g, B1, p, lower, lower + (len >> 1));
    x = allocgiant (len);
    /* Recurse on the upper half, then multiply the halves together */
    calc_exp (pm1data, k, b, n, c, x, B1, p, lower + (len >> 1), upper);
    mulg (x, g);
    free (x);
    return;
}
itog (2*n, g);    /* <-- the buggy line: runs in every leaf, not just the first */
for ( ; *p <= B1 && (unsigned long) g->sign < len; *p = sieve (pm1data->sieve_info)) {
    uint64_t val, max;
    val = *p;
    max = B1 / *p;
    while (val <= max) val *= *p;
    ullmulg (val, g);
}

g is initialized to 2*n. The problem is that this is done in every leaf of the recursion, instead of just the first one, so we end up with B/50 (or more) extra factors of 2*n in the product. This just makes stage 0 longer than needed (probably on the order of 5% or so).
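The effect can be seen in a toy model of the recursion (this is not the real gwnum code: `calc_exp_buggy` is a hypothetical stand-in that uses a 64-bit integer in place of a giant, a split threshold of 2 instead of 50, and a fixed prime array instead of the sieve):

```c
#include <assert.h>
#include <stdint.h>

/* Toy model of the buggy calc_exp recursion: primes[lower..upper) are
 * multiplied into *g, but the base case seeds the product with 2*n
 * every time, so 2*n is multiplied in once per leaf of the recursion. */
static void calc_exp_buggy (uint64_t n, const uint64_t *primes,
                            int lower, int upper, uint64_t *g)
{
    int len = upper - lower;
    if (len >= 2) {                  /* split threshold (50 in the real code) */
        uint64_t x;
        calc_exp_buggy (n, primes, lower, lower + (len >> 1), g);
        calc_exp_buggy (n, primes, lower + (len >> 1), upper, &x);
        *g *= x;                     /* mulg (x, g) */
        return;
    }
    *g = 2 * n;                      /* itog (2*n, g) -- runs in EVERY leaf */
    for (int i = lower; i < upper; i++)
        *g *= primes[i];             /* ullmulg (val, g) */
}
```

With n = 1 and the four primes {3, 5, 7, 11}, the recursion has four leaves, so the result is (2n)^4 * 1155 = 18480 instead of the intended 2n * 1155 = 2310.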

What is needed:

Code:

if (lower == 0)
    itog (2*n, g);
else
    setone (g);
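
Applied to the same kind of toy model (again a hypothetical sketch, `calc_exp_fixed`, with 64-bit integers standing in for giants and a split threshold of 2), the `lower == 0` check makes only the very first leaf seed the product with 2*n:

```c
#include <assert.h>
#include <stdint.h>

/* Toy model of the fixed recursion: only the leaf that starts at
 * index 0 seeds the product with 2*n; every other leaf starts at 1. */
static void calc_exp_fixed (uint64_t n, const uint64_t *primes,
                            int lower, int upper, uint64_t *g)
{
    int len = upper - lower;
    if (len >= 2) {
        uint64_t x;
        calc_exp_fixed (n, primes, lower, lower + (len >> 1), g);
        calc_exp_fixed (n, primes, lower + (len >> 1), upper, &x);
        *g *= x;                     /* mulg (x, g) */
        return;
    }
    if (lower == 0)
        *g = 2 * n;                  /* itog (2*n, g): first leaf only */
    else
        *g = 1;                      /* setone (g) */
    for (int i = lower; i < upper; i++)
        *g *= primes[i];
}
```

With n = 1 and primes {3, 5, 7, 11} this now yields exactly 2n * 1155 = 2310, with a single factor of 2*n regardless of how many leaves the recursion has.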

**Now, the feature request:**
Make stage 0 bigger. Right now it is a paltry 1M:

Code:

stage_0_limit = (pm1data.B > 1000000) ? 1000000 : pm1data.B;

which is probably fine for regular GIMPS work, but at the mid-to-low end we routinely deal with much bigger B1 values.

So instead, make it as big as possible (100M is not too bad, since we're looking at < 20MB of memory use). If calc_exp takes a long time to do the actual calculation, make it as big as can reasonably be done by a modern processor in, say, 60 seconds.
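
The < 20MB figure checks out: the stage 0 exponent is (roughly) the product of all prime powers up to the stage 0 limit, whose base-2 log is the Chebyshev function psi(B1)/ln 2, asymptotically about B1/ln 2 ~ 1.44*B1 bits. A quick back-of-the-envelope estimator (`stage0_exp_megabytes` is a name made up here, not anything in the source):

```c
#include <math.h>

/* Rough size of the stage 0 exponent: log2 of the product of all
 * prime powers <= B1 is psi(B1)/ln 2 ~= B1/ln 2 by the prime number
 * theorem, i.e. about 1.4427 bits per unit of B1. */
static double stage0_exp_megabytes (double B1)
{
    double bits = B1 / log (2.0);
    return bits / 8.0 / 1048576.0;   /* bits -> bytes -> MiB */
}
```

For B1 = 100M this comes out to roughly 17.2 MiB, comfortably under the 20MB mentioned above.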