Better error correction
With a (possibly restarted) computation the task is to compute base^(2^n) mod N, so here base can be "big". As usual, the standard setup: let
x[t]=base^(2^(t*L)) mod N
y[t]=x[0]*x[1]*...*x[t] mod N
then:
we compute the y[] sequence with y[t]=y[t-1]*x[t] mod N, and
the error check y[t]=base*y[t-1]^(2^L) mod N should hold.
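The recursion and the check can be sketched in Python (a toy illustration: the values of N, base, L and the block count T are made up here, and a real implementation would of course use big-number FFT multiplication):

```python
# Toy sketch of the checked y[] sequence (N, base, L, T are example values).
N, base, L, T = 2**89 - 1, 3, 8, 5

x = base % N          # x[0] = base^(2^0) = base
y = x                 # y[0] = x[0]
for t in range(1, T + 1):
    y_prev = y                          # y[t-1]
    x = pow(x, 1 << L, N)               # x[t] = x[t-1]^(2^L) = base^(2^(t*L))
    y = (y * x) % N                     # y[t] = y[t-1] * x[t]
    # strong check: y[t] == base * y[t-1]^(2^L)  (mod N)
    assert y == (base * pow(y_prev, 1 << L, N)) % N
```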
If we detect an error at t=m, but have earlier residues stored in memory as triples (x[t],y[t-1],y[t]) for 0<t<m, then with a binary search we can find the largest t that still passes the error check [it can be t=0 if no positive t passes], and continue the work from the stored x[u],y[u] that passes the check.
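The binary search over stored triples could look like this sketch (hypothetical helper names, not from the post; it assumes the stored list is "good prefix, then corrupted suffix", i.e. everything saved before the first error still passes the check):

```python
def passes_check(xt, y_prev, yt, base, L, N):
    # Verify a stored triple (x[t], y[t-1], y[t]) with the strong check:
    # y[t] == y[t-1]*x[t]  and  y[t] == base * y[t-1]^(2^L)   (mod N)
    return yt == (y_prev * xt) % N and yt == (base * pow(y_prev, 1 << L, N)) % N

def last_good(triples, base, L, N):
    # triples[i] = (x[t_i], y[t_i - 1], y[t_i]) at increasing t_i.
    # Returns the index of the last triple passing the check, or -1 if none
    # (then we restart from t=0, which is error-checked by definition).
    lo, hi = -1, len(triples) - 1
    while lo < hi:
        mid = (lo + hi + 1) // 2
        if passes_check(*triples[mid], base, L, N):
            lo = mid          # this triple is good, look to the right
        else:
            hi = mid - 1      # corrupted, look to the left
    return lo
```

Each call to passes_check costs about L multiplications (the 2^L-th power), which is where the log2(8)*L cost estimate below comes from.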
Of course with no failure this gives no speedup, only a quicker recovery when we do have error(s), because we can skip re-doing the iterations where we are almost absolutely sure the work was correct.
Assuming there was a single error, we'll save on average (at most) half of the iterations.
It would be worthwhile to use, say, 8 (or more) such stored (x[t],y[t-1],y[t]) triples if the CPU/GPU memory allows it [equally spaced at t=m/8*i for i=1..8; note that for t=0 we always have an error-checked residue], and probably fewer if we are still in the P-1 first stage's iterations, since that is also a memory-hungry job.
Notice that this is really worthwhile, because one (strong) error check takes L multiplications, so a full binary search costs only an extra log2(8)*L=3*L multiplications [for 8 stored triplets], while we can save up to L*m iterations. Storing too many triplets is pointless even if memory allows it, since copying residues also takes time.
ps. There is no need to store the intermediate x[t], because at a strongly checked point: x[t]=y[t]/y[t-1] mod N.
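The division here is a modular inverse; a minimal Python sketch (three-argument pow with exponent -1 needs Python 3.8+, and it assumes gcd(y[t-1], N) = 1, i.e. the residue is a unit mod N):

```python
def recover_x(y_prev, y_t, N):
    # x[t] = y[t] / y[t-1] mod N, valid when gcd(y[t-1], N) == 1
    return (y_t * pow(y_prev, -1, N)) % N
```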
Last fiddled with by R. Gerbicz on 2020-11-01 at 19:41
Reason: trivial typo
