Quote:
Originally Posted by Youjaes
If the intention is to test the hardware and FFT, wouldn't it be simpler and more efficient to generate p pseudorandom bits, X, and its conjugate Y = ((2^p - 1) Xor X) = (2^p - 1) - X, i.e. X + Y = X Or Y = 2^p - 1, since (X*X - 2) mod (2^p - 1) = (Y*Y - 2) mod (2^p - 1), and compute a 128-bit Xor checksum on the two values (Xor 128-bit chunks of data at a time together)? If they don't match, do the two multiplications again; if the error results are the same, transmit the RNG seed and the two 128-bit checksums, because you just found a confirmed error in the math, so it's off to see the wizard. If this test were performed every 1,000 or 10,000 multiplications during the LL test, then the health of the computing device could be verified immediately, independently of the algorithm, and you wouldn't have to wait until you are done and then not know until someone runs a second LL to find out there is a problem. This way, you don't have to screw around with start values and can always use s0=4 with confidence.

1. If I'm understanding you correctly, your idea is to periodically compute X^2 mod N and (N - X)^2 mod N for a random X as a simple hardware check. There are at least two major problems with this. First, your hardware could make errors too rarely to be picked up by your check, but frequently enough to regularly produce incorrect LL residues: say, if it makes an error once in every 100M squarings. Second, your check doesn't use any of the actual intermediate values from the LL test, so you could never be totally sure that the test itself contained no errors, even if you ran hundreds of checks for every LL iteration.
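The algebra behind the proposed check is easy to verify: with N = 2^p - 1 and Y = N - X, we have Y^2 ≡ X^2 (mod N), so the two squarings must agree on correct hardware. A minimal Python sketch of the spot-check (function and variable names are mine, chosen for illustration):

```python
import random

# Sketch of the proposed spot-check: square a random X and its
# "conjugate" Y = N - X modulo N = 2^p - 1, and confirm the results
# agree, which they must, since (N - X)^2 ≡ X^2 (mod N).
def spot_check(p, rng=random.Random(2024)):
    N = (1 << p) - 1
    X = rng.getrandbits(p) % N
    Y = N - X                # equals N Xor X, since N is all ones in binary
    lhs = (X * X - 2) % N
    rhs = (Y * Y - 2) % N
    return lhs == rhs        # a mismatch would indicate a hardware error

print(all(spot_check(p) for p in (31, 61, 127)))  # -> True on correct hardware
```

Note that this tells you nothing about the LL iterates themselves: the check squares freshly generated random values, so an error that struck only the real LL data would go unnoticed.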
2. Using a shift is not "screwing around with start values". It's still using s0=4, but everything is shifted, including the 2 that is subtracted at each step. The shift is a separate concept from using an alternative starting value such as 10, which would give a completely different sequence of residues. If we run one test on a composite Mersenne number with s0=4 and another with s0=10, we won't get matching residues, so this would not be a suitable method for double-checking (except for primes).
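The non-matching residues are easy to see on a small example. A quick sketch (the choice of p = 11, so M = 2047 = 23 * 89 is composite, is mine, purely for illustration):

```python
# LL-style iteration s -> s^2 - 2 (mod M) from two different start values.
# On a composite Mersenne number the final residues do not match, so
# running one test with s0=4 and one with s0=10 cannot double-check anything.
def ll_residue(p, s0):
    M = (1 << p) - 1
    s = s0
    for _ in range(p - 2):
        s = (s * s - 2) % M
    return s

p = 11  # M = 2047 = 23 * 89, a composite Mersenne number
print(ll_residue(p, 4), ll_residue(p, 10))  # two different nonzero residues
```

For a Mersenne prime both start values do end at 0 (e.g. p = 13, M = 8191), which is why the "except for primes" caveat above holds.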
3. As LaurV says, the main intention of the shift is that it will pick up bugs in the FFT implementation: while morally we're performing the same calculations, the numbers that we need to square are bit-shifted, so the FFT computations are different. An FFT bug would therefore produce different incorrect residues with different shifts. Hardware errors are much easier to pick up: even without a shift, they would lead to residues that don't match, if they aren't caught by other means first.
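To see why the shifted run computes "morally the same" sequence, note that if t = s * 2^k (mod M), then t^2 - 2*2^(2k) = (s^2 - 2) * 2^(2k), so squaring doubles the shift and the subtracted 2 must be shifted to match; since 2^p ≡ 1 (mod 2^p - 1), the shift can be tracked modulo p. A sketch under those observations (function name and parameters are mine):

```python
# Shifted LL iteration: track t = s * 2^k (mod M) instead of s itself.
# Squaring doubles the shift, and the "- 2" is shifted to match:
#   t^2 - 2*2^(2k) = (s^2 - 2) * 2^(2k)   (mod M)
# so unshifting at the end recovers exactly the unshifted residue,
# even though every intermediate number fed to the FFT is different.
def ll_shifted(p, k, iters):
    M = (1 << p) - 1
    s = 4                          # ordinary iteration
    t = (4 << k) % M               # shifted iteration, initial shift k
    shift = k
    for _ in range(iters):
        s = (s * s - 2) % M
        shift = (2 * shift) % p    # shifts live modulo p since 2^p ≡ 1 (mod M)
        t = (t * t - (2 << shift)) % M
    # unshift: multiplying by 2^(p - shift) is dividing by 2^shift mod M
    unshifted = (t << (p - shift)) % M
    return s, unshifted

s, u = ll_shifted(31, 5, 20)
print(s == u)  # -> True: same residue, different intermediate numbers
```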
4. GIMPS now prefers to use PRP rather than LL for primality tests, apart from double-checks of existing LL results. This is because there is a robust error-checking procedure called the Gerbicz error check which is almost certain to catch any errors during the test. PRP tests can also be certified, allowing a quick verification that the work was done correctly; this means that a double-check is not needed.
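For the curious, the core idea of the Gerbicz check can be sketched in a few lines. This is a simplified toy, not GIMPS's actual implementation: the PRP iteration is x -> x^2 (mod N) from x0 = 3, the block size L is arbitrary, and a real implementation verifies far less often than every block to keep the overhead tiny.

```python
# Toy Gerbicz error check for a PRP test x -> x^2 (mod N), x0 = 3.
# Every L squarings, fold the current value into a running product d of
# block endpoints. Invariant: after absorbing a new block endpoint,
#   d_new ≡ 3 * d_old^(2^L)   (mod N),
# because raising the product of endpoints to the power 2^L advances
# each endpoint by exactly one block. Recomputing d_old^(2^L) and
# comparing catches, with overwhelming probability, any error that
# corrupted the squarings in between.
def prp_with_gerbicz(N, total_squarings, L):
    x = 3
    d = 3                          # checksum: product of block endpoints so far
    for i in range(1, total_squarings + 1):
        x = (x * x) % N
        if i % L == 0:
            d_old = d
            d = (d * x) % N
            if d != (3 * pow(d_old, 1 << L, N)) % N:
                raise RuntimeError("error detected in this block")
    return x                       # final PRP residue, verified block by block

residue = prp_with_gerbicz((1 << 61) - 1, 1000, 100)
```

The key property is that the checksum is built from the actual intermediate values of the test, which is exactly what the spot-check proposed in the quote lacks.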