GIMPS double-checking certainty rate
Hi,
I've read this: [url]https://www.mersenne.org/various/math.php[/url]

Overall, I'm wondering if GIMPS achieves 100% certainty on the primality test? Does the double-checking step ensure 100% certainty, and if so, can you please elaborate a little on how it works? Why does re-running Lucas-Lehmer ensure 100% certainty? What changes between runs?

As side questions, I was wondering:
1) Is the convolution error check so slow that, hypothetically, it could not be used at every step and still finish in less than, say, 10 years?
2) Have you tested the all-integer weighted transform, which as far as I can tell should be lossless? Hypothetically, could it not be used to perform a primality test and finish in less than, say, 10 years?

Thank you,
[QUOTE=6pac;478515]What changes between runs?
[/QUOTE] It is called the "shift". See [URL="https://www.mersenne.org/various/math.php#double_checking"]here[/URL]. (The link points to the last section, "double-checking", but the page is built in such a way that you can't scroll below the bottom of the page, which is why the link appears to land in the middle of the "LL" section.)
[QUOTE=6pac;478515]Overall, I'm wondering if GIMPS achieves 100% certainty on the primality test?[/QUOTE]No, it doesn't. It only validates the last 64 bits of the residue for non-primes. So in theory there is a 1 in 2^64 chance of an invalid test matching the final residue.
So for there to be a missed prime there would have to be [u]two[/u] bad tests that manage to match on the final 64 bits. By my probably incorrect maths, that would be approximately a (1 in 2^64) * (4%^2) chance, where 4% is the average error rate. Primes are a wholly different matter and are checked much more thoroughly, many times on many systems. The chances of a false prime are considerably smaller than those of a missed prime. Still not 100%, but approaching very, very close to it.
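The estimate above can be worked through numerically (the 4% error rate is the post's assumption, not an official GIMPS figure): a missed prime requires two bad LL tests whose bogus 64-bit residues happen to agree.

```python
# Back-of-envelope estimate of the chance of a missed Mersenne prime.
error_rate = 0.04                        # assumed average rate of bad tests
p_two_bad_tests = error_rate ** 2        # both the first test and the double-check go bad
p_residues_collide = 1 / 2 ** 64         # their wrong 64-bit residues match anyway
p_missed_prime = p_two_bad_tests * p_residues_collide
print(p_missed_prime)                    # roughly 8.7e-23
```

Under those assumptions the chance is on the order of 10^-22 per exponent, which is why residue matching is considered effectively conclusive for composites.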
[QUOTE=retina;478536]Primes are a wholly different matter and are checked much more thoroughly, many times on many systems. The chances of a false prime are considerably smaller than having a missed prime. Still not 100%, but approaching very very close to it.[/QUOTE]Interim residues for primes are also cross-checked, so the chance of error is even lower. If we compare the residues every 10,000,000 iterations, that drives down the probability of errors causing a false positive. Also, using different FFT sizes reduces the probability that other errors sneak in.
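A toy illustration (hypothetical parameters, not GIMPS code) of why interim-residue cross-checking helps: inject a single bit flip into one of two otherwise identical LL runs, and the first mismatching checkpoint both detects and roughly localizes the error, long before the final residue.

```python
def ll_run(p, flip_at=None, checkpoint=100):
    """LL test of M_p recording the low 64 bits of the residue at checkpoints."""
    M = (1 << p) - 1
    s = 4
    checks = {}
    for i in range(1, p - 1):          # iterations 1 .. p-2
        s = (s * s - 2) % M
        if i == flip_at:
            s ^= 1 << 5                # simulate a single hardware bit flip
        if i % checkpoint == 0:
            checks[i] = s % 2**64      # record only the low 64 bits
    return s, checks

good, gc = ll_run(2203)                # M2203 is a known Mersenne prime
bad, bc = ll_run(2203, flip_at=650)
print(good == 0)                                 # True
print(min(i for i in gc if gc[i] != bc[i]))      # 700, the first checkpoint after the flip
```

One flipped bit diverges chaotically under repeated squaring, so the very next 64-bit checkpoint disagrees with overwhelming probability.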
[I]edited[/I]
Is 100% achievable in a practical amount of processing time (say 10 years)? Maybe we've done it before for some big Mi?
[QUOTE=6pac;478515]What changes between runs?[/QUOTE]
As LaurV pointed out, the shift changes. Also the hardware changes: different people use different computers. The principal source of error is hardware problems. Sometimes people overclock their machines past the point of reliability, sometimes a bit gets flipped by cosmic rays, or maybe the hardware is just crap to begin with. It takes only one tiny error in one calculation among tens of millions to invalidate the whole result.

Also the chips change, and the version of the program changes. Maybe the first LL test was done eight years ago on a Nehalem microarchitecture using SSE2, and the second LL test was done a few months ago on a Skylake using AVX-512. Or maybe the second test used a different program altogether, like Mlucas, or like CUDALucas or gpuOwL, which use the graphics card instead of the CPU.

[QUOTE=6pac;478538]Is 100% achievable in a practical amount of processing time (say 10 years)?[/QUOTE]

The potential sources of erroneous results include hardware unreliability, algorithmic implementation flaws, and human malice. Current procedures provide a high degree of confidence, but I don't know what would constitute "100%" certainty that a Mersenne number is composite, short of discovering a factor.
[QUOTE=GP2;478541] I don't know what would constitute "100%"[/QUOTE]
You can get 100% certainty by computing Lucas-Lehmer using fixed-point math. Distinct hardware gives different floating-point results (FPU, SIMD, GPU). The ALU, though, is deterministic across the same family of hardware (I have first-hand experience of this on x86-64). Now, whether you can compute Lucas-Lehmer using fixed-point math in a practical amount of time, I don't know.
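For what it's worth, the all-integer version of the test is trivial to state; a Python sketch using exact arbitrary-precision integers behaves exactly as the post describes, bit-for-bit identical on any hardware, with the trade-off being multiplication speed for huge operands compared to floating-point FFTs:

```python
def ll_is_prime(p):
    """Exact-integer Lucas-Lehmer test: M_p = 2^p - 1 is prime iff S_{p-2} ≡ 0."""
    M = (1 << p) - 1
    s = 4
    for _ in range(p - 2):
        s = (s * s - 2) % M    # exact arithmetic, no rounding anywhere
    return s == 0

print(ll_is_prime(521))   # True: M521 is a Mersenne prime
print(ll_is_prime(523))   # False: M523 is composite
```

The catch is purely performance: production programs use floating-point FFT multiplication precisely because exact schoolbook or naive big-integer squaring is orders of magnitude too slow at GIMPS exponent sizes.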
[QUOTE=6pac;478558]You can get 100% certainty by computing Lucas-Lehmer using fixed-point math. Distinct hardware gives different floating-point results (FPU, SIMD, GPU). The ALU, though, is deterministic across the same family of hardware (I have first-hand experience of this on x86-64). Now, whether you can compute Lucas-Lehmer using fixed-point math in a practical amount of time, I don't know.[/QUOTE]
Assuming no hardware errors, no programming errors, no bit flips in the process, etc.
[QUOTE=science_man_88;478562]Assuming no hardware errors, no programming errors, no bit flips in the process, etc.[/QUOTE]
I see what you mean. I guess there are two possible views on that. You could be a nazi (for lack of a better term) and say "because of, say, programming errors, 100% is not doable; you will always have to cross-check the result" (which is correct). Or you could say "we could complete a fixed-point math calculation, for example, cross-check it, and assume we've achieved 100%". Both views are interesting in my opinion. Personally, I would like to know we've achieved the latter, just for the challenge. Right now, assuming no hardware, programming, or cosmic errors, we know we don't achieve 100% (or maybe we do).
[QUOTE=6pac;478566]I see what you mean. I guess there are two possible views on that. You could be a nazi (for lack of a better term) and say "because of, say, cosmic rays, 100% is not doable". Or you could say "we could complete a fixed-point math calculation, for example, cross-check it because of cosmic rays, and assume we've achieved 100%". Both views are interesting in my opinion. Personally, I would like to know we've achieved the latter.[/QUOTE]
The main problem is that not many properties, other than the exponent being prime, are forced on the exponents that allow Mersenne primes. Also, because we can't deal with huge numbers well, it's not as simple as starting from a known Mersenne prime exponent's end point. Until something is proved about which prime exponents can give an all-zero result in general, the best thing to do is keep checking exponents.
[QUOTE=science_man_88;478567]The main problem is that not many properties, other than the exponent being prime, are forced on the exponents that allow Mersenne primes. Also, because we can't deal with huge numbers well, it's not as simple as starting from a known Mersenne prime exponent's end point. Until something is proved about which prime exponents can give an all-zero result in general, the best thing to do is keep checking exponents.[/QUOTE]
An idea (if I understand correctly) would be to get a candidate with floating-point math and then, just for fun, try to get "100% certainty" on it with a second pass.
[QUOTE=6pac;478568]An idea (if I understand correctly) would be to get a candidate with floating-point math and then, just for fun, try to get "100% certainty" on it with a second pass.[/QUOTE]
If your primary concern is the roundoff error (convolution error) associated with floating-point math, then you could try using a ridiculously large FFT size. It would take considerably longer, but presumably not as long as 10 years, or whatever your hypothetical fixed-point calculation would require. Perhaps there are some theoretical bounds that could be determined for a given exponent and FFT size, such that we could guarantee that the error will always be less than 0.5?

Another approach is to use the relatively new Gerbicz error check with a PRP test rather than the LL test. It has a much greater level of built-in error detection, which makes us considerably more confident of the final result even when there is only a single test.
[QUOTE=GP2;478575]If your primary concern is the roundoff error (convolution error) associated with floating-point math, then you could try using a ridiculously large FFT size.
It would take considerably longer, but presumably not as long as 10 years, or whatever your hypothetical fixed-point calculation would require. Perhaps there are some theoretical bounds that could be determined for a given exponent and FFT size, such that we could guarantee that the error will always be less than 0.5? Another approach is to use the relatively new Gerbicz error check with a PRP test rather than the LL test. It has a much greater level of built-in error detection, which makes us considerably more confident of the final result even when there is only a single test.[/QUOTE]I think we are good enough as far as certainty goes. I personally have no problem accepting 1 in 2^64 as sufficient proof. If that level of proof worries anyone, then they have much more pressing concerns, like being struck by lightning.
Powered by vBulletin® Version 3.8.11
Copyright ©2000 - 2021, Jelsoft Enterprises Ltd.