Thread: GIMPS progress
2018-12-16, 17:03   #11
kriesel

GIMPS minor milestone progress versus year

The million-value progression of first test and verification, the deltas between them, and their ratios are tabulated by calendar year. The difference (lag) between first test and verification is growing, but the ratio has been remarkably stable since 2010.

First-test progress averaged about 6 million per year over 2012-2018.
Verification progress averaged about 3.3 million per year over 2012-2018.

A rough linear extrapolation of those million-multiple milestones (always an iffy proposition over long time frames), made in late December 2018, is:
year test verify
2019 87M 49M (matched)
2020 94M 52M (exceeded; 100M reached 2020-12-04 and 54M 2020-12-04)
2021 100M 56M (exceeded; 107M reached 2021-12-29; 59M 2021-12-22)
2022 106M 59M (exceeded; 108M reached 2022-02-28; 60M 2022-04-05)
2023 112M 62M
2024 118M 66M
2025 124M 69M
2026 130M 72M
2027 136M 76M
2028 142M 79M
2029 148M 82M
2030 154M 86M
2040 214M 119M DC becomes largely moot with widespread adoption of PRP-proof and completing LL DC backlog; PRP DC backlog is smaller so will likely complete much quicker
2050 274M 152M Koomey's law projection for end of increasing computing efficiency
2060 334M 185M
2070 394M 218M
2080 454M 251M
2090 514M 284M
2100 574M 317M
2110 634M 350M
2120 694M 383M
2130 754M 416M
2140 814M 449M
2150 874M 482M
2160 934M 515M
2170 994M 548M
2171 1000M 551M current mersenne.org limit reached by first tests. (mprime, prime95 ok on such exponents now)
2307 1816M 1000M current mersenne.org limit reached by verifications (mfaktx, mlucas, gpuowl ok on such exponents now)
3671 10000M 5501M 10G first tests
5034 18178M 10000M 10G verified

(assumes PRP does not dramatically change verification workload)
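
For anyone who wants to reproduce the tail of the table, a minimal sketch of the linear extrapolation follows, taking the 2019 row (87M first tested, 49M verified) as the anchor and the average rates above (~6M/year and ~3.3M/year); the published rows were rounded a little differently, so individual years may differ by about a million.

[CODE]
# Minimal sketch of the linear milestone extrapolation, in millions of exponent.
# Assumed anchors (the 2019 row above): 87M first tested, 49M verified,
# advancing at ~6M/year and ~3.3M/year respectively.

BASE_YEAR = 2019
TEST_BASE, TEST_RATE = 87, 6.0        # millions, millions per year
VERIFY_BASE, VERIFY_RATE = 49, 3.3    # millions, millions per year

def milestone(year, base, rate):
    """Projected milestone (millions) for a given calendar year."""
    return base + rate * (year - BASE_YEAR)

def year_reached(target, base, rate):
    """Calendar year at which a milestone (millions) is projected to be reached."""
    return BASE_YEAR + (target - base) / rate

print(2030, round(milestone(2030, TEST_BASE, TEST_RATE)),
            round(milestone(2030, VERIFY_BASE, VERIFY_RATE)))   # ~153M / ~85M (table: 154M / 86M)

for target in (1000, 10000):          # current mersenne.org limit, then 10G
    print(target, "M first tested ~", round(year_reached(target, TEST_BASE, TEST_RATE)))
    print(target, "M verified     ~", round(year_reached(target, VERIFY_BASE, VERIFY_RATE)))
# -> ~2171 and ~2307 for 1000M, ~3671 and ~5034 for 10000M, matching the table.
[/CODE]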

2171 - 2018 = 153 years. About six Mersenne primes are expected in that span. That implies future discoveries about once every 153/6 ≈ 25 years on average.
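
One way to arrive at the "about six" figure is the Lenstra-Pomerance-Wagstaff heuristic of roughly e^gamma ≈ 1.78 expected Mersenne prime exponents per doubling of the exponent range; the following back-of-envelope check assumes that heuristic and a ~87M first-test wavefront.

[CODE]
# Back-of-envelope check of the "about six Mersenne primes" figure, assuming
# the Lenstra-Pomerance-Wagstaff heuristic: roughly e^gamma ~ 1.78 Mersenne
# prime exponents expected per doubling of the exponent range.
import math

E_GAMMA = math.exp(0.5772156649)      # e^gamma ~ 1.781

def expected_mersenne_primes(p_lo, p_hi):
    """Heuristic expected count of Mersenne prime exponents in (p_lo, p_hi)."""
    return E_GAMMA * math.log2(p_hi / p_lo)

count = expected_mersenne_primes(87e6, 1000e6)   # ~87M wavefront up to the 1000M limit
print(round(count, 1))                           # ~6.3, i.e. "about six"
print(round((2171 - 2018) / count, 1))           # ~24 years per discovery; ~25 using count = 6
[/CODE]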

The above extrapolation contains an implicit assumption that, while the exponents become much more computationally intensive over time, evolutionary increases in computing power, software improvements, and historical growth in participant numbers will offset that to the extent that the annual rate of advance in exponent value stays about constant. While computing a primality test for a 10-times-larger exponent is more than 100 times harder, ten years of development provides significantly faster hardware. The projection may therefore be quite pessimistic. On the other hand, shrinking integrated-circuit feature size as a means of continuing Moore's law cannot continue indefinitely. (The size of atoms and the onset of quantum effects limit feature shrink; heat dissipation limits clock rates; practical considerations limit die size, die count, and system count.) The projected schedule above is also at odds with the estimated schedule for approaching the limit imposed by Landauer's principle and Koomey's law for irreversible computing.

The long time spans in the table above imply certain assumptions that allow the project to continue, including personnel succession plans, and economic and societal stability sufficient to sustain volunteer effort, electrical power, computing-equipment manufacture, and internet communications.

A second tabulation of GIMPS historical minor milestones made more recently is also attached.

Note: the projection predates the following developments, each of which may accelerate progress somewhat:
The introduction of the Radeon VII GPU
Dramatic performance improvements in the Gpuowl FFT code
Ben Delo's first-primality-test throughput
Dmbeeson's LL DC throughput
A ~99% reduction in PRP DC effort, through the introduction of PRP proofs
The ending of LL first-test assignments from the PrimeNet server, to encourage a faster transition to PRP/GEC/proof
Possible reductions in P-1 cost through the approaches in https://mersenneforum.org/showthread.php?t=25799, https://mersenneforum.org/showthread.php?t=27180, https://mersenneforum.org/showthread.php?t=27366
Reduction in TF and P-1 bounds in response to the near elimination of DC effort by PRP with valid proof generation

Since the work to primality test a single exponent scales as ~p^2.1 for the current wavefront and code, a sudden doubling in computing power is absorbed easily. The doubled rate of primality testing at p declines to the same rate at ~1.39p (2^(1/2.1) ≈ 1.391). Similarly, for a tenfold increase in computing power, 10^(1/2.1) ≈ 2.99.
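
A quick numeric check of those two figures, assuming the ~p^2.1 cost scaling:

[CODE]
# How much higher an exponent can be tested in the same time when computing
# power grows by a factor k, assuming per-exponent cost ~ p^2.1 as above.
COST_EXPONENT = 2.1

def exponent_multiplier(compute_multiplier):
    """Factor by which the reachable exponent grows for a given compute gain."""
    return compute_multiplier ** (1.0 / COST_EXPONENT)

print(round(exponent_multiplier(2), 3))    # ~1.391: doubled power -> ~1.39x exponent
print(round(exponent_multiplier(10), 2))   # ~2.99:  tenfold power -> ~3x exponent
[/CODE]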

But at 2x or 10x greater computing power, and consequently higher exponents, there are on average more exponents to be tested between exponents 10% apart, or between successive Mersenne primes.
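
To put rough numbers on that, the count of prime exponents in a 10% band can be estimated from the prime number theorem: roughly half a million candidates near p = 100M versus roughly 4.6 million near p = 1000M, before TF and P-1 remove most of them. A sketch, with those two values chosen only for illustration:

[CODE]
# Rough count of prime exponents between p and 1.1*p, via the prime number
# theorem approximation pi(x) ~ x / ln x.  Illustrative only; TF and P-1
# factoring remove a large fraction of these before any primality test.
import math

def candidates_in_10pct_band(p):
    """Approximate number of primes in (p, 1.1*p)."""
    return 1.1 * p / math.log(1.1 * p) - p / math.log(p)

for p in (100e6, 1000e6):                  # illustrative wavefront exponents
    print(f"p = {p:.0e}: ~{candidates_in_10pct_band(p):,.0f} prime exponents in (p, 1.1p)")
[/CODE]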

The usual reply to this is hope placed in quantum computing. Assuming the technical issues and theoretical doubts can be overcome, a 3,000,000:1 speedup would enable ~1200x higher exponents, and so faster progression through the number line toward p ~ 10^12. Such equipment is likely to remain rare and cost-prohibitive for years to come. Storage would also be an issue.
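
The ~1200x figure is consistent with the same ~p^2.1 cost scaling assumed above:

[CODE]
# The ~1200x figure follows from the same ~p^2.1 cost scaling:
# a 3,000,000:1 speedup raises the reachable exponent by 3e6^(1/2.1).
print(round(3_000_000 ** (1 / 2.1)))       # ~1214, i.e. roughly 1200x
[/CODE]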


Just for fun, projections to completion of DC (and TC, etc., as required) up to the levels of Mp#48, Mp#49*, Mp#50*, and Mp#51* have been added to the second attachment. These are estimates. (Estimate: a value we know is wrong, that is used anyway, with caution, when it's all we have.)


Top of reference tree: https://www.mersenneforum.org/showpo...22&postcount=1
Attached Files
File Type: pdf milestones.pdf (57.7 KB, 14 views)
File Type: pdf gimps progress and rate.pdf (60.9 KB, 13 views)

Last fiddled with by kriesel on 2022-04-05 at 13:38 Reason: updated for verify progress, added lead vs forecast by year