Quote:
Originally Posted by gophne
The task is daunting indeed....under normal circumstances 2030 would be optimistic as well!!! more like 3020! would make more sense.

Daunting to be sure, but I think there is reason for hope. Much progress has been made recently, and a mathematician who has all of the tools in her/his toolbox will be better able to attack the problem.
The Prime Number Theorem gives the ("trivial") estimate: the gap following a prime p is infinitely often at most about log p.
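In symbols (my paraphrase of the standard argument, not spelled out in the post): since the PNT gives pi(x) ~ x/log x, the average gap between consecutive primes near x is about log x, so some gap is infinitely often no bigger than its average:

```latex
\pi(x) \sim \frac{x}{\log x}
\quad\Longrightarrow\quad
\liminf_{n\to\infty} \frac{p_{n+1}-p_n}{\log p_n} \le 1 .
```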
Sieve theory was developed to work on problems like the twin prime conjecture, and Brun's sieve (1915) was used to prove that the sum of the reciprocals of the twin primes converges.
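To make the objects concrete, here is a small sketch that enumerates twin prime pairs with a plain sieve of Eratosthenes (not the Brun sieve itself, which is far more delicate) and computes a partial sum of the series Brun proved convergent:

```python
def primes_up_to(n):
    """Boolean primality table via the sieve of Eratosthenes."""
    sieve = [True] * (n + 1)
    sieve[0] = sieve[1] = False
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            for m in range(p * p, n + 1, p):
                sieve[m] = False
    return sieve

def twin_pairs(n):
    """All twin prime pairs (p, p+2) with p + 2 <= n."""
    s = primes_up_to(n)
    return [(p, p + 2) for p in range(2, n - 1) if s[p] and s[p + 2]]

pairs = twin_pairs(100)
partial = sum(1 / p + 1 / (p + 2) for p, _ in pairs)
print(pairs)        # (3,5), (5,7), (11,13), ... up to (71,73)
print(partial)      # a partial sum of Brun's convergent series
```

The full series converges to Brun's constant, roughly 1.902; the partial sums grow very slowly, which is consistent with convergence but (famously) proves nothing on its own.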
Westzynthius made the first improvement over the trivial bound, with further (slow) progress through the 1930s by Erdős and Rankin.
Linnik developed sieve theory further with the large sieve in 1941. It uses Fourier analysis and analytic number theory to allow larger sieving sets.
The Selberg sieve was also developed around this time, this one being a combinatorial sieve rather than an analytic one. Careful choice of sieve weights is essential in this method.
Using the large sieve, Barban, Bombieri, and Vinogradov proved a theorem which gives a Riemann Hypothesis-type error bound for primes in arithmetic progressions when the modulus is up to roughly square root size. The Elliott–Halberstam conjecture (EH) says that the modulus can be taken almost up to the size of the bounding variable itself. This kind of information would be useful in a proof of the twin prime conjecture.
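For reference, the usual statement of Bombieri–Vinogradov (standard notation, not from the post: psi(x; q, a) is the Chebyshev function counting prime powers congruent to a mod q with von Mangoldt weight, and phi is Euler's totient): for every A > 0 there is a B = B(A) such that

```latex
\sum_{q \le \sqrt{x}\,(\log x)^{-B}} \; \max_{(a,q)=1}
\left| \psi(x;q,a) - \frac{x}{\varphi(q)} \right|
\ll_A \frac{x}{(\log x)^{A}} .
```

Elliott–Halberstam predicts the same bound with the range of moduli extended to q up to x^(1-epsilon).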
Chen's theorem (1966) was the first near-proof of the twin prime conjecture, limited by the parity problem. It proves that there are infinitely many primes p such that p+2 is either prime or semiprime. Ross published a simplification using the Selberg sieve.
Fouvry & Iwaniec (1980) published an improved version of Bombieri–Vinogradov giving some information above the square root barrier, which in principle makes the twin prime problem attackable.
The Friedlander–Iwaniec theorem (1997) was essentially the first work to begin to break down the parity barrier, using a great deal of analytic number theory along with combinatorics and Fourier analysis.
Various subsets of Goldston, Graham, Motohashi, Pintz, and Yıldırım (~2005; two examples) proved many results about small gaps between primes. One example: on EH, there are infinitely many primes p followed by a gap of at most 16. Sieve weights and Selberg diagonalization are among the key tools.
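Two representative statements from that series of papers, in standard notation:

```latex
\liminf_{n\to\infty} \frac{p_{n+1}-p_n}{\log p_n} = 0
\quad\text{(unconditional)},
\qquad
\liminf_{n\to\infty}\,(p_{n+1}-p_n) \le 16
\quad\text{(assuming EH)}.
```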
Green & Tao (2006) published probably the last major twin-prime-related paper before the Zhang bomb dropped, and it gives an idea of what the state of knowledge was at that time.
Zhang's theorem, at its heart, is an improvement similar to Fouvry & Iwaniec along with the ingredients needed to apply it to the bounded gap problem. It proves that there are infinitely many primes p followed by a gap of at most 70,000,000. It may be the largest single step we've taken toward the twin prime conjecture.
The Polymath project (Polymath8a) simplified and improved Zhang's proof, reducing the gap to 4680.
Maynard added new ingredients to the proof which allowed a further reduction (to 600), then followed up with a new paper joint with Tao and the Polymath project giving a bound of 246 (and a bound of 6 on a stronger, generalized form of EH).
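The chain of bounded-gap results, written in liminf form:

```latex
\liminf_{n\to\infty}(p_{n+1}-p_n) \;\le\;
7\times 10^{7} \;(\text{Zhang}),\;\;
4680 \;(\text{Polymath8a}),\;\;
600 \;(\text{Maynard}),\;\;
246 \;(\text{Polymath8b}),
```

with a bound of 6 under the generalized Elliott–Halberstam conjecture.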
Ford, Konyagin, Maynard, Pomerance, & Tao (2018) published a paper, improving on other recent ones I'm too tired to survey at the moment, on long gaps between primes. There seems to be a duality between short gaps and long gaps, and this team seems to be unraveling the mystery.
So there really is a lot out there, if you have the patience to learn it. I have quite a lot of learning to do myself.
Edit: Here's a very rough chart of the connections between the steps leading us to current progress toward the twin prime conjecture. All mistakes are mine.