Quote:
Originally Posted by CRGreathouse
Silverman suggested talking about some real mathematics, maybe the parity problem. I don't know if we have enough expertise here for a good discussion, but I thought I'd give it a go.
Tao has some great expository articles:
https://terrytao.wordpress.com/2007/...sievetheory/
https://terrytao.wordpress.com/2014/...mobstruction/
https://terrytao.wordpress.com/2015/...sievetheory/
https://terrytao.wordpress.com/2014/...boundederror/
https://terrytao.wordpress.com/2016/...mptoticsieve/
Here are some papers showing real results breaking the parity barrier: Friedlander & Iwaniec, Heath-Brown, Helfgott, Heath-Brown & Moroz, Ramaré & Walker, Tao, and Zhang.
It's hard, and there are still no general techniques, but it's doable. Bilinear forms seem to be a recurring theme.

The basic problem, even for weighted sieves, is that each time one adds an (additional)
element to the sieve, a tiny bit of error creeps in. With enough elements the error
term overtakes the main term. Typically, when sieving the integers up to B, one can
use up to [log(B)]^m sieve elements for some m, or under some conditions up to
B^epsilon for small epsilon, before the errors become too large. This is sometimes
known as the "fundamental lemma of the sieve".
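This error accumulation is easy to see in the simplest (Legendre/inclusion-exclusion) sieve. A minimal sketch, not from the post itself: count the integers in [1,N] coprime to the first k primes, comparing the exact inclusion-exclusion count against the "main term" N * prod(1 - 1/p). Each of the 2^k inclusion-exclusion terms replaces a floor by an exact ratio and so contributes up to 1 of error, which is why the usable number of sieve elements is limited.

```python
from itertools import combinations
from math import prod

def primes_up_to(k):
    """Trial-division list of the primes <= k (fine for small k)."""
    ps = []
    for n in range(2, k + 1):
        if all(n % p for p in ps):
            ps.append(n)
    return ps

def coprime_count_exact(N, ps):
    """Exact count of n in [1,N] coprime to every prime in ps,
    via inclusion-exclusion over squarefree products of the ps."""
    total = 0
    for r in range(len(ps) + 1):
        for combo in combinations(ps, r):
            total += (-1) ** r * (N // prod(combo))
    return total

def coprime_count_main(N, ps):
    """'Main term': each floor(N/d) replaced by N/d, giving N * prod(1 - 1/p)."""
    return N * prod(1 - 1 / p for p in ps)

if __name__ == "__main__":
    N = 10**4
    ps = primes_up_to(30)
    for k in range(1, len(ps) + 1):
        exact = coprime_count_exact(N, ps[:k])
        main = coprime_count_main(N, ps[:k])
        print(f"k={k}: exact={exact}, main={main:.1f}, error={exact - main:+.1f}")
```

With each added prime the number of error-carrying terms doubles, so the gap between the exact count and the main term tends to grow as k does, exactly the effect described above.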
Example: how many integers in [1,101] are divisible by 3? The answer is trivially 33.
But it is also 33 for [1,99] and [1,100]. We estimate the number in [1,N] as N/3,
but this is seen to be a "little bit wrong": the true count is floor(N/3). If we want
to sieve by all the primes up to K without error, we need to take
B >> 2*3*5*...*K ~ exp(K). Thus we can only sieve by the primes up to about log(B)
if we want to avoid accumulating errors. When we bound the error (depending on the
weighting scheme) we can take up to log^m(B) sieve elements.