Thread: Parity barrier
2020-02-13, 05:49   #3
R.D. Silverman
 
Nov 2003

Quote:
Originally Posted by R.D. Silverman
The basic problem, even for weighted sieves, is that each time one adds an (additional)
element to the sieve a tiny bit of error creeps in. With enough elements the error
term overtakes the main term. Typically, when sieving integers up to B, one can
use up to [log(B)]^m elements in the sieve for some m, or under some conditions
B^epsilon for small epsilon, before the errors become too large. This is sometimes
known as the "fundamental lemma of the sieve".

Example: How many integers in [1,101] are divisible by 3? The answer is trivially 33.
But it is also 33 for [1,99] and [1,100]. We estimate the number in [1,N] as N/3,
but this is seen to be a "little bit wrong". If we want to sieve all the primes up to
K without error we need to take B >> 2*3*5*...*K ~ exp(K). Thus we are only allowed
to have about log(B) primes in the sieve if we want to avoid accumulating errors. When
we bound the error (depending on the weighting scheme) we can take up to log^m(B)
sieve elements.
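
A small numerical illustration of both points (again just a sketch, with arbitrary
example values): part (1) shows the sub-1 error in the estimate N/3 for the three
intervals above, and part (2) checks that log(2*3*5*...*K) grows roughly like K,
so an error-free B must be of size about exp(K).

Code:
from math import log

# (1) The count of multiples of 3 in [1, N] is N // 3; the estimate N/3 is off
#     by a fraction of 1, and each additional sieve prime adds such an error.
for N in (99, 100, 101):
    print(f"N={N}:  exact={N // 3}  estimate={N / 3:.3f}  error={N // 3 - N / 3:+.3f}")

# (2) Exactness for every prime up to K forces B to be a multiple of the
#     primorial 2*3*5*...*K, whose logarithm is roughly K (Chebyshev),
#     i.e. B must be of size about exp(K).
def is_prime(n):
    return n > 1 and all(n % d for d in range(2, int(n ** 0.5) + 1))

for K in (10, 30, 100, 300, 1000):
    primorial = 1
    for p in range(2, K + 1):
        if is_prime(p):
            primorial *= p
    print(f"K={K:5d}  log(2*3*5*...*K) = {log(primorial):9.2f}   (compare with K)")
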
With regard to the bilinear forms: the successes have come because the range sets
for these forms have "sufficient density". When one considers (say) x^2 + 1, there
are ~sqrt(B) such integers up to B. But if we take a bilinear form such as x^2 + y^4,
there are ~B^(3/4) such integers less than B. This is "just enough more"
so that sieve methods can succeed; the range sets are just a little bit denser.
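
A quick numerical sanity check of the density claim (again only a sketch; the bound
B and the restriction to x, y >= 1 are arbitrary choices): count the values of
x^2 + 1 up to B and the distinct values of x^2 + y^4 up to B, and compare them with
B^(1/2) and B^(3/4).

Code:
from math import isqrt

def count_x2_plus_1(B):
    # x^2 + 1 <= B with x >= 1  <=>  x <= sqrt(B - 1); all such values are distinct
    return isqrt(B - 1)

def count_x2_plus_y4(B):
    vals = set()                      # keep distinct values only
    y = 1
    while 1 + y ** 4 <= B:
        x = 1
        while x * x + y ** 4 <= B:
            vals.add(x * x + y ** 4)
            x += 1
        y += 1
    return len(vals)

for B in (10 ** 4, 10 ** 5, 10 ** 6):
    print(f"B={B:8d}  #(x^2+1)={count_x2_plus_1(B):5d}  B^0.5={B ** 0.5:8.1f}   "
          f"#(x^2+y^4)={count_x2_plus_y4(B):6d}  B^0.75={B ** 0.75:9.1f}")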

BTW, I have read Halberstam & Richert's "Sieve Methods" a couple of times. I have
it on good authority from an expert (my ex) that it is a great reference, but not a
great textbook to learn from. I found it frustrating to read and understand. I still
can't claim to understand it, but it is a good starting point.