mersenneforum.org (https://www.mersenneforum.org/index.php)
-   Prime Gap Searches (https://www.mersenneforum.org/forumdisplay.php?f=131)
-   -   Gaps between non-consecutive primes (https://www.mersenneforum.org/showthread.php?t=27301)

robert44444uk 2022-01-11 16:38

[QUOTE=robert44444uk;597644]I'm trying to understand the approach.

I've found an 1886-tuplet pattern of width 15898 on the internet, [URL="https://math.mit.edu/~primegaps/tuples/admissible_1886_15898.txt"]https://math.mit.edu/~primegaps/tuples/admissible_1886_15898.txt[/URL] - so is the idea to get a Chinese Remainder offset (C) based on mods of primes < 400, referenced to the start prime of the large gap (P), and then to PRP test from P+n*C to P+n*C+15900, n integer? Or is there further sieving to do? Are the Chinese mods obtained by a greedy algorithm?

Is such a large Chinese remainder potentially inferior to a much smaller one (c) based around, say, a 1000-tuplet, where, if the prime count was high after testing, the whole range could then be tested? I'm thinking this trades off the greater chance of primes in ranges close to P, i.e. at P+c*n, against the low chance at P+C*n.[/QUOTE]

So if I have done this right, the CRT offset is 49268213492433141497814341275312197605721177068674522156228345708919204704299688530737031645921153516711274856551464412837807539813310218115111791871010488468153

This is based on, for each prime up to 400, the residue class that never produces 0 mod (that prime) for any value in the admissible set: so 1 mod 2, 1 mod 3, 3 mod 5, 4 mod 7, ...

This is approx. 5e160, compared to the prime at the start of the 15900 gap, which is about 2e174, so it looks fine to play around with.
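For anyone reproducing this, a minimal sketch of the construction as described above (assuming the admissible offsets from the linked file have already been loaded into a Python list `offsets`, and using sympy for the prime list; both choices are illustrative, not the exact code used here):

[CODE]# Sketch: build an offset c such that c + x is divisible by no prime p < 400
# for any x in the admissible tuple (offsets = the 1886 integers from the file).
from sympy import primerange

def crt_offset(offsets, prime_limit=400):
    c, modulus = 0, 1
    for p in primerange(2, prime_limit):
        # residue classes for c that would make some c + x divisible by p
        forbidden = {(-x) % p for x in offsets}
        # admissibility guarantees at least one class survives
        b = next(r for r in range(p) if r not in forbidden)
        # CRT step: shift c into class b mod p without disturbing it mod `modulus`
        c += ((b - c) * pow(modulus, -1, p) % p) * modulus
        modulus *= p
    return c, modulus
[/CODE]

For this particular tuple there is only one open residue class per prime (as noted in the reply below), so the result does not depend on how the surviving class is picked.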

mart_r 2022-01-11 22:52

I get the same CRT offset with the pattern you linked to, so that's correct.
For my result, I didn't bother too much about sieving and just tested n*p#+c+x for primes, incrementing n whenever the number of primes found so far fell below a customized threshold for the current x; something should be gained by applying an appropriate sieving technique. Up to 479#, there is only one open residue class for each prime, so it would merely have to be checked that not too many potential coprimes are cancelled out by the sieve.

Just for fun, here are offsets for some larger p#:
[CODE]n*401#+
39513451711353368972101707142676951932015103896038867201264946985487496815918734804213619530781605639321227211666671723990836697616026451458080515468952387537444673

n*409#+
5300963833569209940057949054368721853533208296343484993741509551411018530966721606193800807153951251943913507869529328702625642328421257335999756718592000205492358083

n*419#+
906110212932116653575732557665488316402090982314903912519073620662591471401157669028755216511381275347162712814920340507230752131956051717149856040611426373130703347023
[/CODE]
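A rough sketch of that unsieved loop (sympy's isprime stands in for whatever PRP test is actually used; the width, target, and primorial helper are illustrative; the early bail-out described above is omitted for brevity):

[CODE]# Sketch: for each n, count primes in n*P + c + [0, width) and keep the good n.
# Without sieving this is slow - each candidate is a ~170-digit PRP test.
from sympy import isprime, primerange

def primorial(p_max):
    prod = 1
    for p in primerange(2, p_max + 1):
        prod *= p
    return prod

def search(P, c, n_start, n_stop, width=15900, target=50):
    hits = []
    for n in range(n_start, n_stop):
        base = n * P + c
        count = sum(1 for x in range(width) if isprime(base + x))
        if count >= target:       # prime-count threshold to report, tune to taste
            hits.append((n, count))
    return hits

# e.g. hits = search(primorial(409), c, 1, 101)
[/CODE]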

robert44444uk 2022-01-12 09:04

I'm doing something wrong, I think, although I am not sure (maybe mart_r could check).

The average number of primes in a range of x = 15900 integers starting from a = 3483347771*409#/30 is 15900/ln(a), or approx. 39.65, given an average gap of about 401.

The average number of primes found for n from 0 to 100 in n*p#+c+x is 40.01, with a maximum prime count of 54 at n = 93. c offset: 49268....

I am surprised to see such a small average pickup; it is well within statistical bounds that there is no effect at all.

If I am doing this right, I am not sure the method pays off.
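For reference, the 39.65 baseline above can be checked in a couple of lines (sympy is assumed only for the prime list):

[CODE]# ln(a) for a = 3483347771 * 409# / 30, via ln(409#) = sum of ln p for p <= 409
from math import log
from sympy import primerange

ln_a = sum(log(p) for p in primerange(2, 410)) + log(3483347771) - log(30)
print(ln_a, 15900 / ln_a)   # roughly 401 and 39.6
[/CODE]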

robert44444uk 2022-01-12 12:27

[QUOTE=robert44444uk;597739]I'm doing something wrong, I think, although I am not sure (maybe mart_r could check).

The average number of primes in a range of x = 15900 integers starting from a = 3483347771*409#/30 is 15900/ln(a), or approx. 39.65, given an average gap of about 401.

The average number of primes found for n from 0 to 100 in n*p#+c+x is 40.01, with a maximum prime count of 54 at n = 93. c offset: 49268....

I am surprised to see such a small average pickup; it is well within statistical bounds that there is no effect at all.

If I am doing this right, I am not sure the method pays off.[/QUOTE]

I was doing this very wrong, silly me.

I think I was wrong to start at the deficient primorial 409#/30; I should have started with the full primorial 409#, with a lower multiplier, in this case 3483347771/30 = 116111593 rounded up. Then I don't multiply the offset c, I just add one to n each time. My results for the first 100 n above 116111593 show an average of 48.62 with a maximum of 63. That's more like it!
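In code, the corrected setup would look roughly like this (the same kind of unsieved loop as sketched earlier, only with whole primorials and the starting n computed from the multiplier; sympy's isprime again stands in for the actual PRP test):

[CODE]# Corrected indexing: candidates are n*409# + c + x with n stepping by 1,
# starting near the region already searched (3483347771/30, rounded up).
from sympy import isprime, primerange

P409 = 1
for p in primerange(2, 410):
    P409 *= p

c = 49268213492433141497814341275312197605721177068674522156228345708919204704299688530737031645921153516711274856551464412837807539813310218115111791871010488468153

n0 = (3483347771 + 29) // 30          # = 116111593
for n in range(n0, n0 + 100):
    count = sum(1 for x in range(15900) if isprime(n * P409 + c + x))
    print(n, count)
[/CODE]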

robert44444uk 2022-01-12 16:41

[QUOTE=robert44444uk;597555]I already did a bit of work at k=1000 but I might concentrate at k=200 and 500 and see where that goes[/QUOTE]

500 results - tested to approx. 1.1e12:

Largest gap: 16690 Average merit: 1.229114171 First prime: 622973626447
Smallest gap: 10306 Average merit: 0.795722241 First prime: 177726413581

200 results: tested to approx. 1.39e12

Largest: 7338 Average merit: 1.414200628 First prime: 185067242119
Smallest: 3646 Average merit: 0.694714106 First prime: 249072607711

100 results: tested to approx. 2.4e12

Largest: 4540 Average merit: 1.642701445 First prime: 1006401165853
Smallest: 1640 Average merit: 0.580014238 First prime: 1904361666929

robert44444uk 2022-01-14 09:44

[QUOTE=robert44444uk;597747]I was doing this very wrong, silly me.

I think I was wrong to start at the deficient primorial 409#/30; I should have started with the full primorial 409#, with a lower multiplier, in this case 3483347771/30 = 116111593 rounded up. Then I don't multiply the offset c, I just add one to n each time. My results for the first 100 n above 116111593 show an average of 48.62 with a maximum of 63. That's more like it![/QUOTE]

After 20 hours of processing, I'm afraid that I can't beat 87 primes in a range of 15900 - I'll continue the search though.
For n*p#+c+x, where p = 409, c = 492682..., x from 0 to 15900, 87 primes occur at n =
117575956
118482688

robert44444uk 2022-01-14 10:22

[QUOTE=robert44444uk;597768]

100 results: tested to approx. 2.4e12

Largest: 4540 Average merit: 1.642701445 First prime: 1006401165853
Smallest: 1640 Average merit: 0.580014238 First prime: 1904361666929[/QUOTE]


As an aid to the primorial-and-offset approach, it is relatively simple to show that there is no range of 100 primes p1..p100 at relatively small p1 whose total width is smaller than some gap between consecutive primes below p1.

In the exhaustive check of ranges spanning 101 primes (a proxy for 100) reported above, with p1 < 2.4e12, no range has an average merit below 0.58; since 100 gaps of average merit 0.58 already span about 58*ln(p1), such a range would have to fit inside a single gap below 2.4e12 whose merit is > 58. As the largest merit ever found is not even 42, there is the basis for the proof.

mart_r 2022-01-14 21:25

[QUOTE=robert44444uk;597925]After 20 hours of processing, I'm afraid that I can't beat 87 primes in a range of 15900 - I'll continue the search though.
For n*p#+c+x, where p = 409, c = 492682..., x from 0 to 15900, 87 primes occur at n =
117575956
118482688[/QUOTE]


It took me a couple of days, so I do think you have a chance to beat me at my own game :smile:
About two years ago I was also trying to squeeze even more primes into an interval of 8348, but even after a couple more days of searching I didn't get past about 94 or 95 primes.

robert44444uk 2022-01-15 11:35

A slight improvement in the prime count for n in n*p#+c+x, where p = 409, c = 492682..., x from 0 to 15900

123733011 91
120673847 88
121848887 88

In terms of sieving, I found it useful to break the range x down into four equal parts and set cumulative prime-count targets for each part. I found at least 8 values of n with 50 or more primes at the halfway point, with a top value of 53.
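A sketch of that early-abort idea (the checkpoint targets below are illustrative placeholders, not the actual values used):

[CODE]# Give up on a window early if the running prime count misses a cumulative
# target at each quarter of the range; only promising n get a full count.
from sympy import isprime

def count_with_targets(base, width=15900, targets=(18, 38, 60, 85)):
    checkpoints = [width * (i + 1) // 4 for i in range(4)]
    count, nxt = 0, 0
    for x in range(width):
        if isprime(base + x):
            count += 1
        if nxt < 4 and x + 1 == checkpoints[nxt]:
            if count < targets[nxt]:
                return None      # abandon this n early
            nxt += 1
    return count
[/CODE]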

I do feel that it would be better to concentrate more on the 41 merit gap, maybe using a few new tweaks to get to the 101 level. Almost 42 merit is totally different to 39 in terms of space.

robert44444uk 2022-01-17 08:56

A couple more, getting closer, but not that close

124977806 92
125316443 90

mart_r 2022-01-17 21:19

[QUOTE=robert44444uk;598157]A couple more, getting closer, but not that close

124977806 92
125316443 90[/QUOTE]
Any chance you can still tweak the algorithm a bit in your favor?


Some possibly useful ideas, theory-wise:

Let [$]p_n[/$] be large (well, say, > 10[SUP]6[/SUP]), let [$]p_{n+k}[/$] be the k-th prime after [$]p_n[/$] (k < [$]\sqrt{p_n}[/$], just to be safe).

[$]g = p_{n+k} - p_n[/$]

[$]m = G(p_{n+k})-G(p_n)-k+1[/$], G(x) being the formula for the blue line in the graph in [URL]http://www.primefan.ru/stuff/primes/table.html#theory[/URL]

[$]CSG^* = \frac{m \cdot |m|}{g}[/$]

[$]m^* = CSG^* \cdot \log p_{n+k}[/$]
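To make the bookkeeping concrete, a small sketch of those quantities in code; G(x) is stood in for by the logarithmic integral li(x) via mpmath, which may well differ from the actual blue-line formula on the linked page, so treat this purely as an illustration of the definitions:

[CODE]# g, m, CSG* and m* for the interval from p_n to the k-th prime after it.
# G(x) is approximated by li(x) here; the linked page's formula may differ.
from mpmath import li, log
from sympy import nextprime

def m_star(p_n, k):
    q = p_n
    for _ in range(k):              # q = p_{n+k}, the k-th prime after p_n
        q = nextprime(q)
    g = q - p_n                     # gap between the non-consecutive primes
    m = li(q) - li(p_n) - k + 1     # surplus of "expected" over actual primes
    csg = m * abs(m) / g            # CSG* = m*|m| / g
    return csg * log(q)             # m* = CSG* * log(p_{n+k})
[/CODE]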

One may be inclined to expect the distribution of [$]m^*[/$] to behave like that of the merits of usual prime gaps. This may even be halfway right, as experimental data suggests; for example, these 1,751,000 samples of intervals of length 10[SUP]5[/SUP] with p in the vicinity of 34*10[SUP]12[/SUP] can be turned into this (similar results for other parameters):

[CODE]m*< #times r (#/total) log(1-r)
1 1581721 0.903324386 -2.336394091 1)
2 1695327 0.968205026 -3.448447043
3 1731016 0.988587093 -4.473010379
4 1743358 0.995635637 -5.434282983
5 1747999 0.998286122 -6.368996766
6 1749835 0.999334666 -7.315221245
7 1750528 0.999730440 -8.218718626
8 1750830 0.999902913 -9.239899174
9 1750932 0.999961165 -10.15618991
10 1750978 0.999987436 -11.28465516
11 1750990 0.999994289 -12.07311252
12 1750995 0.999997144 -12.76625970
13 1750997 0.999998287 -13.27708532
14 1750998 0.999998858 -13.68255043
15 1750999 0.999999429 -14.37569761
[/CODE]
1) Note: for usual gaps and merit < 1, log(1-r) would statistically be around -1, but [$]m^*[/$] is < 0 around half of the time, and due to the special treatment the distribution in the low range of [$]m^*[/$] is somewhat... different. Darn, I need to find the time to catch up on Gaussian/Poisson/... distribution measures.

Would that lead to a way to conjecture that the gaps between non-consecutive primes are bounded by a constant times [$](\log(p)+k) \cdot \log(p)[/$] ? Or am I thinking way too complicated?

