2020-11-27, 13:58  #199
Just call me Henry
"David"
Sep 2007
Cambridge (GMT/BST)
5,869 Posts 
Have just submitted 177 new gaps, of which 23 are new gap lengths (assuming I counted correctly). This might take a while for the queue to clear.
How deep does Danaj's surround_primes function sieve for gaps of length 150000? Also, is there a way of determining what % of time is spent sieving vs PRP testing by this function? I am wondering where the crossover point is for switching to testing using pfgw.
2020-11-27, 22:54  #200
"Seth"
Apr 2019
2^{4}×17 Posts 
Code:
def sieve_depth_pg(log2n):
    log2log2n = int(math.ceil(math.log2(log2n)))
    return log2n * (log2n >> 5) * int(log2log2n * 1.5) >> 1
I believe % time in sieve is very small (0.1% - 1%).
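For anyone wanting to evaluate the quoted formula without digging out the source, here is a self-contained sketch. The function body is taken verbatim from the post above; the sample inputs (a start value of roughly 20000 bits, i.e. around the size relevant to a 150000 gap search) are my own illustration:

```python
import math

# Sieve-depth formula as quoted above; log2n is the bit length
# of the gap's starting value.
def sieve_depth_pg(log2n):
    log2log2n = int(math.ceil(math.log2(log2n)))
    return log2n * (log2n >> 5) * int(log2log2n * 1.5) >> 1

# Example: a ~20000-bit start value
print(sieve_depth_pg(20000))  # 137500000, i.e. sieve to about 1.4e8
```

So the sieve depth grows roughly like log2n² · log2(log2n), which stays tiny compared to the cost of a single PRP test at these sizes.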

2020-11-28, 13:48  #201
Jun 2003
Oxford, UK
1,933 Posts 
For very large gaps I used the sieve_primes routine and then pfgw. Here is a ready-to-use Perl program - clearly change your variables as needed. It produces a pfgw-ready file.
Code:
#!/usr/bin/env perl
# takes a specific A - the best of a range of A already tested for
# candidates of type A*p#/46410 - and sieves deep.
# 1tn takes about 14 hours. 50m takes 70 minutes.
use warnings;
use strict;
use Math::BigFloat lib => "GMP";
use Math::Prime::Util qw/:all/;
use Math::GMPz;
use feature ':5.10';
use File::Slurp;
use ntheory ":all";
$| = 1;

my $mult     = 1787;
my $prim     = 80021;
my $range    = 1_021_020;
my $sievelim = 1_000_000_000_000;
my $div      = 46410;

my $fact      = Math::GMPz->new("".primorial($prim));
my $filetitle = "ABC"." ".$mult."*".$prim."#"."/".$div."+".q[$a];
my $n         = $mult*$fact/$div;

my @f = Math::Prime::Util::GMP::sieve_primes($n-($range/2), $n+($range/2), $sievelim);
foreach my $item (@f) {
    $item = $item-$n;
}
#say scalar(@f);
#print "@f\n";

my $filename = '1787to1tn.txt';
open(my $fh, '>', $filename) or die "Could not open file '$filename' $!";
print $fh "$filetitle\n";
print $fh join "\n", @f;
close $fh;
print "done\n";
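For readers without Perl handy, here is a scaled-down Python sketch of the same idea: build n = mult · prim#/div, then keep only offsets a in [-range/2, range/2] where n + a has no prime factor up to the sieve limit. All parameters here (prim = 97, sievelim = 1000, half-range 500) are toy values of my own choosing so the example runs instantly; the real run above uses prim = 80021 and a sieve limit of 10^12.

```python
import math

def primes_up_to(limit):
    """Simple sieve of Eratosthenes."""
    s = bytearray([1]) * (limit + 1)
    s[0:2] = b"\x00\x00"
    for p in range(2, int(limit ** 0.5) + 1):
        if s[p]:
            s[p * p :: p] = bytearray(len(range(p * p, limit + 1, p)))
    return [i for i, v in enumerate(s) if v]

def surviving_offsets(n, half_range, sievelim):
    """Offsets a in [-half_range, half_range] with n+a coprime
    to every prime <= sievelim (the candidates left for PRP testing)."""
    alive = bytearray([1]) * (2 * half_range + 1)  # index = a + half_range
    for q in primes_up_to(sievelim):
        # n + a ≡ 0 (mod q)  <=>  a ≡ -n (mod q)
        a0 = (-n) % q
        start = -half_range + ((a0 + half_range) % q)
        for a in range(start, half_range + 1, q):
            alive[a + half_range] = 0
    return [a - half_range for a, v in enumerate(alive) if v]

mult, div = 1787, 46410                       # div = 2*3*5*7*13*17 divides 97#
prim_primorial = math.prod(primes_up_to(97))  # 97#, toy stand-in for 80021#
n = mult * prim_primorial // div
offsets = surviving_offsets(n, 500, 1000)
print(len(offsets), "candidates survive")
```

The surviving offsets are exactly what would get written into the ABC file for pfgw; at real sizes the deep sieve (to 10^12) is done with sieve_primes as in the Perl program rather than with this naive per-prime loop.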
2020-12-01, 09:59  #202
"Seth"
Apr 2019
420_{8} Posts 
I can't run Perl right now on my computer. Would you mind sharing what the result looks like for 17 * 40009#/30030 + 4000, and also a rough runtime (X minutes)? That would be much appreciated.

2020-12-02, 08:00  #203
Jun 2003
Oxford, UK
11110001101_{2} Posts 
My strategy looked at a range of 1 million and sieved lightly over many $mult (typically 1 to 20000) to get the candidates that left the fewest remaining values for pfgw. I had a separate program for that.
Code:
ABC 17*40009#/30030+$a
-4000
-3900
-3840
-3750
-3744
-3640
-3564
-3528
-3328
-3300
-3168
-3150
-3072
-2880
-2808
-2800
-2772
-2640
-2560
-2500
-2464
-2430
-2268
-2184
-2178
-2160
-2112
-2100
-2080
-2058
-2028
-2002
-1980
-1944
-1890
-1872
-1848
-1800
-1792
-1782
-1690
-1540
-1452
-1408
-1404
-1344
-1320
-1260
-1210
-1200
-1188
-1152
-1144
-1134
-1120
-1092
-1014
-1008
-990
-960
-924
-910
-900
-864
-858
-840
-832
-810
-780
-768
-700
-640
-624
-600
-594
-550
-528
-490
-480
-468
-462
-448
-432
-420
-378
-330
-312
-288
-252
-250
-234
-180
-168
-160
-144
-132
-130
-120
-112
-108
-78
-64
-48
-40
-28
-22
-18
18
20
30
36
48
56
60
66
90
108
126
128
156
162
168
192
216
242
288
308
360
396
420
450
480
486
500
540
546
560
648
650
660
702
750
840
882
896
960
990
1008
1080
1100
1152
1176
1200
1248
1250
1320
1352
1386
1430
1452
1500
1512
1560
1728
1782
1820
1890
1980
2058
2100
2156
2160
2178
2268
2288
2310
2352
2430
2450
2520
2592
2640
2688
2730
2880
2912
2916
2940
2970
3150
3168
3200
3276
3300
3402
3510
3528
3696
3780
3822
3840
3872
3888
Last fiddled with by robert44444uk on 2020-12-02 at 08:01

2020-12-02, 09:03  #204
"Seth"
Apr 2019
2^{4}·17 Posts 
2020-12-02, 09:48  #205
"Seth"
Apr 2019
2^{4}×17 Posts 
Lines on merit graph
Most of my work is on a single K=P#/d (see how these form lines on this graph).
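A quick back-of-envelope check of why records sharing K = P#/d fall on straight lines in a gap-vs-merit plot: merit = gap/ln(p), and for start primes p near m·K with small multipliers m, ln(p) ≈ ln(K) is nearly constant, so gap is essentially proportional to merit. A stdlib sketch (4441# as the example K from this thread; the multiplier bound 20000 is borrowed from robert44444uk's earlier post and is otherwise my assumption):

```python
import math

def primes_up_to(limit):
    """Simple sieve of Eratosthenes."""
    s = bytearray([1]) * (limit + 1)
    s[0:2] = b"\x00\x00"
    for p in range(2, int(limit ** 0.5) + 1):
        if s[p]:
            s[p * p :: p] = bytearray(len(range(p * p, limit + 1, p)))
    return [i for i, v in enumerate(s) if v]

# ln(4441#) = sum of ln(q) over primes q <= 4441 (Chebyshev's theta)
ln_K = sum(math.log(q) for q in primes_up_to(4441))
# Across multipliers m = 1 .. 20000, ln(m*K) varies by at most ln(20000)
spread = math.log(20000)
print(f"ln K = {ln_K:.0f}, spread = {spread:.1f} ({100 * spread / ln_K:.2f}%)")
```

The relative spread in ln(p) is well under 1%, so all records for one K plot as one near-perfect line gap = merit · ln(K).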
I was curious which primorial had the most records, so I wrote this ugly command.
Code:
$ sqlite3 gaps.db 'SELECT startprime FROM gaps' | sed -n 's!^[^/]*[^0-9/]\([0-9]\+#\).*!\1!p' | sort | uniq -c | sort -nr | awk '$1 > 200 { ("sqlite3 gaps.db \"SELECT discoverer, count(*) FROM gaps WHERE startprime like \\\"%" $2 "%\\\" GROUP BY 1 ORDER BY 2 DESC LIMIT 1\"") | getline freq; print($0 "\t" freq); }'
1361 4441#   S.Troisi|1310
 962 9439#   Jacobsen|868
 925 6761#   Jacobsen|913
 786 49999#  Rosnthal|786
 739 4139#   Jacobsen|717
 617 2221#   S.Troisi|537
 579 9973#   Rosnthal|525
 535 10343#  Jacobsen|528
 400 6199#   DStevens|370
 376 6661#   S.Troisi|322
 372 5003#   Jacobsen|344
 351 9629#   RobSmith|269
 350 10709#  RobSmith|334
 276 4409#   Rosnthal|229
 262 23197#  RobSmith|258
 240 5333#   S.Troisi|219
 228 18481#  PierCami|226
 223 20963#  RobSmith|221
 217 8887#   S.Troisi|199
 216 3331#   Jacobsen|186
 211 23189#  RobSmith|209
 206 5557#   S.Troisi|174
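The same tally can be done with Python's stdlib sqlite3 module instead of the shell pipeline. This is a sketch under the schema implied by the command above (a `gaps` table with `startprime` and `discoverer` columns); the simple `\d+#` regex is my stand-in for the sed expression:

```python
import re
import sqlite3
from collections import Counter

def records_per_primorial(con, min_count=200):
    """For each primorial P# appearing in startprime, count its records
    and find the most prolific discoverer (mirrors the shell pipeline)."""
    pat = re.compile(r"(\d+#)")
    counts = Counter()
    for (sp,) in con.execute("SELECT startprime FROM gaps"):
        m = pat.search(sp)
        if m:
            counts[m.group(1)] += 1
    rows = []
    for prim, n in counts.most_common():
        if n <= min_count:
            continue
        who, k = con.execute(
            "SELECT discoverer, count(*) FROM gaps "
            "WHERE startprime LIKE ? GROUP BY 1 ORDER BY 2 DESC LIMIT 1",
            (f"%{prim}%",),
        ).fetchone()
        rows.append((n, prim, who, k))
    return rows
```

Usage would be something like `records_per_primorial(sqlite3.connect("gaps.db"))`, yielding (total, primorial, top discoverer, their count) tuples like the output above.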
2020-12-05, 04:31  #206
"Seth"
Apr 2019
110_{16} Posts 
I created a Colab script to plot the improvements to the records over the last year and turn it into an animation, similar to mersenne.ca

2020-12-05, 10:24  #207
Jun 2003
Oxford, UK
1,933 Posts 
2020-12-13, 13:47  #208
"Seth"
Apr 2019
110_{16} Posts 
Let's do some end-of-year celebration of progress. 🥳🎁📅🎉🎊
I'll update these in late December:
* by-year merit status
* prime gap animation I calculated
* number of updates this year: >12,000!
* largest merit this year (Seth's 38.0479)
* largest gap found this year (Martin's 1898630)
* smallest update to a record (#173, Gapcoin, +0.00025 merit)
* smallest gap updated this year (Rob Smith's 1906, merit 26.7)

I still need to find:
* largest update to a record (probably 122542, 14.5 -> 28.1 merit)
* largest gap with an update
* total merit improved this year

Any other fun statistics people would like me to calculate?

Humblebrag: I submitted 1340 records for 4441# and 930 for 2221#, with average merits of 20.8 and 24.37 respectively.
Code:
$ sqlite3 gaps.db 'SELECT startprime FROM gaps' | sed -n 's!^[^/]*[^0-9/]\([0-9]\+#\).*!\1!p' | sort | uniq -c | sort -nr | awk '$1 > 200 { ("sqlite3 gaps.db \"SELECT discoverer, count(*),round(max(merit),2),round(avg(merit),2) FROM gaps WHERE startprime like \\\"%" $2 "%\\\" GROUP BY 1 ORDER BY 2 DESC LIMIT 1\"") | getline freq; print($0 "\t" freq); }'
count primorial   Who|HowManyTheydid|MaxMerit|AvgMerit
1391 4441#   S.Troisi|1340|28.98|20.82
1002 2221#   S.Troisi|930|33.03|24.37
 962 9439#   Jacobsen|868|24.27|16.32
 923 6761#   Jacobsen|911|25.2|17.67
 786 49999#  Rosnthal|786|25.86|12.62
 736 4139#   Jacobsen|714|27.0|20.05
 579 9973#   Rosnthal|525|25.17|16.41
 535 10343#  Jacobsen|528|25.55|16.37
 400 6199#   DStevens|370|27.27|18.99
 376 6661#   S.Troisi|322|27.31|18.02
 371 5003#   Jacobsen|343|26.42|19.71
 351 9629#   RobSmith|269|26.52|16.87
 276 4409#   Rosnthal|229|27.07|19.95
 228 18481#  PierCami|226|19.12|12.15
2020-12-29, 20:54  #209
"Seth"
Apr 2019
110_{16} Posts 
I updated two tables on the lists site
"Top 20 gaps by merit" is now "Top 20 largest gaps ordered by merit" https://primegaplistproject.github...gapsbymerit/
"Top 20 overall merits" is now "Top 20 overall merits [plus 50th,100th,200th,...,500th]" https://primegaplistproject.github...verallmerits/

I also added a weighted average line to the merit plot on cloudygo, plus min_x and max_x inputs. https://primegaps.cloudygo.com/graphs?max=70000
You can see where the focused efforts from gapcoin & S.Troisi have increased average merit in the neighborhoods of 5000-6000 and 45000-52000.

Last fiddled with by SethTro on 2020-12-29 at 20:54