#199
Just call me Henry
"David"
Sep 2007
Cambridge (GMT/BST)
2×2,909 Posts
I have just submitted 177 new gaps, of which 23 are new gap lengths (assuming I counted correctly). This might take a while for the queue to clear.
How deep does Danaj's surround_primes function sieve for gaps of length 150000? Also, is there a way of determining what percentage of time this function spends sieving vs. PRP testing? I am wondering where the crossover point is for switching to testing with pfgw.
#200
"Seth"
Apr 2019
2×5³ Posts

Quote:
Code:
import math

def sieve_depth_pg(log2n):
    log2log2n = int(math.ceil(math.log2(log2n)))
    return log2n * (log2n >> 5) * int(log2log2n * 1.5) >> 1

I believe the % of time spent in the sieve is very small (0.1% - 1%).
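As a rough worked example of that formula (my own illustration; the 50,000-bit size is just an assumed value, not one taken from this thread):

Code:
# Hypothetical usage of sieve_depth_pg above, for a start of about 50,000 bits:
#   log2log2n = ceil(log2(50000)) = 16
#   50000 * (50000 >> 5) * int(16 * 1.5) >> 1 = 937,200,000
print(sieve_depth_pg(50000))  # -> 937200000, i.e. sieve to roughly 1e9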
#201
Jun 2003
Oxford, UK
11110000100₂ Posts
For very large gaps I used the sieve_primes routine and then pfgw. Here is a ready-to-use Perl program (change the variables to suit your search); it produces a pfgw-ready file.
Code:
#!/usr/bin/env perl
# Takes a specific A - the best of a range of A already tested for candidates
# of type A*p#/46410 - and sieves deep. 1tn takes about 14 hours, 50m takes 70 minutes.
use warnings;
use strict;
use Math::BigFloat lib => "GMP";
use Math::Prime::Util qw/:all/;
use Math::GMPz;
use feature ':5.10';
use File::Slurp;
use ntheory ":all";

$| = 1;

my $mult     = 1787;
my $prim     = 80021;
my $range    = 1_021_020;
my $sievelim = 1_000_000_000_000;
my $div      = 46410;

my $fact      = Math::GMPz->new("".primorial($prim));
my $filetitle = "ABC"." ".$mult."*".$prim."#"."/".$div."+".q[$a];
my $n         = $mult*$fact/$div;

my @f = Math::Prime::Util::GMP::sieve_primes($n-($range/2), $n+($range/2), $sievelim);
foreach my $item (@f) { $item = $item - $n; }
#say scalar(@f);
#print "@f\n";

my $filename = '1787to1tn.txt';
open(my $fh, '>', $filename) or die "Could not open file '$filename' $!";
print $fh "$filetitle\n";
print $fh join "\n", @f;
close $fh;
print "done\n";
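For anyone who can't run Perl (as comes up below), here is a rough Python sketch of the same idea: sieve a window of offsets around mult*prim#/div and write the survivors in the same ABC format for pfgw. This is my own illustration, not robert44444uk's program - it uses sympy, sieves far shallower than the Perl script's 1e12, and the parameter values are just the 17*40009#/30030 ± 4000 example discussed below.

Code:
# Sketch only: same output format as the Perl program above, but with sympy and
# a small trial-division limit. Values below are assumptions for illustration.
from sympy import primorial, primerange

mult, prim, div = 17, 40009, 30030
half_range      = 4000
sievelim        = 1_000_000          # far shallower than the Perl script's 1e12

n = mult * primorial(prim, nth=False) // div   # prim# / div divides exactly here

# offset a = i - half_range; mark offsets where n + a has a prime factor < sievelim
alive = [True] * (2 * half_range + 1)
for p in primerange(2, sievelim):
    r = n % p
    # n + a == 0 (mod p)  <=>  a == -r (mod p)  <=>  i == half_range - r (mod p)
    for i in range((half_range - r) % p, 2 * half_range + 1, p):
        alive[i] = False

with open("17to40009.txt", "w") as fh:
    fh.write(f"ABC {mult}*{prim}#/{div}+$a\n")
    fh.write("\n".join(str(i - half_range) for i, ok in enumerate(alive) if ok))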
#202
"Seth"
Apr 2019
2×5³ Posts

Quote:
I can't run Perl on my computer right now. Would you mind sharing what the result looks like for 17 * 40009#/30030 ± 4000, along with a rough runtime (X minutes)? That would be much appreciated.
#203
Jun 2003
Oxford, UK
2²×13×37 Posts

Quote:
My strategy looked at a range of 1 million and sieved lightly over many $mult (typically 1 to 20000) to pick the multipliers that left the fewest remaining candidates for pfgw. I had a separate program for that; a rough sketch of that multiplier-ranking idea follows after the sample file below.

Code:
ABC 17*40009#/30030+$a
-4000 -3900 -3840 -3750 -3744 -3640 -3564 -3528 -3328 -3300
-3168 -3150 -3072 -2880 -2808 -2800 -2772 -2640 -2560 -2500
-2464 -2430 -2268 -2184 -2178 -2160 -2112 -2100 -2080 -2058
-2028 -2002 -1980 -1944 -1890 -1872 -1848 -1800 -1792 -1782
-1690 -1540 -1452 -1408 -1404 -1344 -1320 -1260 -1210 -1200
-1188 -1152 -1144 -1134 -1120 -1092 -1014 -1008 -990 -960
-924 -910 -900 -864 -858 -840 -832 -810 -780 -768
-700 -640 -624 -600 -594 -550 -528 -490 -480 -468
-462 -448 -432 -420 -378 -330 -312 -288 -252 -250
-234 -180 -168 -160 -144 -132 -130 -120 -112 -108
-78 -64 -48 -40 -28 -22 -18
18 20 30 36 48 56 60 66 90 108
126 128 156 162 168 192 216 242 288 308
360 396 420 450 480 486 500 540 546 560
648 650 660 702 750 840 882 896 960 990
1008 1080 1100 1152 1176 1200 1248 1250 1320 1352
1386 1430 1452 1500 1512 1560 1728 1782 1820 1890
1980 2058 2100 2156 2160 2178 2268 2288 2310 2352
2430 2450 2520 2592 2640 2688 2730 2880 2912 2916
2940 2970 3150 3168 3200 3276 3300 3402 3510 3528
3696 3780 3822 3840 3872 3888

Last fiddled with by robert44444uk on 2020-12-02 at 08:01
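A minimal sketch of that multiplier-ranking step (my illustration, not the separate program mentioned above): lightly sieve the same window of offsets for each candidate $mult and keep the multipliers with the fewest survivors. It reuses the offset-sieve idea from the Python sketch after #201, and all parameter values here are assumptions.

Code:
# Sketch: rank multipliers by how few offsets survive a light sieve.
# Parameter values are illustrative assumptions only.
from sympy import primorial, primerange

prim, div   = 40009, 30030
half_range  = 4000
light_limit = 10_000                       # "sieved lightly"

K = primorial(prim, nth=False) // div      # prim# / div
small_primes = list(primerange(2, light_limit))

def survivors(mult):
    """Count offsets a in [-half_range, half_range] with no prime factor below light_limit."""
    n = mult * K
    alive = [True] * (2 * half_range + 1)  # index i <-> offset a = i - half_range
    for p in small_primes:
        r = n % p
        for i in range((half_range - r) % p, 2 * half_range + 1, p):
            alive[i] = False
    return sum(alive)

# Scan a band of multipliers and keep the most promising ones for deep sieving / pfgw.
ranked = sorted(range(1, 2001), key=survivors)
print(ranked[:10])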
#204
"Seth"
Apr 2019
2·5³ Posts

Quote:
#205
"Seth"
Apr 2019
2×5³ Posts
Most of my work is on a single K = P#/d (see how these form lines on this graph).
I was curious which primorial had the most records, so I wrote this ugly command.

Code:
$ sqlite3 gaps.db 'SELECT startprime FROM gaps' | sed -n 's!^[^/]*[^0-9/]\([0-9]\+#\).*!\1!p' | sort | uniq -c | sort -nr | awk '$1 > 200 { ("sqlite3 gaps.db \"SELECT discoverer, count(*) FROM gaps WHERE startprime like \\\"%" $2 "%\\\" GROUP BY 1 ORDER BY 2 DESC LIMIT 1\"") | getline freq; print($0 "\t" freq); }'

1361 4441#   S.Troisi|1310
 962 9439#   Jacobsen|868
 925 6761#   Jacobsen|913
 786 49999#  Rosnthal|786
 739 4139#   Jacobsen|717
 617 2221#   S.Troisi|537
 579 9973#   Rosnthal|525
 535 10343#  Jacobsen|528
 400 6199#   DStevens|370
 376 6661#   S.Troisi|322
 372 5003#   Jacobsen|344
 351 9629#   RobSmith|269
 350 10709#  RobSmith|334
 276 4409#   Rosnthal|229
 262 23197#  RobSmith|258
 240 5333#   S.Troisi|219
 228 18481#  PierCami|226
 223 20963#  RobSmith|221
 217 8887#   S.Troisi|199
 216 3331#   Jacobsen|186
 211 23189#  RobSmith|209
 206 5557#   S.Troisi|174
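If a less ugly version is wanted, here is a rough Python equivalent (my sketch, not from the thread). It assumes the gaps.db layout that the one-liner implies (a gaps table with startprime and discoverer columns) and uses a simpler regex, so the counts may differ slightly at the edges.

Code:
# Sketch: group records by primorial and show the top discoverer per primorial,
# roughly mirroring the shell pipeline above. Schema is assumed from that command.
import collections
import re
import sqlite3

db = sqlite3.connect("gaps.db")
totals = collections.Counter()
per_discoverer = collections.defaultdict(collections.Counter)

for startprime, discoverer in db.execute("SELECT startprime, discoverer FROM gaps"):
    m = re.search(r"(\d+#)", startprime or "")
    if m:
        totals[m.group(1)] += 1
        per_discoverer[m.group(1)][discoverer] += 1

for primo, count in totals.most_common():
    if count <= 200:
        break
    who, n = per_discoverer[primo].most_common(1)[0]
    print(f"{count}\t{primo}\t{who}|{n}")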
#206
"Seth"
Apr 2019
2×5³ Posts
I created a Colab script to plot the improvements to the records over the last year and turn it into an animation similar to mersenne.ca.
#207
Jun 2003
Oxford, UK
11110000100₂ Posts

Quote:
#208
"Seth"
Apr 2019
2×5³ Posts
Let's do some end-of-year celebration of progress. 🥳🎁📅🎉🎊

I'll update these in late December:
* by-year merit status
* the prime gap animation I calculated
* Number of updates this year: >12,000!
* Largest merit this year (Seth's 38.0479)
* Largest gap found this year (Martin's 1898630)
* Smallest update to a record (#173, Gapcoin, +0.00025 merit)
* Smallest gap updated this year (Rob Smith's 1906, merit 26.7)

I still need to find:
* Largest update to a record (probably 122542, 14.5 -> 28.1 merit)
* Largest gap with an update
* Total merit improved this year

Any other fun statistics people would like me to calculate?

Humblebrag: I submitted 1340 records for 4441# and 930 for 2221#, with average merits of 20.8 and 24.37 respectively.

Code:
$ sqlite3 gaps.db 'SELECT startprime FROM gaps' | sed -n 's!^[^/]*[^0-9/]\([0-9]\+#\).*!\1!p' | sort | uniq -c | sort -nr | awk '$1 > 200 { ("sqlite3 gaps.db \"SELECT discoverer, count(*),round(max(merit),2),round(avg(merit),2) FROM gaps WHERE startprime like \\\"%" $2 "%\\\" GROUP BY 1 ORDER BY 2 DESC LIMIT 1\"") | getline freq; print($0 "\t" freq); }'

count primorial  Who|HowManyTheydid|MaxMerit|AvgMerit
1391  4441#      S.Troisi|1340|28.98|20.82
1002  2221#      S.Troisi|930|33.03|24.37
 962  9439#      Jacobsen|868|24.27|16.32
 923  6761#      Jacobsen|911|25.2|17.67
 786  49999#     Rosnthal|786|25.86|12.62
 736  4139#      Jacobsen|714|27.0|20.05
 579  9973#      Rosnthal|525|25.17|16.41
 535  10343#     Jacobsen|528|25.55|16.37
 400  6199#      DStevens|370|27.27|18.99
 376  6661#      S.Troisi|322|27.31|18.02
 371  5003#      Jacobsen|343|26.42|19.71
 351  9629#      RobSmith|269|26.52|16.87
 276  4409#      Rosnthal|229|27.07|19.95
 228  18481#     PierCami|226|19.12|12.15
#209
"Seth"
Apr 2019
250₁₀ Posts
I updated two tables on the lists site:

"Top 20 gaps by merit" is now "Top 20 largest gaps ordered by merit": https://primegap-list-project.github...gaps-by-merit/
"Top 20 overall merits" is now "Top 20 overall merits [plus 50th, 100th, 200th, ..., 500th]": https://primegap-list-project.github...verall-merits/

I also added a weighted average line to the merit plot on cloudygo, plus min_x and max_x inputs: https://primegaps.cloudygo.com/graphs?max=70000
You can see where the focused efforts from Gapcoin & S.Troisi have increased the average merit in the neighborhoods of 5000-6000 and 45000-52000.

Last fiddled with by SethTro on 2020-12-29 at 20:54