#34 | "David" (Just call me Henry) | Sep 2007 | Cambridge (GMT/BST)
I will have a go at this over the Christmas break (it starts Friday for me). I have some git experience, so I should be able to work that out. Part of my issue will be my lack of Python knowledge, although long term it is probably a language I should learn.
#35 | May 2018
Can we use this code to finally find maximal prime gaps greater than 2^64?
#36 | "David" (Just call me Henry) | Sep 2007 | Cambridge (GMT/BST)
#37 | "David" (Just call me Henry) | Sep 2007 | Cambridge (GMT/BST)
I am having a few issues with predictions not matching reality.

Why is sum(prob_minmerit) being overestimated so much in this case? I am searching m * 9973#/208110 ± x, and min_merit has been set quite high at 20. Does the min_merit I used for the stats step make any difference?

Code:
sum(prob_minmerit): 114.86, 73.7/day    found: 3
sum(prob_record):   58.263, 37.4/day    found: 58
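For a sense of scale here: merit is gap / ln(N), and ln(9973#) is roughly the Chebyshev theta function at 9973, so merit 20 at this size already demands an enormous gap. A rough back-of-envelope check (my own helper names, not project code; the small ln(m) term is ignored):

```python
from math import log

def ln_primorial(n):
    """ln(n#) = sum of ln(p) over primes p <= n, via a simple sieve."""
    is_prime = bytearray([1]) * (n + 1)
    is_prime[0:2] = b"\x00\x00"
    for p in range(2, int(n ** 0.5) + 1):
        if is_prime[p]:
            # mark all multiples of p starting at p*p as composite
            is_prime[p * p :: p] = bytearray(len(is_prime[p * p :: p]))
    return sum(log(p) for p in range(2, n + 1) if is_prime[p])

# ln(N) for N = m * 9973# / 208110, ignoring ln(m) which is tiny by comparison
ln_N = ln_primorial(9973) - log(208110)
gap_needed = 20 * ln_N
print(f"ln(N) ~ {ln_N:.0f}, gap needed for merit 20 ~ {gap_needed:,.0f}")
```

So a merit-20 hit needs a gap near 200,000, which is why sum(prob_minmerit) is so sensitive to how the tail probabilities are modeled.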
#38 | "Seth" | Apr 2019
I am glad to see prob_record closely matching predictions; I'm running into a lot of issues with it not working in my runs, for various real but frustrating-to-fix reasons.

Sorry the graphs are broken: I made an optimization in gap_stats not to record the probabilities of small gaps when sieve_length > 100,000 (see here). This makes gap_stats faster at the cost of breaking this graph. You can disable that code by changing line 1034 to `size_t j = 0`; this will be slower but will always record the probabilities.
#39 | "Seth" | Apr 2019
In the future, when using a very large SL, the graphs will still be truncated, but all the probability for the truncated values is still included, so they will be normalized correctly. See the attached photo.
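The normalization idea can be sketched like this (a toy geometric-style distribution standing in for the real per-gap probabilities, not the project's actual arrays): everything past the plotting cutoff is folded into one final tail bucket, so the truncated histogram still sums to 1.

```python
from math import exp

# Toy distribution over gap sizes: weight decays smoothly with the gap.
N = 300_000
weights = [exp(-g / 20_000.0) for g in range(N)]
total = sum(weights)
probs = [w / total for w in weights]   # normalized: sums to 1

CUTOFF = 100_000
truncated = probs[:CUTOFF]
# Fold all probability past the cutoff into one final "tail" bucket,
# so nothing is lost and the truncated histogram still sums to 1.
truncated.append(sum(probs[CUTOFF:]))

print(f"buckets: {len(truncated)}, total probability: {sum(truncated):.6f}")
```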
#40 | "Seth" | Apr 2019
It took me a couple of days to understand this: you are running with --one-side-skip (or, more precisely, you are running without --no-one-side-skip). This means that 99% of the time you skip finding the gap to next_prime (because the gap to prev_prime is small), which skews the observed gaps to be much larger. I updated the code so it changes the label, which hopefully makes this more apparent, and normalizes by the number of m's tested.

I'm not sure if there's something better I could do, but they should now roughly match, with large gaps being slightly over-represented (because we are finding more than expected).
#41 | "Seth" | Apr 2019
I optimized the handling of the medium-primes section of the code and got a 30-40% improvement! For long searches (m_inc > 1M) this is probably more than a 10% overall speedup, which I'm very excited about!

I also added short flags: `--save` instead of `--save-unknowns` and `-u` instead of `--unknown-filename`. There are a handful of other changes: better combined_sieve time estimation, better plotting (mentioned above), the largest record found with record_check, warnings if sieve_length is unreasonably sized, and a bunch of other things. I'd encourage everyone to `git pull` for the newest version.
#42 | "Seth" | Apr 2019
I made a number of improvements over the last couple of weeks.

Most importantly, `combined_sieve` and `gap_stats` are now multithreaded! (You might need to `sudo apt install libomp-dev`.) I test very long intervals (e.g. m_inc > 10 million), which generally took more than a day to finish and left me juggling multiple combined_sieve / gap_stats / gap_test runs at the same time to keep all the threads on my computer active. Now you can sieve, run stats on, and test an interval with one command:

Code:
./misc/run.sh -t 4 -u 907_210_1_15000_s7554_l1000M.txt

I've also slightly improved `misc/record_check.py` to include the largest and smallest record and the number of unique records.
#43 | Jun 2003 | Oxford, UK
#44 | "Seth" | Apr 2019
Code:
time ./combined_sieve -t $THREADS --save -u "$UNKNOWN_FN"
time ./gap_stats -t $THREADS --save -u "$UNKNOWN_FN"
time ./gap_test.py -t $THREADS -u "$UNKNOWN_FN"

I can add support for `--prp-top-percent` and `--min-merit` this week. It doesn't have any additional resume behavior (`combined_sieve` has none, but if complete it wouldn't rerun; `gap_stats` is generally quite fast and doesn't rerun if already finished; `gap_test.py` caches all its progress and resumes).