n=1M-2M has now been fully sieved to P=180T. A link to a file is now in the first post here.
Let the drive continue! :smile:
870k-900k results attached.
Reserving 1M-1010K.
This effort has grown quite difficult. For n=1M-2M, we can only expect 0.378 primes, which gives us a 31.5% chance of getting at least one. At n=1M, the numbers are about 778,000 digits, or 2.6 million bits; a prime there would rank 48th on the top-5000 list. At n=2M, those figures double to 1.56 million digits and 5.2 million bits, and a prime would rank 20th. If we only have average luck, it could easily take until n=16M (12.5 million digits, nearly as large as M43112609, which would rank 3rd today) before we actually prove this, though we'll probably be down to one k by n=4M or 8M.
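The arithmetic behind those estimates can be sketched in a few lines. This is a back-of-the-envelope reproduction, not part of the original post: the 0.378 expected-primes figure is taken as given from the post rather than recomputed from the k-weights, and the size formulas just use log of the base 6.

```python
import math

# The 0.378 expected primes for n=1M-2M is the post's figure, taken as given.
expected_primes = 0.378
# Assuming primes arrive roughly as a Poisson process, the chance of at
# least one prime in the range is 1 - e^(-expected):
p_at_least_one = 1 - math.exp(-expected_primes)   # ~31.5%

# Size of k*6^n - 1 at a given n (k's few digits are negligible at this scale):
n = 1_000_000
digits = n * math.log10(6)   # ~778,000 decimal digits
bits = n * math.log2(6)      # ~2.59 million bits

print(f"P(>=1 prime) = {p_at_least_one:.1%}")
print(f"n={n}: {digits:,.0f} digits, {bits:,.0f} bits")
```

Doubling n to 2M doubles both the digit and bit counts, which is why the post's n=2M figures are exactly twice the n=1M ones.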
@Mini-Geek
Could you give timings for the candidates in your range, please? :smile:
[QUOTE=Flatlander;248401]@Mini-Geek
Could you give timings for the candidates in your range, please? :smile:[/QUOTE] I haven't actually begun the range (I'm finishing other work), and I measured this at the very lowest candidate, so take these numbers with a grain of salt, but an estimate in PFGW (running it for 1 minute, seeing how many iterations it finished, and dividing appropriately) tells me it should be about [B]246 minutes[/B] per candidate. My CPU is an i5-750 @ ~2.67 GHz. The whole quad should average about one candidate completed per hour, or 23-24 per day. With 169 candidates in my range and four cores, that comes out to ~7.2 days for the range I chose. I probably should have checked the time [I]before[/I] I reserved it, but it's OK. :smile: If I get drastically different times once I start actually running it, I'll repost.

By the way, the final numbers before we prove this are likely even bleaker than I first guessed, because k=1597 is about half the weight of k=36772. My rough estimate for how far we can expect to go before Riesel 6 is proved is now more like n=32M or 64M, which would beat out today's largest known prime by nearly 2-4 times the number of digits. Still, it could happen in 5-20 years if interest doesn't fade too much; just 12 years ago the largest known Mersenne prime had p~=3M, just a little larger than the base 6 n=1M numbers we're testing now. Judging very roughly from that, it's not inconceivable that we could finish this base at n=16M-64M within that sort of time period.
Thanks for that. :smile:
Some of those stats are scary, but at least '169 candidates in your range' is lower than I guessed. Amazing to think that some luck with k=1597 could knock several years off this drive.
[QUOTE=Batalov;244874]Shall we continue with the two-or-bust?
I have some old k=1597 data somewhere (poorly sieved, so there will be many unneeded PRP tests). I then went from 1M to somewhere around ~1.1M, until the running time per test doubled. But that was before the new GW library; now the tests will be faster. I will post those results when I find them.[/QUOTE] And here they are, dated 2009-11-29. Tim, you can skip some of your tests and have some others already double-checked. ...or did you take my message to mean that I only [I]sieved[/I]? Why would I do that?
[QUOTE=Flatlander;248492]Thanks for that. :smile:
Some of those stats are scary but at least '169 candidates in your range' is lower than I guessed. Amazing to think that some luck with k=1597 could knock several [B]years[/B] off this drive.[/QUOTE] Correction: decades. Maybe I should've calculated the numbers before posting in the first place, but anyway, I did now, and this is what I found:

For k=36772, the expected number of primes per doubling of n is about 0.253. This means that at any time you can expect to have to go to an n about 2^(1/0.253) = 15.48 times higher than your current n to get a prime, so right now we'd have to go to n=15.48M. For k=1597, the expected number of primes per doubling is about 0.121, so the corresponding factor is 2^(1/0.121) = 306.54, and right now we'd have to go to n=306.54M. When you expect 1 prime in a range, the chance of getting at least 1 prime is 1-(e^-1), or ~63.212%. Of course, when you consider both k's together we can expect a little over 1 prime before n=16M, but it would probably come from the higher-weight k. If we're lucky with k=1597 and find a prime soon, we can probably finish this without needing to find a prime larger than the current world record.

[QUOTE=Batalov;248503]And here they are, dated 2009-11-29. Tim, you can skip some of your tests and have some others already double-checked. ...or did you take my message as that I only [I]sieved[/I]? Why would I do that?[/QUOTE] I'll skip the previously tested ones (instead of double-checking them). I suppose I didn't read and/or notice your message. 6 of the 37 tests you did in the 1M-1010K range were not in the most current sieve file (I suppose those candidates were factored); the other 31 have been removed from my testing.
Are we anywhere near the point where it would be worth double-checking k=1597?
I mean: when would the probability of finding a missed prime during, say, a month of double-checking all previous k=1597 work be greater than that of finding a new prime during a month of new work? Well, I know what I mean anyway. lol
[QUOTE=Batalov;248503]And here they are, dated 2009-11-29. Tim, you can skip some of your tests and have some others already double-checked.
...or did you take my message as that I only [I]sieved[/I]? Why would I do that?[/QUOTE] I chose to ignore your posting because I did not think you still had the results file, and I figured that it may have been tested with one of the versions of PFGW that had a known bug causing it to miscalculate residues. Now that you have posted it, I see that it didn't cover anywhere near all of k=1597 for n=1M-1.1M, and that you also tested a small range at n>1.28M.

Tim, my preference at this point is to test all of the pairs in the sieve file. Since there were some bugs in PFGW (and I think in LLR) back in 2009, before PFGW version 3.4, I would kindly request a double-check of the few pairs that have already been tested. Thanks! Gary
[QUOTE=gd_barnes;248553]I chose to ignore your posting because I did not think you still had the results file and I figured that it may have been tested with one of the versions of PFGW that had a known bug that would cause it to miscalculate residues. Now that you have posted it, I see that it didn't cover anywhere near all of k=1597 for n=1M-1.1M and that you also tested a small range at n>1.28M.
Tim, my preference at this point is to test all of the pairs in the sieve file. Since there were some bugs in PFGW (and I think with LLR) back in 2009 before PFGW version 3.4, I would kindly request a double-check of the few pairs that have already been tested. Thanks! Gary[/QUOTE] Not to mention that re-doing those pairs will make it rather easier for me to process the results when the range is done (assuming that it's being run through PRPnet). :smile: