Status Sierpinski base 63:
All k<=10.2M have been tested to n=1000. Approximately 55,000 k's remain for further testing. I also expect this conjecture to be proven with n<=1,000,000,000 as the highest n-value. My current statistical prediction suggests that approximately 220 k's should remain at n=1,024,000; the prediction for n=25000 is approximately 10,000 k's remaining. On a side note, this conjecture should yield at least 439 megaprimes :smile:

Regards KEP
[quote=KEP;158710]Status sierpinski base 63:
All k<=10.2M have been tested to n=1000. Approximately 55,000 k's remain for further testing. I also expect this conjecture to be proven with n<=1,000,000,000 as the highest n-value. My current statistical prediction suggests that approximately 220 k's should remain at n=1,024,000; the prediction for n=25000 is approximately 10,000 k's remaining. On a side note, this conjecture should yield at least 439 megaprimes :smile: Regards KEP[/quote]
The conjecture is k=37565868, 3.7 times as high as what you have tested, hence there should be nearly 200,000 k's remaining at n=1000 when the entire k-range is tested. So you're telling me that a base as large as 63, even though it is a 2^q-1 base, can drop 95% of its k's remaining between n=1000 and n=25000? This I have to see. Can you show how you came up with your calculations?

Base 31, a very prime base like all 2^q-1 bases, has been dropping ~40-45% of its k's for every doubling of the n-range up to n=10K; 1934 k's now remain. For base 63 to do what you're saying, it would have to continue dropping nearly half of its k's for each doubling of the n-range. That would be most unusual for so high a base to continue up to n=25000, and it would make base 63 even more prime than base 31.

What I've found is that my prior calculations aren't quite accurate. I had assumed a consistent percentage reduction in k's with a set multiplier increase in n-value. That is not quite true. As you go higher, more low-weight k's remain; hence the percentage drops as well. So where you might see a 50% drop in k's remaining from n=10K-20K, you might only see a 45% drop from n=20K-40K and a 40% drop from n=40K-80K. For this reason, I expect that base 63 won't be proven until n>1e12 or much higher.

Gary
[QUOTE=gd_barnes;158803]The conjecture is k=37565868, 3.7 times as high as what you have tested, hence there should be nearly 200,000 k's remaining at n=1000 when the entire k-range is tested. So you're telling me that a base as large as 63, even though it is a 2^q-1 base, can drop 95% of its k's remaining between n=1000 and n=25000?
This I have to see. Can you show how you came up with your calculations? Base 31, a very prime base like all 2^q-1 bases, has been dropping ~40-45% of its k's for every doubling of the n-range up to n=10K; 1934 k's now remain. For base 63 to do what you're saying, it would have to continue dropping nearly half of its k's for each doubling of the n-range. That would be most unusual for so high a base to continue up to n=25000, and it would make base 63 even more prime than base 31. What I've found is that my prior calculations aren't quite accurate. I had assumed a consistent percentage reduction in k's with a set multiplier increase in n-value. That is not quite true. As you go higher, more low-weight k's remain; hence the percentage drops as well. So where you might see a 50% drop in k's remaining from n=10K-20K, you might only see a 45% drop from n=20K-40K and a 40% drop from n=40K-80K. For this reason, I expect that base 63 won't be proven until n>1e12 or much higher. Gary[/QUOTE]
Well, the 55,000 k's remaining I arrived at using a spreadsheet where I removed every k with k mod 31 = 30. This "operation" removed ~3/4 of all candidates showing as remaining at n<=1000 in the NOprimes.out file.

My prediction for n<=1,000,000,000 I came up with using a 50% reduction, since I remember you suggested that as a reasonable reduction for the base 3 conjecture. I might have been wrong on this estimate; however, only the future will really tell. For my n<=1,000,000,000 prediction I used an estimate of 10,000 k's remaining at n=25K and 225,000 k's remaining at n<=1K. Both calculations had <1 k remaining for n<=1,000,000,000. Here are the top n's for each calculation:

225K k's remaining: n<=943,372,658
10K k's remaining: n<=737,009,889

I guess you may be right, Gary, that the reduction will not stay as steady as 50%, so we'll just have to leave it to the future to prove this conjecture.
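The extrapolation KEP describes above — sieve out the trivially composite k's, then assume a fixed fractional drop in k's remaining for each doubling of n — can be sketched in a few lines. This is a toy model with assumed inputs, not KEP's actual spreadsheet; note that a constant 50% drop per doubling from 225,000 k's at n=1000 lands near n=2.25e8, well below the quoted n<=943,372,658, so the spreadsheet evidently applied the reduction on a different schedule.

```python
import math

def sieve_trivial(ks):
    """Drop k's with k = 30 (mod 31): since 63 = 1 (mod 31), every term
    k*63^n + 1 = k + 1 = 0 (mod 31), so such k can never yield a prime."""
    return [k for k in ks if k % 31 != 30]

def proof_n_estimate(k_remaining, n_start, reduction=0.5):
    """Toy extrapolation: assume a fixed fractional reduction in k's per
    doubling of n, and return the n at which fewer than 1 k would remain."""
    doublings = math.log(k_remaining) / -math.log(1.0 - reduction)
    return n_start * 2.0 ** doublings

# 225,000 k's at n=1000 with a 50% drop per doubling -> ~2.25e8
print(f"{proof_n_estimate(225_000, 1000):.3g}")
```

With a 50% reduction, the number of doublings needed is just log2 of the starting count, which is why the answer comes out to exactly n_start times the k count.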
On a side note, Riesel base 3 k=3677878 was on my dual core, testing around n=343K as of yesterday. I had hoped to gain some progress and some speed using Proth, but sadly Proth version 0.65 took 10% longer to do a single test compared to LLR, so no gain using Proth. I just really hope that Proth can be faster than LLR once we start testing Sierp base 63 higher than n<=25K, or whenever I decide to abandon this conjecture :smile:

Regards KEP
[quote=KEP;158824]On a side note, Riesel base 3 k=3677878 was on my dual core, testing around n=343K as of yesterday. I had hoped to gain some progress and some speed using Proth, but sadly Proth version 0.65 took 10% longer to do a single test compared to LLR, so no gain using Proth. I just really hope that Proth can be faster than LLR once we start testing Sierp base 63 higher than n<=25K, or whenever I decide to abandon this conjecture :smile:[/quote]
Hmm...were you using *Phrot* or *Proth* for your tests? Because whereas Phrot is usually faster than LLR, the much older Proth.exe program is waaaaaaay slower. :smile:
[QUOTE=mdettweiler;158833]Hmm...were you using *Phrot* or *Proth* for your tests? Because whereas Phrot is usually faster than LLR, the much older Proth.exe program is waaaaaaay slower. :smile:[/QUOTE]
I was using Phrot version 0.65, published at Geoff's site. It was actually taking ~2700 seconds per test compared to LLR's ~2500 seconds per test, so a loss of speed of approximately (give or take) 10% per test for k=3677878 for Riesel base 3. I suspect it may have something to do with the FFT boundaries, since the k-value was transformed to several trillion (if my memory doesn't play a trick on me) :smile:

On the good side, only one test was carried out using Phrot version 0.65 before switching back to LLR version 3.7.1, so only a small amount of time was wasted. Also, I actually obtained an LLR residual, so the tests weren't a complete waste after all.

Regards KEP
[quote=KEP;158847]I was using Phrot version 0.65, published at Geoff's site. It was actually taking ~2700 seconds per test compared to LLR's ~2500 seconds per test, so a loss of speed of approximately (give or take) 10% per test for k=3677878 for Riesel base 3. I suspect it may have something to do with the FFT boundaries, since the k-value was transformed to several trillion (if my memory doesn't play a trick on me) :smile:
On the good side, only one test was carried out using Phrot version 0.65 before switching back to LLR version 3.7.1, so only a small amount of time was wasted. Also, I actually obtained an LLR residual, so the tests weren't a complete waste after all. Regards KEP[/quote]
Hmm...that is weird. I'd suggest posting about it in the [url=http://www.mersenneforum.org/showthread.php?t=11264]Phrot Announcements[/url] thread, so Rogue can take a look at it and see what may be causing this slowdown.
[QUOTE=mdettweiler;158848]Hmm...that is weird. I'd suggest posting about it in the [url=http://www.mersenneforum.org/showthread.php?t=11264]Phrot Announcements[/url] thread, so Rogue can take a look at it and see what may be causing this slowdown.[/QUOTE]
It might have something to do with Geoff's build. If you can give me a couple of examples of where LLR is faster, I can investigate.
I have experienced the same thing. For my current "squares" test of both Riesel and Sierp base 3, I am using LLR for that reason.
I have found Phrot to be slower for base 3 but faster for all the other bases I have tested that are not powers of 2. Kenneth, there is a BIG difference between Proth and Phrot!

Gary
[quote=KEP;158824]Well the 55000 k's remaining I came up to using a spreadsheet where I removed every k mod 31 = 30. This "operation" removed ~3/4 of all candidates showing as remaining at n<=1,000 in the NOprimes.out file.
My prediction for n<=1,000,000,000 I came up with using a 50% reduction, since I remember you suggested that as a reasonable reduction for the base 3 conjecture. I might have been wrong on this estimate; however, only the future will really tell. For my n<=1,000,000,000 prediction I used an estimate of 10,000 k's remaining at n=25K and 225,000 k's remaining at n<=1K. Both calculations had <1 k remaining for n<=1,000,000,000. Here are the top n's for each calculation: 225K k's remaining: n<=943,372,658; 10K k's remaining: n<=737,009,889. I guess you may be right, Gary, that the reduction will not stay as steady as 50%, so we'll just have to leave it to the future to prove this conjecture. On a side note, Riesel base 3 k=3677878 was on my dual core, testing around n=343K as of yesterday. I had hoped to gain some progress and some speed using Proth, but sadly Proth version 0.65 took 10% longer to do a single test compared to LLR, so no gain using Proth. I just really hope that Proth can be faster than LLR once we start testing Sierp base 63 higher than n<=25K, or whenever I decide to abandon this conjecture :smile: Regards KEP[/quote]
Base 63 is not base 3. The 50% reduction method was ONLY for base 3. Each base has its own percentage reduction. (Actually, for base 3 it's closer to 60% for n=25K-100K, but I used 50% to make the example easy.) You need to use the specific reduction that is applicable to your base. For most higher bases it's closer to 20%, although for a 2^q-1 base like 63, it might be 40-50%. I can tell you that for your base 255, which I am just now completing to n=5000, it is slightly under 20%, and that is very high for such a huge base! Many bases > 200 will be 10% or less.

Here's what you need to do:

1. Look at the # of k's remaining at n=500 after removing all k==30 mod 31.
2. Look at the # of k's remaining at n=1000 after removing all k==30 mod 31.

See what the percentage reduction in k's is there, and then you'll have a reasonable estimate for base 63.

BTW, the drop in the percentage reduction is quite small, especially at the low n-ranges. For instance, if you find that you drop 40% of remaining k's for n=500-1000, you might drop 39% of remaining k's for n=1000-2000, 38% for n=2000-4000, etc. It's only as you get down to very few k's remaining that the percentage drops greatly upon each doubling of the n-range. That's why finding primes for the last 1-2 k's is frequently so difficult: they are often much lower weight than any of the rest. For base 63, we might have 2 k's remaining at n=10^12 but not find a prime for them for 3 more powers of 10, up to n=10^15. (Highly possible; not that we're likely to ever know. lol)

Gary
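Gary's refinement — a per-doubling reduction that itself shrinks as low-weight k's come to dominate — can be sketched as a short simulation. The one-percentage-point decay per doubling is an assumption taken from the 40%/39%/38% example above, not a fitted value; with a 40% starting reduction and ~200,000 k's at n=1000, the model's reduction bottoms out at zero after 40 doublings (n around 1e15) with roughly a dozen k's still unresolved, which illustrates why the last few k's are so hard rather than forecasting anything.

```python
def simulate_decaying_reduction(k_remaining, n, reduction_pct, decay_pct=1):
    """Shrink the k count each doubling of n, while the per-doubling
    reduction percentage itself falls by decay_pct per doubling.
    Stops when < 1 k remains or the reduction reaches zero."""
    while k_remaining >= 1 and reduction_pct > 0:
        k_remaining *= (100 - reduction_pct) / 100
        n *= 2
        reduction_pct -= decay_pct
    return n, k_remaining

# Assumed starting point: ~200,000 k's at n=1000, 40% drop per doubling
n, k = simulate_decaying_reduction(200_000, 1000, 40)
print(f"n = {n:.3g}, k's still remaining = {k:.1f}")
```

Setting decay_pct=0 recovers the constant-reduction model, which (at 50%) finishes in just 18 doublings; the decaying model never finishes at all, so the truth presumably lies somewhere between the two.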
Kenneth,
I ran a quickie test on Sierp base 63 for k<2000 up to n=3200. A good percentage reduction in k's remaining for every doubling of the n-value on base 63 is ~37%. With base 3 at ~60% and base 31 at ~48-50%, that is about what I would expect for base 63, another very prime 2^q-1 base.

Gary
@Gary:
Wow, that was a lot of very insightful answers. In a couple of months I'll try to find a way to put the number of candidates removed and remaining into a spreadsheet for the following n-ranges:

n=1
n=2
n=3 to 4
n=5 to 8
n=9 to 16
n=17 to 32
n=33 to 64
n=65 to 128
n=129 to 256
n=257 to 512

By finding out how many candidates are removed and how many remain for each doubling in n, eventually a more accurate prediction can be projected for Sierp base 63. However, as you state, none of us is likely to ever know whether our prediction for the highest completion n is estimated correctly. But it will be a couple of months before I can make this kind of prediction. Now I'll go dig up the n-value for base 3 for Rogue, so he can start his investigation :smile:

Kenneth!
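The doubling-range tally described above could also be scripted rather than done by hand in a spreadsheet. The helper below is hypothetical (its input is simply a list of the n-values at which each k got its first prime); bucket 0 is n=1 and bucket i covers 2^(i-1) < n <= 2^i, matching the ranges n=1, n=2, n=3-4, n=5-8, and so on.

```python
from collections import Counter

def tally_doubling_buckets(prime_ns):
    """Count first-prime n-values per doubling range.
    (n - 1).bit_length() maps n in (2^(i-1), 2^i] to bucket i."""
    buckets = Counter()
    for n in prime_ns:
        buckets[0 if n <= 1 else (n - 1).bit_length()] += 1
    return buckets

# e.g. primes first found at n = 1, 2, 3, 6, 7, 20, 300
print(sorted(tally_doubling_buckets([1, 2, 3, 6, 7, 20, 300]).items()))
# -> [(0, 1), (1, 1), (2, 1), (3, 2), (5, 1), (9, 1)]
```

The bucket counts give the per-doubling removal rates directly, which is exactly what the prediction method discussed earlier in the thread needs.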