Odds of prime discussion
I calculated the chance of finding a prime for the Top 20 Conjectures with 1k Remaining by Highest and Lowest Conjectured k for their next 100k test ranges.
R620 has the highest chance, 11.42%. I've started testing it. Of course, the higher the weight and the higher the range, the longer it takes to test.
[QUOTE=TheCount;354471]I calculated the chance of finding prime for Top 20 Conjectures with 1k Remaining by Highest and Lowest Conjectured k for their next 100k test ranges.
R620 has the highest chance, 11.42%. I've started testing it. Of course the higher the weight and higher the range, the longer to test.[/QUOTE] Don't R702 and R916 have a much higher weight? They also have one k remaining at n=100K. 
[QUOTE=gd_barnes;354490]Don't R702 and R916 have a much higher weight? They also have one k remaining at n=100K.[/QUOTE]
They are also higher bases which means larger tests. Maybe that swung the balance. 
[QUOTE=henryzz;354492]They are also higher bases which means larger tests. Maybe that swung the balance.[/QUOTE]
There is very little testing time difference between base 620 and 702. At n=150000, base 620 is ~419,000 digits and base 702 is ~427,000 digits. 
Probability
[QUOTE=gd_barnes;354490]Don't R702 and R916 have a much higher weight? They also have one k remaining at n=100K.[/QUOTE]
R702 and R916 are in the list of "Top 20 Conjectures with 1k Remaining by Highest Weight": [URL]http://www.noprimeleftbehind.net/crus/vstats_new/crustop20.htm#Table25[/URL]

As stated in my post, I only calculated the probabilities based on these tables: "Top 20 Conjectures with 1k Remaining by Highest Conjectured k": [URL]http://www.noprimeleftbehind.net/crus/vstats_new/crustop20.htm#Table61;[/URL] and "Top 20 Conjectures with 1k Remaining by Lowest Conjectured k": [URL]http://www.noprimeleftbehind.net/crus/vstats_new/crustop20.htm#Table62[/URL]. I've only spent a few hours looking at the CRUS website, so I might not be searching optimally yet. This table looks more comprehensive: [URL]http://www.noprimeleftbehind.net/crus/vstats_new/crusunproven.htm[/URL] I plan to start a 2k's search next, so maybe I'll base it on that table.

Anyway, since you brought it up: 32*702^n-1 has weight 2338, and 78*916^n-1 has weight 2313. They are both tested to 100k. If you test R702 to 200k, the chance of finding a prime and so proving the conjecture is 14.12%. If you test R916 to 200k, the chance of finding a prime and so proving the conjecture is 13.43%.

My calculations are based on the Prime Number Theorem. I posted this method on PrimeGrid 6 months ago; no one has told me I am wrong (or right) yet: [URL]http://www.primegrid.com/forum_thread.php?id=5093[/URL] [URL]http://www.primegrid.com/forum_thread.php?id=4935[/URL]

Why don't you add probabilities to the CRUS tables? If you're looking for a result, rely on probability. If you're looking to see how quickly you can test a range, look at difficulty. Probability does not take into account the length of time to test an n or the number of tests to be done in a range, just the chance you'll find a great result.
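For reference, the PNT-based estimate described above can be sketched as follows. This is my reconstruction, not TheCount's actual code: it assumes each candidate k*b^n-1 is prime with probability w/(n·ln b), with the Nash weight scaled by the factor of 1751 mentioned later in the thread, and it reports the integrated expected number of primes E over the range (for small E this is close to the probability 1 - e^-E of at least one prime, and it matches the quoted percentages exactly).

```python
import math

def chance_of_prime(nash_weight, base, n_min, n_max, ref_weight=1751.0):
    """Sketch of the PNT-style estimate for k*b^n-1, n in (n_min, n_max].

    Assumption: each candidate is prime with probability w / (n * ln(base)),
    where w = nash_weight / ref_weight (the 1751 scaling quoted in the
    thread). Integrating that density over the n-range gives the expected
    number of primes, which is what the thread reports as a percentage.
    """
    w = nash_weight / ref_weight
    # Integral of w / (n * ln(base)) dn from n_min to n_max:
    return w * math.log(n_max / n_min) / math.log(base)

# Reproduces the figures quoted above for R702 and R916:
print(round(chance_of_prime(2338, 702, 100_000, 200_000) * 100, 2))  # 14.12
print(round(chance_of_prime(2313, 916, 100_000, 200_000) * 100, 2))  # 13.43
```

That both quoted percentages fall out of this one-line formula suggests this is the shape of the calculation, though the exact weight scaling is an assumption on my part.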
Oh, OK. I'll take a look at those links later today. Your percentages are in line with what I would expect. We have an "odds of prime" spreadsheet with formulas created by one of the math experts here. I'll check them against that.
I think I misunderstood you previously. With this statement: [quote] R620 has the highest chance, 11.42% [/quote] I thought you were looking for the 1k base with the largest chance of finding a prime by n=200K. But with these statements: [quote] If you test R702 to 200k the chance of finding a prime and so proving the conjecture is 14.12%. If you test R916 to 200k the chance of finding a prime and so proving the conjecture is 13.43%. [/quote] You seem to have concluded otherwise. So we are in agreement that R702 and R916 have a better chance of a prime by n=200K.

Regardless, it doesn't matter to us which base you test. I just wanted you to realize that there are bases with a better chance of a prime by n=200K than the one you chose.

Gary
1 Attachment(s)
I looked at your links and I don't know enough math to verify them one way or another.
According to the odds of prime spreadsheet attached, which I created based on formulas given by one of our math experts here (ID axn), here are the chance-of-prime percentages that I came up with for a sieve depth of P=5T, which is how far our 3 files have been sieved:

Base / # tests / % chance [code]
R620  3875  19.5%
R702  4896  23.7%
R916  5216  24.2%
[/code]

The spreadsheet only allows entry of an average n, which is not very accurate when nmax / nmin > ~1.5. So what I did was break it up into 10 miniranges, i.e. n=100K-110K, 110K-120K, etc., get the expected # of primes for each, and add them all up.

I'm not sure why you are showing a lower chance of prime for R916 vs. R702. With bases this high, the difference in base size has little impact on the % chance of finding a prime. For instance, if base R702 had 5216 tests like R916 does, R702 would have a 24.9% chance of prime (vs. 24.2% for R916), so you can see there is not a lot of difference in prime chance when a base is only 30% bigger than another one, all other things being equal.

Edit: If you are using only a Nash weight to compute your chances of prime, that may explain the problem. Nash weight only works off of a sieve to P=511. Obviously a sieve to P=5T is going to be much more accurate. For our "difficulty" stats, our programming guru, Mark (rogue), uses a sieve to P=1M, which is clearly accurate enough for determining such a stat.
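The minirange method described above can be sketched with the standard Mertens-based approximation. I don't have the spreadsheet's actual formulas, so this is an assumption on my part: a candidate surviving a sieve to depth P is prime with probability roughly e^γ · ln(P) / ln(N), where ln(N) ≈ n · ln(base) (the ln(k) term is negligible at this size). With the test counts and P=5T sieve depth quoted above, this lands within about 0.1 percentage point of all three figures:

```python
import math

EULER_GAMMA = 0.5772156649015329

def chance_of_prime(num_tests, base, n_min, n_max, sieve_depth, pieces=10):
    """Chance of at least one prime among sieved candidates k*b^n-1.

    Assumption (not the spreadsheet's exact formula): a candidate that
    survives sieving to depth P is prime with probability about
    e^gamma * ln(P) / ln(N), with ln(N) ~= n * ln(base). Candidates are
    assumed evenly spread across `pieces` miniranges, as in the post above.
    """
    tests_per_piece = num_tests / pieces
    width = (n_max - n_min) / pieces
    expected = 0.0
    for i in range(pieces):
        n_mid = n_min + (i + 0.5) * width  # midpoint n of this minirange
        p = math.exp(EULER_GAMMA) * math.log(sieve_depth) / (n_mid * math.log(base))
        expected += tests_per_piece * p
    return 1.0 - math.exp(-expected)  # Poisson: P(at least one prime)

P = 5e12  # the 5T sieve depth quoted in the post
for name, base, tests in [("R620", 620, 3875), ("R702", 702, 4896), ("R916", 916, 5216)]:
    print(name, round(chance_of_prime(tests, base, 100_000, 200_000, P) * 100, 1))
```

The close agreement with the quoted 19.5% / 23.7% / 24.2% suggests the spreadsheet uses something very like this approximation, but that remains an inference.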
At a quick glance, everything looks correct with your calculations. I will look to extend the odds of prime spreadsheet at some point with your ideas for ranges of n.

[QUOTE=gd_barnes;354538]I'm not sure why you are showing a less chance of prime for R916 vs. R702. With bases this high, the difference in base size has little impact on % chance of finding prime. For instance, if base R702 had 5216 tests like R916 does, R702 would have a 24.9% chance of prime (vs. 24.2% for R916) so you can see there is not a lot of difference in prime chance when a base is only 30% bigger than another one if all other things are equal.
Edit: If you are using only a Nash weight to compute your chances of prime, that may explain the problem. Nash weight only works off of a sieve to P=511. Obviously a sieve to P=5T is going to be much more accurate. On our "difficulty" stats, our programming guru, Mark (rogue), uses a sieve to P=1M, which is very clearly accurate enough for determing such a stat.[/QUOTE]

I agree Nash weight, sieving to P=511, is not as accurate as sieving to a deeper depth to determine your weight. The weight is just multiplied by the rest of my equation, so you can interchange it with any weight you like; you just have to divide that weight by a factor so it scales the same.

The probabilities you show for R620, R702 and R916 are about 1.73× higher than my values. That is significant, and would have shown up by now for sure with all the crunching CRUS has done over the years. That makes me go back to my initial assumptions, and one I may have made in error is that base 2 has the same density of primes as a randomly chosen set of odd numbers of the same magnitude. I divide the Nash weight by 1751 to give w=1, but maybe I should be dividing by some other value, 1751/1.73 = 1012 for instance. I am happy with the rest of the maths. So if I increase the sieve depth (to increase accuracy) and properly scale the weight, my equation should be all good.

From what you've said, the spreadsheet only allows for an average n, so it might only be accurate over small ranges, requiring many calculations by parts to remain accurate. My equation is the result of an integration, so it remains accurate over any range and doesn't require more than one calculation. Plus you can rearrange the equation and solve for other variables, which is really cool.

I'll have to go over your odds of prime spreadsheet and see what's going on there.
I agree that the spreadsheet needs some sort of integration (calculus) so that it is accurate over a wide n-range. I wouldn't begin to claim to know how that might be done, especially in Excel. The spreadsheet was originally designed for a large range of k with millions of candidates over a small range of n. For that it is highly accurate. Even with nmax/nmin = 2, it's not too far off.
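The average-n inaccuracy discussed above can be quantified. The expected-primes integrand falls off like 1/n, so a single calculation at the average n replaces the integral ln(nmax/nmin) with (nmax - nmin)/navg. A quick comparison (my illustration, not taken from the spreadsheet):

```python
import math

def relative_error(ratio):
    """How far low a single average-n estimate falls versus integrating
    1/n over [n_min, n_max], as a function of ratio = n_max / n_min."""
    exact = math.log(ratio)                   # integral of 1/n dn
    midpoint = 2 * (ratio - 1) / (ratio + 1)  # (n_max - n_min) / n_avg
    return 1 - midpoint / exact

for r in (1.2, 1.5, 2.0, 4.0):
    print(f"nmax/nmin = {r}: {relative_error(r) * 100:.1f}% low")
```

At nmax/nmin = 2 the single-point estimate is only about 4% low, consistent with the "not too far off" remark above, but by nmax/nmin = 4 the error exceeds 13%, which is why splitting into miniranges, or integrating as TheCount does, matters over wide ranges.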
The "calculation 1" and "calculation 2" formulas are the key. You may need to contact axn here to ask how he came up with them. There also might be a couple of people here at CRUS who might have insight into how they were derived. 
[QUOTE=gd_barnes;354617]I agree that the spreadsheet needs some sort of integration (calculus) so that it is accurate over a wide n-range. I wouldn't begin to claim to know how that might be done, especially in Excel. The spreadsheet was originally designed for a large range of k with millions of candidates over a small range of n. For that it is highly accurate. Even with nmax/nmin = 2, it's not too far off.[/QUOTE]
Another option is [URL="http://www.mersenneforum.org/showthread.php?p=233457#post233457"]my calcPrimes jar[/URL]. It uses the math behind the spreadsheet, along with simple arithmetic, going through each number in the given input file. It should be highly accurate.