#1
"Gary"
May 2007
Overland Park, KS
2²×2,963 Posts
Quote:

It would be very interesting to compute such a ratio for ALL k's, at least in the k=300-1001 range. I'm speculating that high-weight k's will yield a slightly better ratio of primes per candidate tested at the same sieve depth. There's no specific math that I'm aware of that could prove it, or even support a conjecture, so it's only a speculation at this point. My guess is that if there are fewer small factors, which is the case for high-weight vs. low-weight k's, there will also be fewer large factors, i.e. factors above the sieve limit. That would mean a slightly higher chance of a prime per candidate tested at the same sieve depth.

That is part of the point of this entire project: to see if there are "patterns" of primes, or at least patterns of higher-density prime "areas", in the Riesel primes universe. That is why we try to fill in all of the holes and double-check everything.

Gary

Last fiddled with by gd_barnes on 2009-07-29 at 20:15 Reason: admin edit
#2
Account Deleted
"Tim Sorbera"
Aug 2006
San Antonio, TX USA
1000010110111₂ Posts
Quote:

While we're on the subject, does anyone know where that thread is that I posted in a while back about covering sets, and how you know that z is a factor whenever n == x mod y? (or whatever letters you prefer)

Last fiddled with by TimSorbet on 2009-07-24 at 15:29
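For reference, the "z is a factor when n == x mod y" rule comes from the multiplicative order of 2 mod z: y is that order, and x runs over the residues where k*2^x ≡ 1 (mod z). Below is a minimal Python sketch of that mechanism; the choice of k = 509203 (the classic Riesel number) and its covering set is purely illustrative.

```python
# A sketch of the "z divides k*2^n - 1 when n == x (mod y)" mechanism:
# y is the multiplicative order of 2 mod z, and x ranges over the
# residues where k*2^x == 1 (mod z).

def mult_order(a, p):
    """Multiplicative order of a modulo prime p (p must not divide a)."""
    o, t = 1, a % p
    while t != 1:
        t = (t * a) % p
        o += 1
    return o

def factor_residues(k, z):
    """Return (y, xs) such that z | k*2^n - 1  <=>  n == some x in xs (mod y)."""
    y = mult_order(2, z)
    xs = [x for x in range(y) if (k * pow(2, x, z)) % z == 1]
    return y, xs

# k = 509203 is the classic Riesel number; {3, 5, 7, 13, 17, 241} is its
# covering set, so every n lands in at least one of these residue classes.
k = 509203
for z in (3, 5, 7, 13, 17, 241):
    y, xs = factor_residues(k, z)
    print(f"z = {z:3d}: n == {xs} (mod {y})")
```

For that k, the six residue classes together cover every n, which is exactly why k*2^n - 1 is always composite.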
#3 |
"Gary"
May 2007
Overland Park, KS
2²·2,963 Posts
Although I have no proof, I still maintain that fewer small factors can POSSIBLY lead to fewer large factors also. A case in point that you would be quite familiar with: Mersenne forms. Since factors can only be of the form f = 2kn+1, where f must be 1 or 7 mod 8, there are far fewer possible large factors than normal. For that reason, if you did a "regular" sieve to a specific depth on Mersenne forms where n is prime, as you would for any other k-value, you'd find a much higher density of primes per remaining candidate at the same sieve depth.

The same type of situation exists for GFNs, i.e. b^(2^q)+1. Factors have to be of the form 2k*2^q+1, hence these factors are even lower in density than Mersenne factors.

I believe that such situations exist on forms other than just Mersennes and GFNs. To take that statement further, I believe it may occur on all forms, i.e. any k-value for Riesel or Proth primes. We just don't have an easy way to demonstrate that fewer small factors means fewer large factors, because we don't have specific known forms for the factors.

You are correct: the weight of a k is directly related to the number of small factors. It is based on the # of candidates remaining after a sieve to P=1M or 10M or something like that. Karsten may be able to answer that more specifically.

Gary

Last fiddled with by gd_barnes on 2009-07-24 at 20:40
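As a minimal sketch of the restriction just described (nothing here beyond the two classical conditions on Mersenne factors; the exponents and the search bound are only examples):

```python
# Any factor f of 2^p - 1 (p prime) satisfies f = 2*k*p + 1 for some k,
# and f == 1 or 7 (mod 8).  Trial factoring only needs to test the
# candidates that survive both conditions.

def mersenne_trial_factor(p, kmax=10**5):
    """Search for a factor of 2^p - 1 among the admissible candidates."""
    for k in range(1, kmax + 1):
        f = 2 * k * p + 1
        if f % 8 not in (1, 7):    # the "1 or 7 mod 8" restriction
            continue
        if pow(2, p, f) == 1:      # then f divides 2^p - 1
            return f
    return None

print(mersenne_trial_factor(11))   # 23  = 2*1*11 + 1,  23 == 7 (mod 8)
print(mersenne_trial_factor(29))   # 233 = 2*4*29 + 1, 233 == 1 (mod 8)
print(mersenne_trial_factor(31))   # None: 2^31 - 1 is prime
```

For a fixed p, only two of the four residue classes of k mod 4 give f ≡ ±1 (mod 8), so half of the trial candidates are discarded before any powmod is even attempted; that is the "far fewer possible factors" effect in miniature.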
#4
Account Deleted
"Tim Sorbera"
Aug 2006
San Antonio, TX USA
11·389 Posts
Quote:

I don't know of any good way of going about calculating the ratio of expected vs. actual primes in relation to weights. Sounds like a job for Karsten, the data master.

Last fiddled with by TimSorbet on 2009-07-24 at 21:19
#5 |
Mar 2006
Germany
3·7·11·13 Posts
What I can do is calculate the ratio of Nash weight against found primes in relation to the searched range, perhaps like (#primes)*(Nash)/(range). So I get:

k=1: 47*925/26.18 -> ~1660
k=3: 57*2976/5.0 -> ~33926
k=1125: 108*4817/0.5 -> ~1040472

Any other suggestions of how to do this?
#6
I quite division it
"Chris"
Feb 2005
England
31×67 Posts
Quote:

Has k<300 already been computed? Sounds fun, and I'm sure you would enjoy procrastinating, I mean, making a thorough analysis of all that data.

edit: How high would n and p need to be?

Last fiddled with by Flatlander on 2009-07-25 at 11:59
#7
"Gary"
May 2007
Overland Park, KS
2²×2,963 Posts
Quote:

No! There are 3 errors in the formula and one error in the # of primes. 1st, you need to multiply by the range (not divide by it); 2nd, you need to divide by the # of primes (not multiply); and 3rd, you need to take the square root of the range. For the # of primes, you should only use those within the range that has been fully contiguously searched. This leaves: sqrt(range) * Nash / # of primes.

Explanations:
(1) We want a formula that gives a value approximating how many "work units" must be searched to find a prime, hence we must multiply by weight and divide by primes. This should come out reasonably close for all k's of similar weights.
(2) The # of primes scales with the square of the n-range searched, so you must take the square root of the range to normalize the range searched.
(3) k=1 had all 47 known primes included. It should only include the 42 primes up to n=26.18M, the contiguously searched limit.

If done correctly, the lower the ending value, the more prime-dense the k-value is. Using the corrected formula and # of primes, it would be:

k=1: sqrt(26.18)*925/42 = ~112.69
k=3: sqrt(5.0)*2976/57 = ~116.75
k=1125: sqrt(0.5)*4817/108 = ~31.54

Using the new formula, yes, I think this would be a very good way to do it. Keep in mind that a lower "rating value" means a more prime-dense k-value per work unit searched. These ending values are nothing more than "prime density ratings" and cannot really be associated with anything concrete. The # of candidates searched per prime in a sieve file will always increase as the n-range gets higher, assuming a reasonably optimal sieve depth across all n-ranges.

This looks much better and seems to tentatively demonstrate that a very high-weight k outperforms lower-weight k's as for primes found per candidate searched. But I believe there is an exception: Mersenne primes. For as low weight as they are, I think they outperform all other k's for their weight, and even k's up to more than triple their weight, due to their low density of possible factors. In other words, since k=1 has a Nash weight of 925, I think if you average these ratings for all k's with a weight of about 850 to 1000, you will come up with an average rating much higher than you will for k=1. Perhaps an average of more like 150-200 or higher. I believe there are other such k's like this. More research is needed.

Karsten or anyone else, feel free to take a hack at it for all k=1-100 as well as some very low or very high weight k's > 100. That should give us something more definite to chew on.

Gary

Last fiddled with by gd_barnes on 2009-07-27 at 05:44
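A minimal sketch reproducing the three ratings exactly as computed in this post (ranges in millions of n, matching the arithmetic above; the normalization is revised in the next post):

```python
# rating = sqrt(range) * nash_weight / primes_found, with the range
# expressed in millions of n, exactly as in the figures above.

from math import sqrt

data = [  # (k, searched range in millions of n, Nash weight, primes found)
    (1,    26.18,  925,  42),
    (3,     5.0,  2976,  57),
    (1125,  0.5,  4817, 108),
]
for k, rng, nash, primes in data:
    print(f"k={k}: rating = {sqrt(rng) * nash / primes:.2f}")
# k=1: 112.69   k=3: 116.75   k=1125: 31.54
```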
#8 |
"Gary"
May 2007
Overland Park, KS
2²×2,963 Posts
I confused myself and made an error in the previously mentioned formula. The # of primes scales with the logarithm of the n-range, not the square of the n-range. (duh!)
Corrected ratings:

k=1: log(26.18M)*925/42 = 163.37
k=3: log(5M)*2976/57 = 349.76 (likely slightly better than average for the weight; see avg. for 3600-4000 weights below)
k=1125: log(500K)*4817/108 = 254.18 (excellent, but one k is better amongst non-Mersennes k < 1000; see list below)

Mersennes look very good now!

To make sure that high-weight k's of widely varying search depths come in at around the same average (to finally show that this formula is correct), I took all k's < 1000 that had a Nash weight > 3600 to see if they came in at around the same rating as k=3 and others searched to n=1M or higher. Here is the list:

k=45: log(1.3M)*3747/70 = 327.27
k=165: log(1M)*3669/74 = 297.49
k=195: log(920K)*4106/79 = 309.97
k=315: log(940K)*3794/78 = 290.54
k=405: log(700K)*4100/68 = 352.43
k=465: log(700K)*4852/96 = 295.42
k=525: log(700K)*4840/76 = 372.24
k=615: log(700K)*4548/78 = 340.81
k=675: log(700K)*4141/70 = 345.78
k=705: log(700K)*4022/72 = 326.51
k=735: log(700K)*4070/101 = 235.54 (!!!)
k=741: log(700K)*5029/98 = 299.95
k=765: log(700K)*3809/61 = 364.98
k=825: log(700K)*4346/81 = 313.61
k=843: log(700K)*3709/68 = 318.82
k=861: log(700K)*4793/101 = 277.38
k=885: log(700K)*4527/83 = 318.80
k=945: log(700K)*4135/77 = 313.89
k=999: log(700K)*3645/50 = 426.11 (boo!)

(Note: lower rating = more prime-dense k-value.)

Weighted average for Nash weight 3600-4000 (6 k's) = 331.82; Nash weight 4000-4400 (7 k's) = 309.21; weight 4400+ (6 k's) = 314.44. The overall weighted average is 317.72. The 4000-4400 weighted k's were helped by the outstanding k=735.

This isn't a definitive list; I may have missed some. I just quickly scanned Karsten's pages, taking into account higher searched ranges than what he shows. I also did Chris's and now Lennart's k=3045, where Chris had thought it was one of the best k's that he has worked on as for candidates searched per prime. Assuming that Lennart finds no more primes to n=1M, here it is:

k=3045: log(1M)*3822/80 = 286.65

This is an excellent rating for a k with a weight of just 3822. It is the best of all k's above with a weight < 4000 (avg. is 331.82), 3rd best when compared to all k's < 1000, and 4th best if you count k=1125, which has the extreme weight of > 4800! Chris, you were spot on when you said that it was doing very well on primes found per candidate searched!

While k=1125 is good and has the most primes found so far of all k's < 10K, with a weight of 4817 and a rating of 254.18, it has nothing on the amazing k=735, which has a much lower weight of 4070 but a better rating of 235.54. If you assume that higher-weighted k's should have somewhat better ratings (what I am attempting to eventually prove), it's even more remarkable.

As for k=3, I believe its weak rating is due to the fact that it is the lowest weight amongst all of the k's shown above, with the exception of Mersennes. Taken by themselves, these are clearly not a statistically significant sampling (i.e. k=1125 may end up being very average for its weight if it could be easily searched to n=10M or 100M). What we need to do is make a list of k's with low weights, perhaps weights < 500, as well as some medium weights, and see if their collective average rating is much different from those with high weights. We'd need a lot more of the low weights than the above, because the # of primes is so much smaller and so the variability of the ratings will be much greater. We need a larger sampling for statistical significance when the # of occurrences of what we're comparing is much lower.

One thing that it does definitively show: Mersennes rule the world! To make sure that nothing was being skewed by the formula as it relates to the search range, I did another calculation using only Mersennes searched to n=1M. Here it is:

k=1: log(1M)*925/33 = 168.18

This is so far better than anything else that it is clearly statistically significant: the low # of possible factors of Mersennes makes them by far the best choice to search at any specific sieve depth. To demonstrate it further, I checked all k<1000 with weight<1000 to see if any had at least 33 primes at n=1M. There are none! k=121 (weight 965), k=157 (weight 917), and k=541 (weight 954) came close with 28 primes each. The first two are searched to n>=1M, but k=541 still needs to be searched for n=~700K-1M. For some reason, I don't think it will pick up 5 primes in that range. :-) Mersennes are most certainly in a league of their own. Alas, too bad we can't complete a single test in < 15-20 days.

I'd be curious to know if anyone can find a k anywhere with a Nash weight < 1000 that has more than 33 primes at n=1M. Even if one exists, it wouldn't mean it was better than Mersennes, because if you search 100's or 1000's of k's with such a condition, you're likely to find one that got lucky. The key would be: could it continue such that it would have 38 primes at n=10M and 42 primes at n=26.18M? Doubtful.

Gary

Last fiddled with by gd_barnes on 2009-07-29 at 00:11
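For anyone who wants to recompute these, a minimal sketch of the corrected rating (rating = log10(range) * Nash / primes, base-10 log, range in raw n); the data tuples are simply figures quoted in this and the previous posts:

```python
# rating = log10(searched n-range) * nash_weight / primes_found,
# using the corrected log normalization from this post.

from math import log10

data = [  # (k, searched range in n, Nash weight, primes found)
    (1,    26.18e6,  925,  42),
    (3,     5.0e6,  2976,  57),
    (1125,  5.0e5,  4817, 108),
    (735,   7.0e5,  4070, 101),
    (3045,  1.0e6,  3822,  80),
]
for k, rng, nash, primes in data:
    print(f"k={k}: rating = {log10(rng) * nash / primes:.2f}")
# k=1: 163.37  k=3: 349.76  k=1125: 254.18  k=735: 235.54  k=3045: 286.65
```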
#9 |
Account Deleted
"Tim Sorbera"
Aug 2006
San Antonio, TX USA
11×389 Posts
Could you give us a table of the data you've got, and a graph of Nash weight vs. "Barnes prime density", for everything you calculated?
Last fiddled with by TimSorbet on 2009-07-27 at 12:26 |
#10 |
"Gary"
May 2007
Overland Park, KS
2E4C₁₆ Posts
The above is all that I calculated. I just used the Nash weight, search range, and # of primes from Karsten's pages (or a higher known search range). If anyone does any calculations, be sure to use only the # of primes within the contiguously searched range as shown on his pages.
Last fiddled with by gd_barnes on 2009-07-27 at 13:22 |
#11
"Gary"
May 2007
Overland Park, KS
27114₈ Posts
Quote:

I was overly wordy, as usual, in my posting about candidates-to-primes "ratings". Chris, did you happen to see the comment above about k=3045? I thought that it was quite amazing that you had come up with such a statement as the one in your last posting without doing any specific detailed analysis on the k's.

Last fiddled with by gd_barnes on 2009-07-29 at 00:12