Old 2009-07-24, 01:00   #1
gd_barnes
 
 
May 2007
Kansas; USA

7·1,453 Posts
Prime k-value density rating and factorizations

Quote:
Originally Posted by Flatlander View Post
IIRC, I found that k=3045 gave the best primes-to-candidates ratio of all the k's I tested over a couple of years. No nasty huge gaps either.

I guess 2 more below n=1M.
Admin edit: This post begins an interesting discussion about prime density ratings for k-values that started in another thread and has now been moved here. Such ratings attempt to determine whether there are specific k's, or groups of similarly weighted k's, that give a higher ratio of primes to candidates remaining than others. It is virtually a proven fact that k=1 (the Mersenne form) gives the highest density of primes per candidate remaining at a specific sieve depth, due to the specific form that its factors must have.

It would be very interesting to compute such a ratio for ALL k's, at least in the k=300-1001 range. I'm speculating that high-weight k's will yield a slightly better ratio of primes per candidate tested for the same sieve depth. There's no specific math that I'm aware of that could prove it or even support a conjecture, so it's only speculation at this point.

I'm guessing that to be the case because if there are fewer small factors, which is the case for high-weight vs. low-weight k's, then I'm also guessing there will be fewer large factors, i.e. factors above the sieve limit. This means there should be a slightly higher chance of a prime per candidate tested at the same sieve depth.

That is part of the point of this entire project: to see if there are "patterns" of primes, or at least patterns of higher-density prime "areas", in the Riesel primes universe. Hence we try to fill in all of the holes and double-check everything.


Gary

Last fiddled with by gd_barnes on 2009-07-29 at 20:15 Reason: admin edit
Old 2009-07-24, 15:28   #2
Mini-Geek
Account Deleted
 
 
"Tim Sorbera"
Aug 2006
San Antonio, TX USA

4267₁₀ Posts

Quote:
Originally Posted by gd_barnes View Post
It would be very interesting to compute such a ratio for ALL k's, at least in the k=300-1001 range. I'm speculating that high-weight k's will yield a slightly better ratio of primes per candidate tested for the same sieve depth. There's no specific math that I'm aware of that could prove it or even support a conjecture, so it's only speculation at this point.

I'm guessing that to be the case because if there are fewer small factors, which is the case for high-weight vs. low-weight k's, then I'm also guessing there will be fewer large factors, i.e. factors above the sieve limit. This means there should be a slightly higher chance of a prime per candidate tested at the same sieve depth.

That is part of the point of this entire project: to see if there are "patterns" of primes, or at least patterns of higher-density prime "areas", in the Riesel primes universe. Hence we try to fill in all of the holes and double-check everything.


Gary
If weights are due to the partial covering sets of k's (e.g. many k's have all even or all odd n's removed), then that should have no bearing on whether there are larger factors. Is that what "causes" weights, or is there some other reason? If we ignore the small factors that can be shown to always be present for n==x mod y, are there further differences based on the weights?
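
Here's a rough Python sketch of what I mean by a partial covering set; the k and the prime list below are just arbitrary examples for illustration, not anything official:

Code:
# For a given k, find residue classes of n where a small prime always
# divides k*2^n - 1 (a "partial covering set" that lowers the k's weight).
def forced_factors(k, primes=(3, 5, 7, 13)):
    hits = {}
    for p in primes:
        # n-values hit by p repeat with period ord_p(2)
        period = 1
        while pow(2, period, p) != 1:
            period += 1
        for r in range(period):
            if (k * pow(2, r, p) - 1) % p == 0:
                hits.setdefault(p, []).append((r, period))
    return hits

# Example: forced_factors(5) reports (among others) 3: [(1, 2)], i.e.
# 3 divides 5*2^n - 1 for every odd n, so half of the candidates are
# gone before sieving even starts.
print(forced_factors(5))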

While we're on the subject, does anyone know where that thread is that I posted in a while back about covering sets and how you know that z is a factor when n==x mod y? (Or whatever letters you prefer.) IIRC it was in the CRUS forum and began right after someone figured out a covering set for some numbers.

Last fiddled with by Mini-Geek on 2009-07-24 at 15:29
Old 2009-07-24, 20:38   #3
gd_barnes
 
 
May 2007
Kansas; USA

7·1,453 Posts

Although I have no proof, I still maintain that fewer small factors can POSSIBLY lead to fewer large factors also. Case in point that you would be quite familiar with: Mersenne forms. Since factors can only be of the form f = 2kn+1 (n being the exponent), where f must be 1 or 7 mod 8, there are far fewer possible large factors than normal. For that reason, if you did a "regular" sieve on Mersenne forms with prime n to a specific depth, just as you would for any other k-value, you'd find a much higher density of primes per remaining candidate at the same sieve depth.
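
As a quick sanity check on that factor form, here's a throwaway Python sketch (small exponents only; it assumes sympy is available for the factoring):

Code:
from sympy import factorint, primerange

# Check that every prime factor q of 2^p - 1 (p prime) satisfies
# q = 2*k*p + 1 for some k, and q = 1 or 7 (mod 8).
for p in primerange(3, 60):
    for q in factorint(2**p - 1):
        assert q % (2 * p) == 1, (p, q)
        assert q % 8 in (1, 7), (p, q)
print("factor-form restrictions hold for all exponents checked")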

The same type of situation exists for GFN's, i.e. b^(2^q)+1. Factors have to be of the form 2k*2^q+1. Hence these factors are even lower in density than Mersenne factors.

I believe that such situations exist on forms other than just Mersenne's and GFN's. To take that statement further, I believe it may occur on all forms, i.e. any k-value for Riesel or Proth primes. We just don't have an easy way to demonstrate that fewer small factors means fewer large factors, because we don't have specific known forms for the factors.

You are correct: the weight of a k is directly related to the number of small factors. It is based on the # of candidates remaining after a sieve to P=1M or 10M or something like that. Karsten may be able to answer that more specifically.
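
To illustrate the idea, here is only a rough Python sketch of a Nash-style weight; the n-window and prime limit below are placeholders, not necessarily the exact ones used for the published weights:

Code:
from sympy import primerange

# Count candidates k*2^n - 1 that survive trial factoring by small primes
# over a fixed window of n.  More survivors = heavier k.
def nash_like_weight(k, n_lo=100001, n_hi=110000, prime_limit=256):
    small_primes = list(primerange(3, prime_limit))
    survivors = 0
    for n in range(n_lo, n_hi + 1):
        if all((k * pow(2, n, p) - 1) % p for p in small_primes):
            survivors += 1
    return survivors

# e.g. compare a low-weight and a high-weight k:
# print(nash_like_weight(3), nash_like_weight(1125))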


Gary

Last fiddled with by gd_barnes on 2009-07-24 at 20:40
Old 2009-07-24, 21:17   #4
Mini-Geek
Account Deleted
 
 
"Tim Sorbera"
Aug 2006
San Antonio, TX USA

17×251 Posts

Quote:
Originally Posted by gd_barnes View Post
Although I have no proof, I still maintain that fewer small factors can POSSIBLY lead to fewer large factors also. Case in point that you would be quite familiar with: Mersenne forms. Since factors can only be of the form f = 2kn+1 (n being the exponent), where f must be 1 or 7 mod 8, there are far fewer possible large factors than normal. For that reason, if you did a "regular" sieve on Mersenne forms with prime n to a specific depth, just as you would for any other k-value, you'd find a much higher density of primes per remaining candidate at the same sieve depth.
A question critical to your reasoning: is the ratio of Mersenne primes to Mersenne numbers 2^n-1 (note: not 2^p-1; I think if we exclude composite n we're going to throw the numbers way off) higher than the ratio of Riesel primes to Riesel numbers k*2^n-1, or not? I don't know the answer to this. If the ratio is about the same, and not drastically higher, then the smaller number of possible factors says nothing about the presence of a factor.

I don't know of any good way to go about calculating the ratio of expected vs. actual primes in relation to weights. Sounds like a job for Karsten, the data master.

Last fiddled with by Mini-Geek on 2009-07-24 at 21:19
Old 2009-07-24, 23:49   #5
kar_bon
 
 
Mar 2006
Germany

2×3×11×43 Posts

What I can do is calculate the ratio of Nash weight to found primes in relation to the searched range.

Perhaps something like (#primes)*(nash)/(range).

So I get:
k=1: 47*925/26.18 -> ~1660
k=3: 57*2976/5.0 -> ~33926
k=1125: 108*4817/0.5 -> ~1040472

Any other suggestions on how to do this?
Old 2009-07-25, 11:58   #6
Flatlander
I quite division it
 
 
"Chris"
Feb 2005
England

2077₁₀ Posts

Quote:
Originally Posted by gd_barnes View Post
It would be very interesting to compute such a ratio for ALL k's, at least in the k=300-1001 range.
Gary
Is this something that is worth our doing? (A separate thread with reservations and an agreed sieve limit.)
Has k<300 already been computed?

Sounds fun and I'm sure you would enjoy procrastinating, I mean, making a thorough analysis of all that data.

edit: How high would n and p need to be?

Last fiddled with by Flatlander on 2009-07-25 at 11:59
Old 2009-07-27, 05:33   #7
gd_barnes
 
 
May 2007
Kansas; USA

23673₈ Posts

Quote:
Originally Posted by kar_bon View Post
What I can do is calculate the ratio of Nash weight to found primes in relation to the searched range.

Perhaps something like (#primes)*(nash)/(range).

So I get:
k=1: 47*925/26.18 -> ~1660
k=3: 57*2976/5.0 -> ~33926
k=1125: 108*4817/0.5 -> ~1040472

Any other suggestions on how to do this?

No! There are 3 errors in the formula and one error in the # of primes. 1st, you need to multiply by the range; 2nd, you need to divide by the # of primes; and 3rd, you need to take the square root of the range. For the # of primes, you should only use those within the range that has been fully contiguously searched. This leaves:

sqrt(range) * nash / (# of primes)

Explanations: (1) We want a formula that approximates how many "work units" must be searched to find a prime, hence we must multiply by weight and divide by primes. This should come out reasonably close for all k's of similar weights. (2) The # of primes scales with the square of the n-range searched, so you must take the square root of the range to normalize for the range searched. (3) k=1 had all of its primes included; it should only include primes up to n=26.18M. If done correctly, the lower the ending value, the more prime-dense the k-value is.

Using the corrected formula and # of primes, it would be:

k=1: sqrt(26.18)*925/42 = ~112.69
k=3: sqrt(5.0)*2976/57 = ~116.75
k=1125: sqrt(0.5)*4817/108 = ~31.54

Using the new formula, yes, I think this would be a very good way to do it. Keep in mind that a lower "rating value" means a more prime-dense k-value per work unit searched. These ending values are nothing more than "prime density ratings" and cannot really be associated with anything. The # of candidates searched per prime in a sieve file will always increase as the n-range gets higher, assuming a reasonably optimal sieve depth across all n-ranges.

This looks much better and seems to tentatively demonstrate that a very high-weight k outperforms lower-weight k's in primes found per candidate searched. But I believe there is an exception: Mersenne primes. For as low a weight as they have, I think they outperform all other k's of their weight, and even k's up to more than triple their weight, due to their low density of possible factors. In other words, since k=1 has a Nash weight of 925, I think if you average these ratings for all k's with a weight of about 850 to 1000, you will come up with an average rating much higher than for k=1. Perhaps an average of more like 150-200 or higher. I believe there are other such k's like this.

More research is needed. Karsten or anyone else, feel free to take a hack at it for all k=1-100 as well as some very low or very high weight k's > 100. That should give us something more definite to chew on.


Gary

Last fiddled with by gd_barnes on 2009-07-27 at 05:44
Old 2009-07-27, 07:29   #8
gd_barnes
 
 
May 2007
Kansas; USA

27BB₁₆ Posts

I confused myself and made an error in the previously mentioned formula. The # of primes scales with the logarithm of the n-range, not the square of the n-range (duh!). Because of that, in order to avoid negative ratings, we need to use the actual n-range searched instead of the n-range/1M. Correcting for that error, we have:

k=1: log(26.18M)*925/42 = 163.37
k=3: log(5M)*2976/57 = 349.76 (likely slightly better than average for the weight; see avg. for 3600-4000 weights below)
k=1125: log(500K)*4817/108 = 254.18 (excellent but one is better amongst non-Mersenne's k< 1000; see list below)
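
For anyone who wants to run these numbers themselves, the corrected rating is simple to compute. A Python sketch follows; the log is base 10, and the inputs (contiguously searched n-range, Nash weight, primes found within that range) come straight from the figures above:

Code:
import math

# Lower rating = more prime-dense k per unit of work searched.
def density_rating(n_range, nash_weight, primes_found):
    return math.log10(n_range) * nash_weight / primes_found

print(round(density_rating(26.18e6, 925, 42), 2))   # k=1    -> 163.37
print(round(density_rating(5e6, 2976, 57), 2))      # k=3    -> 349.76
print(round(density_rating(5e5, 4817, 108), 2))     # k=1125 -> 254.18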

Mersenne's look very good now! To make sure that high-weight k's of widely varying search depths come in at around the same average (to finally show that this formula is correct), I took all k's < 1000 that had a Nash weight of > 3600 to see if they came in at around the same rating as k=3 and others searched to n=1M or higher. Here is a list:

k=45: log(1.3M)*3747/70 = 327.27
k=165: log(1M)*3669/74 = 297.49
k=195: log(920K)*4106/79 = 309.97
k=315: log(940K)*3794/78 = 290.54
k=405: log(700K)*4100/68 = 352.43
k=465: log(700K)*4852/96 = 295.42
k=525: log(700K)*4840/76 = 372.24
k=615: log(700K)*4548/78 = 340.81
k=675: log(700K)*4141/70 = 345.78
k=705: log(700K)*4022/72 = 326.51
k=735: log(700K)*4070/101 = 235.54 (!!!)
k=741: log(700K)*5029/98 = 299.95
k=765: log(700K)*3809/61 = 364.98
k=825: log(700K)*4346/81 = 313.61
k=843: log(700K)*3709/68 = 318.82
k=861: log(700K)*4793/101 = 277.38
k=885: log(700K)*4527/83 = 318.80
k=945: log(700K)*4135/77 = 313.89
k=999: log(700K)*3645/50 = 426.11 (boo!)

(Note: Lower rating = more prime dense k-value.)

Weighted average for Nash weight 3600-4000 (6 k's) = 331.82; Nash weight 4000-4400 (7 k's) = 309.21; weight 4400+ (6 k's) = 314.44. Overall weighted average is 317.72. The 4000-4400 weighted k's were helped by the outstanding k=735.

This isn't a definitive list. I may have missed some. I just quickly scanned Karsten's pages. I did take into account higher searched ranges than what he shows.

I also did Chris's, and now Lennart's, k=3045, which Chris had thought was one of the best k's that he has worked on in terms of candidates searched per prime. Assuming that Lennart finds no more primes to n=1M, here it is:

k=3045: log(1M)*3822/80 = 286.65

This is an excellent rating for a k with a weight of just 3822. It is the best of all the k's above with a weight < 4000 (avg. is 331.82), 3rd best when compared to all k's < 1000, and 4th best if you count k=1125, which has the extreme weight > 4800! Chris, you were spot on when you said that it was doing very well on primes found per candidate searched!

While k=1125 is good and has the most primes found so far of all k's < 10K, with a weight of 4817 and a rating of 254.18, it has nothing on the amazing k=735, which has a much lower weight of 4070 but a better rating of 235.54. If you assume that higher-weighted k's should have somewhat better ratings (which is what I am attempting to eventually prove), it's even more remarkable. As for k=3, I believe its weak rating is due to the fact that it has the lowest weight amongst all of the k's shown above, with the exception of Mersenne's.

These are clearly not statistically significant samplings taken by themselves (i.e. k=1125 may end up being very average for its weight if it could be easily searched to n=10M or 100M). What we need to do is make a list of k's with low weights, perhaps weights < 500, as well as some medium weights, and see if their collective average rating is much different from those with high weights. We'd need a lot more of the low weights than the above because the # of primes is so much smaller, and so the variability of the ratings will be much greater. We need a larger sampling for statistical significance when the # of occurrences of what we're comparing is much lower.

One thing that it does definitively show: Mersenne's rule the world! To make sure that nothing was being skewed by the formula as it relates to the search range, I did another calculation only using Mersenne's searched to n=1M. Here it is:

k=1: log(1M)*925/33 = 168.18

This is so far better than anything else that it is clearly statistically significant: the low # of possible factors of Mersenne's causes them to be by far the best choice to search at any specific sieve depth. To demonstrate it further, I checked all k<1000 with weight<1000 to see if any had at least 33 primes at n=1M. There are none! k=121/weight 965, k=157/weight 917, and k=541/weight 954 came close with 28 primes. The 1st 2 are at n>=1M but k=541 still needs to be searched for n=~700K-1M. For some reason, I don't think it will pick up 5 primes in that range. :-) Mersenne's are most certainly in a league of their own. Alas, too bad we can't complete a single test in < 15-20 days.

I'd be curious to know if anyone can find a k anywhere with a Nash weight < 1000 that has more than 33 primes at n=1M. If they did, it wouldn't mean it was better than Mersenne's, because if you search 100's or 1000's of k's with such a condition, you're likely to find one that got lucky. The key would be: could it continue such that it would have 38 primes at n=10M and 42 primes at n=26.18M? Doubtful.


Gary

Last fiddled with by gd_barnes on 2009-07-29 at 00:11
Old 2009-07-27, 12:26   #9
Mini-Geek
Account Deleted
 
 
"Tim Sorbera"
Aug 2006
San Antonio, TX USA

17·251 Posts

Could you give us a table of the data you've got and a graph of Nash weight vs. "Barnes prime density" for all you calculated?

Last fiddled with by Mini-Geek on 2009-07-27 at 12:26
Old 2009-07-27, 13:21   #10
gd_barnes
 
 
May 2007
Kansas; USA

7·1,453 Posts

The above is all that I calculated. I just used the Nash weight, search range, and # of primes from Karsten's pages (or the higher known search range). If anyone does any calculations, be sure to use only the # of primes within the contiguously searched range as shown on his pages.

Last fiddled with by gd_barnes on 2009-07-27 at 13:22
Old 2009-07-29, 00:07   #11
gd_barnes
 
 
May 2007
Kansas; USA

7×1,453 Posts

Quote:
Originally Posted by gd_barnes View Post
Chris, you were spot on when you said that it was doing very well on primes found per candidate searched!
Gary

I was overly wordy as usual in my posting about candidates to primes "ratings". Chris, did you happen to see this comment above about k=3045? I thought that it was quite amazing that you had come up with such a statement as the one in your last posting without doing any specific detailed analysis on k's.

Last fiddled with by gd_barnes on 2009-07-29 at 00:12