- **Conjectures 'R Us**
(*https://www.mersenneforum.org/forumdisplay.php?f=81*)

  - **Software/instructions/questions**
(*https://www.mersenneforum.org/showthread.php?t=9742*)

[QUOTE=gd_barnes;202660]I'm confused as to why this is needed. If you use the stop-on-prime option in PFGW or use the new bases script, you should never have any more primes than just the lowest one for each k. There should be no reason to remove any primes.
If it is composite PRPs you are referring to, the newest version of the new bases script handles that by writing them to a separate file and continuing to search the k until a "real" prime is found. If it's composite PRPs for higher n-values, i.e. when you wouldn't be using the new bases script, those are so rare that I've never encountered one. I've never seen a composite PRP for an exponent n>1000; on this project anyway. I strongly recommend against manual editing of files for more than one prime on a k; if that is what you are referring to...much too error-prone.[/QUOTE] I don't know which item you are referring to, but what happens to me is that sometimes I need to shut down PFGW during a run. If I use PFGW with the number_primes option and primes were found, then when I restart, it will not ignore the k's for which a prime was already found.

It removes k's that have PRPs/primes from sieve files and pl_remain.txt files.
It's not useful when only using the PFGW script to start a base. It is useful for running PFGW with stop-on-prime when PFGW must stop for some reason (and so forgets which k's to skip), as rogue said. It's also useful for continuing new bases past where you left off with the new bases script.

Oh, I see. Here is what I do when I have to restart PFGW: Put the previous primes at the top of the sieve file and remove pairs previously tested. That way, it quickly finds them prime and doesn't search the k's anymore. I think Ian and/or Kenneth came up with that idea, which I thought was excellent thinking.
Then it's just a matter of quickly deleting the few duped primes and results from pfgw.log and pfgw.out when the run is done. It's quick and is virtually error free because the duped primes/results are all right in a row and quick to spot.
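The restart trick described above can be sketched in a few lines of Python. This is just an illustration of the idea, not anyone's actual tooling; it assumes candidate lines of the hypothetical form `k*b^n-1` or `k*b^n+1`, and uses a simple n-cutoff to stand in for "pairs previously tested":

```python
def build_restart_file(sieve_lines, prime_lines, tested_to_n):
    """Put previously found primes at the top of the sieve file and drop
    pairs already tested, so a restarted PFGW run quickly re-finds the
    primes and (with stop-on-prime) skips those k's from then on."""
    def n_of(line):
        # assumes lines shaped like 'k*b^n-1' or 'k*b^n+1'
        return int(line.split("^")[1].split("-")[0].split("+")[0])
    untested = [l for l in sieve_lines if n_of(l) > tested_to_n]
    return list(prime_lines) + untested
```

With this ordering, the duplicated prime results end up together at the top of pfgw.log/pfgw.out, which is what makes them quick to spot and delete afterward.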

[QUOTE=gd_barnes;202754]Oh, I see. Here is what I do when I have to restart PFGW: Put the previous primes at the top of the sieve file and remove pairs previously tested. That way, it quickly finds them prime and doesn't search the k's anymore. I think Ian and/or Kenneth came up with that idea, which I thought was excellent thinking.
Then it's just a matter of quickly deleting the few duped primes and results from pfgw.log and pfgw.out when the run is done. It's quick and is virtually error free because the duped primes/results are all right in a row and quick to spot.[/QUOTE] I hadn't thought of that, but it would certainly work.

Mark,
I see there is an ongoing discussion that you are involved in in the scripts thread. I initially got a little confused as to what was going on there so I stayed out of it. I think I have it figured out now. I think Tim is coming up with a Perl program that removes k's from the pl_remain.txt file for primes found on subsequent searches above the starting new base script. Personally what I do is plug the file into an Excel spreadsheet into column A, plug in the primes in column B, and then use formulas out to the right to parse out the k-value from each. I then am able to use the k's remaining as a vertical lookup table to see which k's are in the table that contain primes. I then end up with a column that has the k's with primes blanked out. I resort the column with blanked out k's and end up with the k's remaining. That said, although accurate, the above is kind of clunky because it requires manually copying or deleting formulas based on how many k's remaining and manually executing an Excel sort. Having something completely automated like what Tim is working on is pretty cool. Gary |

[quote=gd_barnes;202781]I initially got a little confused as to what was going on there so I stayed out of it. I think I have it figured out now. I think Tim is coming up with a Perl program that removes k's from the pl_remain.txt file for primes found on subsequent searches above the starting new base script.
... That said, although accurate, the above is kind of clunky because it requires manually copying or deleting formulas based on how many k's remaining and manually executing an Excel sort. Having something completely automated like what Tim is working on is pretty cool. Gary[/quote] Yep. It also removes those same k's from your sieve file (though you can easily comment out one part or the other to make it remove from just one of the files).

You make it sound like it's still theoretical, but it seems to be working fine for me. I used it for my latest base 3 work and it looks like it's doing it all right. Try it out and tell me if you notice anything wrong. :smile: rogue said he "had problems with my newest script", but I'm not sure what they are (or if he meant the false-positive k matching I fixed shortly afterward). Here's the newest version of it, all together (instead of an earlier release and then a one-line "patch"): [URL]http://www.mersenneforum.org/showthread.php?p=202785#post202785[/URL]

Note that it uses egrep, which I got from cygwin. It's extremely fast for small files (eliminating 575 k's from a 1 MB sieve file with 70000 candidates takes a couple of minutes on my PC; anything smaller is usually done within a couple of seconds) and it only prints a few status lines. It's also almost completely automated: it just needs to know the file names of your prime file and sieve file (it assumes the remaining k's file is pl_remain.txt). It'd be good if it could decide what to do based on what files you give it instead of assuming it'll have those two files (without modifying the script). Maybe for version 3.2. :smile:
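For anyone curious how an egrep-style filter works, here is a rough Python equivalent of that approach (not Tim's actual Perl script): build one anchored alternation over the primed k's and drop every matching line. Anchoring the match at the start of the line and requiring the `*` right after the k is what avoids false-positive matching (e.g. k=2 matching k=22):

```python
import re

def strip_primed_ks(lines, primed_ks):
    r"""Remove every line whose leading k is in primed_ks, the way
    `egrep -v '^(k1|k2|...)\*'` would."""
    if not primed_ks:
        return list(lines)
    alternation = "|".join(str(k) for k in sorted(primed_ks))
    pattern = re.compile(r"^(?:%s)\*" % alternation)
    return [l for l in lines if not pattern.match(l)]
```

The same filter would be applied to both the sieve file and pl_remain.txt, mirroring the two parts of the script mentioned above.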

[quote=Mini-Geek;202788]Yep. It also removes those same k's from your sieve file. (though you can easily comment out one or the other parts, to make it just remove from one of the files)
You make it sound like it's still theoretical, but it seems to be working fine to me. I used it for my latest base 3 work and it looks like it's doing it all right. Try it out and tell me if you notice anything wrong. :smile: rogue said he "had problems with my newest script", but I'm not sure what they are (or if he meant the false positive k matching I fixed shortly afterward). Here's the newest version of it, all together (instead of an earlier release and then a one-line "patch"): [URL]http://www.mersenneforum.org/showthread.php?p=202785#post202785[/URL] Note that it uses egrep, which I got from cygwin. It's extremely fast for small files, (eliminating 575 k's from a 1 MB sieve file with 70000 candidates takes a couple of minutes on my PC; anything else is usually done within a couple seconds) and only prints a few status lines. It's also almost completely automated. It just needs to know the file names of your prime file and sieve file (it assumes the remaining k's file is pl_remain.txt). It'd be good if it could decide what to do based on what files you give it instead of assuming it'll have those two files (without modifying the script). Maybe for version 3.2. :smile:[/quote] Oops, I hate it when miscommunication happens. :smile: I couldn't comment on your Perl program because I haven't even looked at it. What I meant was that MY process of removing k's from the pl_remain file using Excel is klunky. (Actually, I think klunky is an understatement!) See the references in my last paragraph to adding/deleting formulas and an Excel sort. I'm pretty sure your Perl program beats that all to heck, especially if you've gotten it to work for huge #'s of k's on base 3. :-) I'll give it a whirl in the near future. Gary

[QUOTE=gd_barnes;202754]Oh, I see. Here is what I do when I have to restart PFGW: Put the previous primes at the top of the sieve file and remove pairs previously tested. That way, it quickly finds them prime and doesn't search the k's anymore. I think Ian and/or Kenneth came up with that idea, which I thought was excellent thinking.
Then it's just a matter of quickly deleting the few duped primes and results from pfgw.log and pfgw.out when the run is done. It's quick and is virtually error free because the duped primes/results are all right in a row and quick to spot.[/QUOTE] If it was my idea, I had completely forgotten it, even though it bears quite a resemblance to my "making of k's remaining list", but anyway it came in handy, since my dad blew one of the fuses today, so a lot of work could have been lost on the Sierp Base 63 conjecture :smile: Thanks again for saving me a bunch of time redoing millions of tests or finding duplicate primes. Regards KEP

Is there an easy(!) way to calculate the optimum size of range to sieve to have a 0.x probability of finding the last remaining k*b^n+-1 (of weight w) for a certain conjecture?
(Sieved to optimum depth.)

[quote=Flatlander;219424]Is there an easy(!) way to calculate the optimum size of range to sieve to have a 0.x probability of finding the last remaining k*b^n+-1 (of weight w) for a certain conjecture?
(Sieved to optimum depth.)[/quote] There is no easy way, although I can do it with a hard way. :-) See below.

This is a great question that I have thought of in the past but had not pursued. There is no way that I know of based on weight alone, since weight is not always 100% correlated with candidates remaining after sieving to higher depths. But it can be done based on the # of candidates remaining. Here is what I would do:

1. Sieve some monstrous range that you are sure will have AT LEAST your 0.x probability of a prime to something nominal such as P=1G. (A ratio of maximum n to minimum n of 10 should probably be sufficient.)

2. Set a running cumulative non-chance of prime to 1; we'll call this N. (Explained more below.)

3. Repeat the following 4 steps until your chance of prime (we'll call it C) is greater than your 0.x probability of finding a prime:

a. Plug an n=100K range into the odds-of-prime spreadsheet and note the chance of prime. On the 1st pass, if sieving n=100K to 2M, plug in the applicable info for the # of candidates for n=100K-200K. For the n-value, use a little less than the average of the range, say n=140K. On the 2nd pass, do the same for n=200K-300K and use n=240K for the n-value, etc.

b. Subtract the chance of prime in the spreadsheet from 1. We'll call this P. It is your chance of NOT finding a prime in this particular range.

c. Multiply N by P and store the result back in N. This is your cumulative chance of there being NO prime so far. On the first pass, since N=1 before the multiplication, N will simply equal P afterward.

d. Subtract N from 1 and call it C. (Don't change the value of N.) C is your chance of a prime up to this point. If C > your 0.x probability of a prime, you are done.

This was done quickly, so if Tim or some other stats guys who are familiar with the odds-of-prime spreadsheet would like to check it, I'd welcome that.
IMPORTANT: Don't try to simplify this by adding up the expected # of primes for each range. As an example, if your expected # of primes is 1.00, you actually only have about a 63% chance of finding a prime. Some people fall into the fallacy of assuming that since the expected # of primes is 0.6, there is a 60% chance of a prime. That is incorrect, and the 1.00-expected-primes example demonstrates it easily: there is no way there is a 100% chance of a prime just because the average expected # of primes is 1.00. If the expected # of primes is 1, the chance of at least one prime is actually 1 - e^-1, about 0.632 (coincidentally close to the reciprocal of the Golden Ratio, ~0.618, but not the same number). This also applies to betting on just about anything. In roulette, on an American wheel, there are 38 numbers. If you pick any 1 of those numbers at random on a "fair" wheel, there is about a 63% chance that the # will hit at least once in the next 38 spins. Gary
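Steps 2-3 above are just the complement-product rule for independent ranges. A small Python sketch (the names N and C match the steps above; the per-range chances would come from the odds-of-prime spreadsheet):

```python
def chance_of_at_least_one_prime(per_range_chances):
    """C = 1 - prod(1 - p_i): combine independent per-range chances of a
    prime into the overall chance of at least one prime."""
    N = 1.0                      # cumulative chance of NO prime (step 2)
    for p in per_range_chances:  # steps 3b-3c
        N *= 1.0 - p
    return 1.0 - N               # step 3d

# The expected-primes fallacy: 38 roulette spins at 1/38 each give an
# expected 1.00 hits, but only about a 63% chance of at least one hit.
c = chance_of_at_least_one_prime([1.0 / 38] * 38)
```

Here c comes out to roughly 0.637, close to 1 - e^-1 ≈ 0.632, matching the roulette example above rather than the naive "100% chance" reading.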

lol.
Well, I think I followed [I]most[/I] of that. Sounds to me like it can be automated by adding more columns to the odds of prime spreadsheet and working left to right, filling in the data until C > 0.x is flagged in a cell.(?) Worth doing?

Since doubling a sieve size increases the time taken by a factor of sqrt(2) (iirc), I'm assuming there is a best-bang-for-the-buck sweet spot that would clear some of the 1k-left conjectures in the shortest possible time by aiming for a certain 0.x probability.

(I hope that is understandable. I have real difficulty in 'keeping more than one plate spinning' in my head at the same time. It's frustrating to find this stuff so enjoyable but so hard at the same time. :jail: )
