Pick Your Poisson: P-1 Bounds
My old friend, P-1 bounds, is tasking me again. I have an exponent in the area of 99,8xx,xxx which I am currently running. Below are the apparent default bounds based on the [I]worktodo[/I] line:
[QUOTE]Prime95: 710,000 and 11,182,500
James' probability calculator at 2.5%: 720,000 and 13,680,000
CUDAPm1: 910,000 and 18,427,500
gpuOwl: 1,000,000 and 30,000,000[/QUOTE]I suppose what I am looking for here is a rule of thumb. Which should I use? Prime95 being a long-used and trusted application, I am using what [I]Prime95[/I] suggests in [I]gpuOwl's[/I] config file. I am wondering: is this enough, or should they be larger? I really do not know.
[QUOTE=storm5510;552663]My old friend, P-1 bounds, is tasking me again...[/QUOTE] The faster P-1 runs compared to LL, the larger the bounds should be.
[QUOTE=storm5510;552663]My old friend, P-1 bounds, is tasking me again...[/QUOTE] I am just revisiting the P-1 probabilities. My intuition (based on a few articles, among them [url]https://www.researchgate.net/publication/220576644_Asymptotic_semismoothness_probabilities[/url], which has a handy table on page 12) is that there is a lot of benefit from a very large B2. Either use the default gpuOwl bounds of 1M/30M, or lower B1 to a value of your choice between 500K and 1M, but keep B2 at 30M. Recently PRP got "cheaper" (so P-1 got relatively "more expensive"), so the P-1 bounds need not be increased if efficiency is the goal. That is why I suggested "lower B1" instead of "increase B2". (The P-1 probability calculators we have now may be a bit off.)
[QUOTE=preda;552829]I am just revisiting the P-1 probabilities...[/QUOTE] Cost. This has to be a measurement of time; I am not sure what else it could be. I looked at George's "Math" page on [I]mersenne.org[/I], specifically the P-1 section. It is not written in a way I can understand. Still, I tried. Somebody here told me a couple of years ago that multiple factors can be found between 0 (zero) and B1, but only one above B1. Would lowering B1 not decrease the possibility? I suppose what I have been looking for is a relationship between bound sizes and powers of two. It seems like there would be one somewhere. About a year ago, I found a 39-digit factor while running a really small exponent with a B1 of 1,000,000. By the way, I decided to give my antique Core2Duo a shot at a PRP-CF by running it with [I]Prime95[/I]. Only 11 hours each. I was expecting much more. :grin:
Use bounds big enough that PrimeNet will retire the need for P-1 on the exponent.
Otherwise, unless you find a factor, it is wasted cycles: someone else will have to duplicate the work in a run to bounds big enough to retire the P-1 need for the same exponent. The PrimeNet bounds on mersenne.ca are sufficient.
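As an aside on the stage-1 mechanics mentioned in this thread: every prime factor f of the tested number whose f-1 is B1-smooth divides the stage-1 gcd at once, which is why any number of factors can turn up below B1 while stage 2 adds only one extra prime up to B2 per factor. Here is a minimal, self-contained Python toy sketch of stage 1; it is an illustration only, not how Prime95 or gpuOwl actually implement P-1, and the small composite in the example is my own choice:

```python
from math import gcd, isqrt

def small_primes(limit):
    """Sieve of Eratosthenes: all primes up to `limit` (inclusive)."""
    sieve = bytearray([1]) * (limit + 1)
    sieve[0:2] = b"\x00\x00"
    for p in range(2, isqrt(limit) + 1):
        if sieve[p]:
            sieve[p * p :: p] = bytearray(len(sieve[p * p :: p]))
    return [i for i, is_p in enumerate(sieve) if is_p]

def pm1_stage1(n, b1, base=3):
    """Toy P-1 stage 1: return a nontrivial factor of n, or None.

    Any prime factor f of n with f-1 being B1-smooth satisfies
    base**E == 1 (mod f), where E is the product of all prime powers
    up to B1, so f divides gcd(base**E - 1, n)."""
    a = base
    for p in small_primes(b1):
        pk = p
        while pk * p <= b1:      # largest power of p not exceeding B1
            pk *= p
        a = pow(a, pk, n)        # running a = base**E mod n
    g = gcd(a - 1, n)
    return g if 1 < g < n else None
```

For example, with n = 241 × 1000003, stage 1 at B1 = 20 finds 241 because 240 = 2^4·3·5 is 20-smooth, while 1000002 = 2·3·166667 is not, so the other factor stays hidden.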
[QUOTE=kriesel;552876]Use big enough bounds that PrimeNet will retire the need for P-1 from the exponent...[/QUOTE]
I have decided to pull away from running any more P-1 for a while. From what I gather, and in light of Prime95 v30.x being available, there will be some reevaluation of bound sizes for P-1. It was James Heinrich who mentioned this in another topic. I am sure you remember [URL="https://www.mersenne.ca/exponent/1277"]M1277[/URL]. In 2017, someone ran a P-1 on it with B1 at 5 trillion and B2 at 400 trillion. I suppose whoever ran it did not mind the wait.
Of course "optimal bounds" depends on the "factored up to" bits. A first approximation for 100M exponents may be:
[CODE]factored-to    B1      B2     chance
77:            1.2M    40M    4.2%
78:            1M      35M    3.6%
79:            1M      30M    3.2%
80:            0.9M    25M    2.7%[/CODE]So, if you use 1M/30M (the default) you should be "just fine" for any bit-level :)
PS: and don't use my previously recommended values of 500K/30M; that B1 is too low relative to optimal.
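For reference, the table above can be wrapped in a tiny lookup helper. This is just a transcription of the posted values; the function name and the 1M/30M fallback for depths outside the table are my own choices, not anything from gpuOwl:

```python
# Near-optimal P-1 bounds for ~100M exponents, keyed by trial-factoring
# depth in bits. Values transcribed from the table above.
OPTIMAL_BOUNDS = {
    77: (1_200_000, 40_000_000, 0.042),
    78: (1_000_000, 35_000_000, 0.036),
    79: (1_000_000, 30_000_000, 0.032),
    80: (900_000, 25_000_000, 0.027),
}

def suggest_bounds(factored_to_bits):
    """Return (B1, B2, factor_chance) for a ~100M exponent.

    Outside the tabulated depths, fall back to the gpuOwl defaults
    of 1M/30M with no probability estimate."""
    if factored_to_bits in OPTIMAL_BOUNDS:
        return OPTIMAL_BOUNDS[factored_to_bits]
    return (1_000_000, 30_000_000, None)
```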
It appears first-time P-1 tests from [I]Primenet[/I] have passed 100,000,000 in magnitude. So, I have decided to run a test to see what the time cost would be with [I]gpuOwl[/I]:
[QUOTE]B1 = 1000000
B2 = 35000000[/QUOTE]I put the same exponent into the newest [I]Prime95[/I] just long enough to see what bounds it would use: [QUOTE]B1 = 770000
B2 = 13475000[/QUOTE]These do not seem much different from what I saw 5 years ago. It seems to me that as exponents grow in size, P-1 bounds should follow along. Nothing would please me more than finding a nice big fat factor. My record is 39 digits. :grin:
There is a new P-1 calculator in the pm1/ folder in GpuOwl. It comes as a Python script ("pm1.py") and a C++ executable ("pm1"). The C++ version is very simple and only outputs the computed probability of a factor given the exponent, factored-to bits, B1, B2. The Python script has an effort model (that can be tweaked in the source if desired), and based on that can propose some sort of "good" bounds. Example:
[CODE]~/gpuowl/pm1$ ./pm1 100000000 77 1000000 30000000
3.91% (first-stage 1.53%, second-stage 2.38%)[/CODE]
[CODE]~/gpuowl/pm1$ ./pm1.py 100000000 77
Min: B1=  260K, B2=  7000K: p=2.38% (0.80% + 1.58%), work=0.96% (0.45% + 0.52%)
Mid: B1=  600K, B2= 20000K: p=3.38% (1.22% + 2.16%), work=2.41% (1.04% + 1.40%)
Big: B1= 1000K, B2= 34000K: p=4.01% (1.53% + 2.47%), work=3.99% (1.73% + 2.31%)[/CODE]
What you see in the output above is the chance of finding a factor, and also the cost of running the P-1 with those bounds. Personally I would go with the "Big" bounds above: even if they provide not much time benefit versus running just the PRP, they do provide a significant chance of a factor, which is cool.
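The smoothness probabilities such calculators rest on come from the Dickman rho function (the paper linked earlier in the thread extends this to the semismoothness needed for stage 2). As a rough first-order sketch, my own simplification that ignores stage 2 and the 2kp+1 structure of Mersenne factors, rho(u) gives the probability that a random integer n is n^(1/u)-smooth, and it can be computed numerically from its integral form:

```python
from math import log

def dickman_rho(u, h=1e-3):
    """Dickman's rho: probability that a random integer n has no prime
    factor exceeding n**(1/u).  Solved numerically from the integral
    form  rho(u) = (1/u) * integral of rho(t) dt over [u-1, u],
    using the trapezoid rule on a uniform grid of step h."""
    if u <= 1:
        return 1.0
    k = round(1 / h)                  # grid points per unit interval
    n = round(u / h)
    rho = [1.0] * (k + 1)             # rho(t) = 1 on [0, 1]
    for i in range(k + 1, n + 1):
        t = i * h
        # Trapezoid over [t-1, t]; the right endpoint rho[i] is unknown:
        #   rho_i = (1/t) * h * (rho[i-k]/2 + interior_sum + rho_i/2)
        # Solving that linear equation for rho_i gives:
        s = rho[i - k] / 2 + sum(rho[i - k + 1 : i])
        rho.append(h * s / (t - h / 2))
    return rho[n]

# Rough use: a ~77-bit factor f has f-1 of roughly 77 bits, so the
# chance that f-1 is B1-smooth for B1 = 10^6 (~20 bits) is about
# rho(77 * log(2) / log(10**6)).
stage1_chance = dickman_rho(77 * log(2) / log(1_000_000))
```

Known reference values such as rho(2) = 1 - ln 2 ≈ 0.3069 make the implementation easy to sanity-check.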
[QUOTE=preda;553981]There is a new P-1 calculator in the pm1/ folder in GpuOwl... [/QUOTE]
That's very interesting. It is not in my downloaded archive, though: [B]v6.11-364-g36f4e2a[/B], Windows x64. Unless you have something newer...
[QUOTE=storm5510;553996]That's very interesting. Not in my downloaded archive though. [B]v6.11-364-g36f4e2a.[/B] Windows x64. Unless you have something newer...[/QUOTE]
I have not incorporated any of the tools folder into the Windows builds I've posted. Until recently it was all Python or shell script, no C. I haven't yet identified a Python compilation method that both (a) succeeds and (b) produces code small enough to fit in a forum attachment. (But I have eliminated several candidate approaches.)