mersenneforum.org Intermediate P-1 results

 2018-02-08, 19:09 #1 kruoli     "Oliver" Sep 2017 Porta Westfalica, DE 2³·5·29 Posts Intermediate P-1 results Hey, everybody! Since P-1 is a huge part of GIMPS for finding factors and GCDs are now powered by GMP, I was thinking about this: what would it be like to be able to do one or a few GCDs on intermediate results? Of course, this would not need to be a mandatory option. I'm not sure how the calculation of $b^a - 1 \text{ mod } n, \text{ where } a = \prod_{p \text{ prime},\ p < B_1} p^{e_p} \text{ with } e_x = \lfloor \log_x B_1 \rfloor$ is done internally in Prime95. Maybe one can find a factor by checking, for each $$c \in \mathbb{N}$$ with $$c < c_\text{max}$$, the result of $\text{GCD}(b^{a_c}-1,n), \text{ where } a_c = \prod_{p \text{ prime},\ p < \left\lceil \frac{B_1 \cdot c}{c_\text{max}} \right\rceil} p^{e_p} \text{ with } e_x = \lfloor \log_x B_1 \rfloor$ with some predefined $$c_\text{max}$$. This is especially useful when using huge bounds, since the factor could then be found early on. This could increase willingness to use higher bounds. Greetings, Oliver! Last fiddled with by Nick on 2018-02-09 at 11:43 Reason: Amended at kruoli's request
 2018-02-08, 19:18 #2 GP2     Sep 2003 13·199 Posts I'm not sure how long P−1 will continue to play a huge part. The larger exponents get, the easier it is to do TF to higher bit depths. On the other hand, the larger exponents get, the longer P−1 takes. At some point there will be diminishing returns to doing P−1 because most of the factors it might reasonably be expected to find will already have been found by TF. I was doing P−1 in the 85M range, and had a long dry spell. I started again recently, but wonder if it's still worth it.
 2018-02-09, 06:33 #3 S485122     "Jacob" Sep 2006 Brussels, Belgium 2²×3³×17 Posts When I look at the success rate of the P-1 my computers do (about 4 % in the current 85M range) and the time they spend on it compared to LL tests, it is still worthwhile. The kinds of factors are different: you would have to TF for a long time to find an 80- or 90-bit factor in that range (85 bits on average for my work in the 80M-85M range.) So, yes, P-1 is still useful. The whole project cannot be reduced to TF. Jacob
 2018-02-09, 07:11 #4 kruoli     "Oliver" Sep 2017 Porta Westfalica, DE 23·5·29 Posts Can a moderator please change $$\frac{B_1 \cdot c_\text{max}}{c}$$ to $$\frac{B_1 \cdot c}{c_\text{max}}$$? Otherwise it would be useless.
 2018-02-09, 07:41 #5 kruoli     "Oliver" Sep 2017 Porta Westfalica, DE 10010001000₂ Posts My experience with P-1 is like Jacob's (S485122), in that it has measurable value (and no decline is to be expected, as far as I believe). Since TF requires double the work per bit level, I assume we can go a few levels higher in the coming years, but definitely not something like 10 levels. Take an M85,000,000 exponent as an example. PrimeNet wants factoring to $$2^{71}$$, GPU72 to $$2^{75}$$. Using Jacob's numbers, you would need to go up at least to $$2^{85}$$ to find half of the factors that P-1 should find. Doing that would require an estimated 92,185 GHzd in total. Going to larger exponents, e.g. M850,000,000, would only drop that cost by a factor of 10, but in that exponent range, $$2^{85}$$ is already the bit level limit of GPU72, so one would have to go even further. I like the approach of finding an optimum between prefactoring and doing two LL tests. P-1 is a big part of finding that optimum! GP2: When I look at your P-1 stats page, it seems like you use really high bounds (an average of 258.6 GHzd per assignment). Of course, that's nice for a higher probability of finding a factor, but it's nearly the cost of an LL test in the 85M range. A success rate of more than 20 % is really nice, though! So I do not completely get how your "dry spell"* happened. *= The German word for that, Durststrecke, is much more descriptive: it combines Durst (thirst) and Strecke (route), capturing both the urge (thirst) to reach something and the unsuccessful stretch left behind.
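The doubling arithmetic behind that estimate can be sketched as follows (a toy model: the 2^b / p scaling for the candidate count and the chosen levels are illustrative, not PrimeNet's exact cost formula):

```python
# Candidates for factors of 2^p - 1 have the form 2kp + 1, so the work to
# clear bit level b scales roughly like 2^b / p; each level doubles the cost.
def tf_work(p, b_from, b_to):
    """Relative trial-factoring work to take exponent p
    from 2^b_from up to 2^b_to (arbitrary units)."""
    return sum(2 ** b for b in range(b_from, b_to)) / p

small = tf_work(85_000_000, 75, 85)    # M85,000,000: 2^75 -> 2^85
large = tf_work(850_000_000, 75, 85)   # M850,000,000: same bit range
print(small / large)                   # exactly 10.0: a 10x larger exponent
                                       # only offsets one tenth of the cost,
                                       # while each extra level doubles it
```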
 2018-02-09, 16:49 #6 GP2     Sep 2003 13·199 Posts

Quote:
 Originally Posted by kruoli GP2; When I look at your P-1 stat page, it seems like you use really high bounds (an average of 258,6 GHzd per assignment).
For a time I was doing P−1 on very small exponents. I experimented with doing stage 1 with mprime and doing stage 2 with GMP-ECM. So that probably skews the averages completely.

Right now I am only doing the automatically assigned P−1 exponents with automatically assigned B1 and B2 bounds, which are currently in the 85M range.

 2018-06-24, 13:44 #7 kriesel     "TF79LL86GIMPS96gpu17" Mar 2017 US midwest 3·5·457 Posts

Quote:
 Originally Posted by GP2 I'm not sure how long P−1 will continue to play a huge part. The larger exponents get, the easier it is to do TF to higher bit depths. On the other hand, the larger exponents get, the longer P−1 takes. At some point there will be diminishing returns to doing P−1 because most of the factors it might reasonably be expected to find will already have been found by TF. I was doing P−1 in the 85M range, and had a long dry spell. I started again recently, but wonder if it's still worth it.
Take a look at the source code for prime95 or CUDAPm1. There's code in there to compute estimates of the odds of finding a factor and to adjust the bounds to maximize the probability of finding a factor times the time saved if one is found, i.e. the cost of the n primality tests avoided, where n is the number at the right of the assignment line (usually 2, as PrimeNet issues them for manual assignments). In other words, it computes estimates of the rational bettor's best bets, within the memory constraints.
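That optimization can be sketched as an expected-value maximization (a schematic only: the probability and cost functions below are toy stand-ins supplied by the caller, where prime95 derives them from smoothness estimates and measured timings):

```python
def best_bounds(candidates, prob, p1_cost, test_cost, tests_saved=2):
    """Pick the (B1, B2) pair maximising the expected time saved,
    P(factor) * tests_saved * test_cost, minus the P-1 cost itself."""
    def net_benefit(b1, b2):
        return prob(b1, b2) * tests_saved * test_cost - p1_cost(b1, b2)
    return max(candidates, key=lambda c: net_benefit(*c))

# Toy numbers, purely illustrative:
candidates = [(500_000, 10_000_000), (1_000_000, 20_000_000), (2_000_000, 40_000_000)]
toy_prob = {500_000: 0.030, 1_000_000: 0.040, 2_000_000: 0.046}
best = best_bounds(
    candidates,
    prob=lambda b1, b2: toy_prob[b1],        # made-up success probabilities
    p1_cost=lambda b1, b2: b1 / 1_000_000,   # made-up P-1 cost in GHz-days
    test_cost=100.0,                         # GHz-days per primality test
)
print(best)  # (2000000, 40000000): deeper bounds win once the expected
             # saving outgrows the extra P-1 time
```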
