mersenneforum.org P-1 discussion thread

 2010-02-06, 12:18 #1 Rincewind     Oct 2006 147_8 Posts P-1 discussion thread I'm new to this project, and want to know whether there are any reasons against using P-1 or ECM here. According to the XYYXF project, a factor of at most 20 digits needs about 30 curves at B1=2000 to be found with ECM. And if I'm not wrong, we're sieving for factors with 16 digits at the moment. So could ECM (or P-1) be another method for "prechecking" the candidates before their PRP testing?
 2010-02-06, 12:29 #2 philmoore     "Phil" Sep 2002 Tracktown, U.S.A. 10001011111_2 Posts When we started the project, P-1 was definitely not an efficient way to eliminate prp candidates, even though our sieve depth was low. Now that prp tests are taking 1.5 days on a single core of my older machine (a Pentium D), and probably the better part of a day even on newer, faster machines, we should revisit P-1. Can't Prime95 compute an optimal P-1 level if the sieving depth and number of tests saved are set? I've actually been wanting to post on this for the last couple of weeks and just haven't gotten around to it. I honestly expect that ECM will not be able to find nearly as many factors per unit of time as P-1, but we may be getting to the point where P-1 is of more immediate help to the project than further sieving.
 2010-02-06, 12:52 #3 Mini-Geek Account Deleted     "Tim Sorbera" Aug 2006 San Antonio, TX USA 17×251 Posts
Quote:
 Originally Posted by philmoore Can't Prime95 compute an optimal P-1 level, if the sieving depth and number of tests saved is set?
Yes. I've just checked it: with "Pfactor=1,2,6180184,40291,50,2", i.e. the bit depth set to 50 (the current factors are 51 bits, or 16 digits, long) and 2 tests saved if a factor is found, and allowing 700 MB of memory usage (I don't know if it would actually want to use all that), it decides to do P-1 with these settings:
Code:
Optimal bounds are B1=25000, B2=212500
Chance of finding a factor is an estimated 0.519%
A very low chance of a factor, but the estimated time is 18 minutes, so it's only 0.643% of the length of the PRP test. It's probably near the borderline, but Prime95 seems to think it'd be more efficient overall to do P-1.
Quote:
 Originally Posted by Rincewind According to the project "XYYXF" a factor with max. 20 digits needs about 30 curves and a B1-Value = 2000 to be found with ecm. And if I'm not wrong we're sieving for factors with 16 digits at the moment.
Quote:
 Originally Posted by philmoore I honestly expect that ECM will not be able to find nearly as many factors per unit of time as P-1, but we may be getting to the point where P-1 is of more immediate help to the project than further sieving.
Actually, to find a factor with 20 digits, you should expect to do 76 curves at B1=11000, B2=1684420. That would take far longer than the PRP test (~6 days, vs. ~2 days for the PRP). Definitely not a good idea.
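The cost-benefit comparison in this post can be sketched in a few lines of Python. This is a simplification of my own, not Prime95's actual model, which also weighs the chance that the sieve would have found the factor anyway:

```python
# Back-of-the-envelope check of whether a P-1 run pays off, using the
# numbers quoted in this post (18-minute P-1 run, 0.519% factor chance,
# ~2-day prp test).

def p1_worthwhile(p1_minutes, factor_prob, prp_minutes, tests_saved):
    """P-1 pays off when its cost is below the expected PRP time saved."""
    expected_savings = factor_prob * tests_saved * prp_minutes
    return p1_minutes < expected_savings

print(p1_worthwhile(18, 0.00519, 2880, 2))  # assuming a double-check is saved too
print(p1_worthwhile(18, 0.00519, 2880, 1))  # first-time test only
```

With 2 tests saved the expected savings (~30 minutes of prp time) exceed the 18-minute cost; with only 1 test saved they do not, which matches the "near the borderline" conclusion above.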

 2010-02-06, 13:23 #4 henryzz Just call me Henry     "David" Sep 2007 Cambridge (GMT/BST) 1011100010000_2 Posts In 620 seconds I did a P-1 test on 2^5720440+40291:
Code:
[Work thread Feb 6 12:50] Optimal bounds are B1=20000, B2=160000
[Work thread Feb 6 12:50] Chance of finding a factor is an estimated 0.425%
One prp test would have to take 145882 seconds (40 hours) for this to break even. You plan to do a doublecheck, I think, so 72941 seconds (20 hours). So as long as one prp test takes longer than 20 hours, P-1 would be worthwhile. Last fiddled with by henryzz on 2010-02-06 at 13:23
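The break-even arithmetic above can be reproduced directly: a P-1 run costing t seconds with success probability p pays off once one prp test costs more than t / (p × tests saved) seconds. A minimal sketch, using the numbers from this post:

```python
# henryzz's break-even figure: a 620-second P-1 run with a 0.425% chance
# of a factor, with and without a double-check saved as well.

def breakeven_prp_seconds(p1_seconds, factor_prob, tests_saved):
    return p1_seconds / (factor_prob * tests_saved)

print(round(breakeven_prp_seconds(620, 0.00425, 1)))  # 145882 s, about 40 hours
print(round(breakeven_prp_seconds(620, 0.00425, 2)))  # 72941 s, about 20 hours
```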
 2010-02-06, 18:06 #5 philmoore     "Phil" Sep 2002 Tracktown, U.S.A. 3·373 Posts Thanks, Mini-Geek and Henryzz. The low probability of finding a factor (about 1 in 200 P-1 tests), along with the fact that we would have spent almost as much time doing P-1 tests as we would have saved, indicates that up until now we haven't missed out on much by not doing P-1. But it looks like it is becoming more worthwhile now that prp tests are taking longer. My take on double-checking is that we may or may not do a double-check, so perhaps it would be more appropriate to set the number of prp tests saved to 1, or something close to that; then P-1 becomes primarily a way of speeding up first-time prp testing. Don't forget that we are still sieving 2M-50M, so there is a chance that a factor found by P-1 could later turn up in the sieve. I'm going to play around with it, and perhaps we can set up a new sticky thread for P-1 reservations soon. Last fiddled with by philmoore on 2010-02-06 at 18:12
 2010-02-06, 18:29 #6 philmoore     "Phil" Sep 2002 Tracktown, U.S.A. 3×373 Posts I tried "Pfactor=1,2,6069928,40291,50,2" with memory usage at 256K and it started with B1=25000, B2=200000. However, when I reduced the number of prp tests saved to anything from 1 to 1.9, it said that P-1 factoring was not needed. So we really are at the margin, where it is just barely justifiable with double-checking assumed, but I expect that by the time prp testing gets up to 7 million bit exponents, it will be paying off. I'll do some more checking with larger exponents. Thanks, Rincewind, for starting this thread at this time.
 2010-02-06, 18:33 #7 henryzz Just call me Henry     "David" Sep 2007 Cambridge (GMT/BST) 2^4·3^2·41 Posts
Quote:
 Originally Posted by philmoore I tried "Pfactor=1,2,6069928,40291,50,2" with memory usage at 256K and it started with B1=25000, B2=200000. However, when I reduced the number of prp tests saved to anything from 1 to 1.9, it said that P-1 factoring was not needed. So we really are at the margin, where it is just barely justifiable with double-checking assumed, but I expect that by the time prp testing gets up to 7 million bit exponents, it will be paying off. I'll do some more checking with larger exponents. Thanks, Rincewind, for starting this thread at this time.
Looks like you need to use more memory to make it just worthwhile. My benchmark used over 500 MB of memory; yours used only 256KB (I presume you meant MB).

 2010-02-06, 18:49 #8 philmoore     "Phil" Sep 2002 Tracktown, U.S.A. 3×373 Posts I don't think that memory was the critical issue, as my B1 and B2 bounds were actually larger than yours. The only thing I changed was the number of prp tests saved. Last fiddled with by philmoore on 2010-02-06 at 18:50 Reason: Yes, MB!
 2010-02-06, 20:41 #9 Mini-Geek Account Deleted     "Tim Sorbera" Aug 2006 San Antonio, TX USA 17×251 Posts Assuming 50 bits of sieving and (first column) tests saved, Prime95 starts wanting to do P-1 at n = (second column) million (checking every n=500K).
Code:
tests saved  n (millions)
1.0          11.0
1.5           7.5
2.0           5.5
Or, put another way: assuming 50 bits of sieving and 1 test saved, Prime95 doesn't think you should do P-1 until 11M. With 1.5 tests saved, not until 7.5M. With 2 tests saved, it thinks you should've started P-1 at 5.5M. So whether P-1 should be done now depends entirely on whether we expect to double-check these numbers. Past 11M it should be done either way. Last fiddled with by Mini-Geek on 2010-02-06 at 20:42
 2010-02-06, 20:43 #10 philmoore     "Phil" Sep 2002 Tracktown, U.S.A. 3×373 Posts I did a few more experiments, setting the number of prp tests saved to 1.7. My reasoning here is that if and when we get a systematic double-check going, it will probably lag behind first-time checking by a factor of 2 to 3, which gives us about a 20-25% chance of finding the last probable prime before the particular exponent gets double-checked. Under those circumstances, Prime95 decides that P-1 factoring becomes worthwhile somewhere between 7.25 million and 7.5 million. That assumes, of course, that the sieving depth is still 50 bits; it could well be around 51 bits by then. (It is currently about 50.1; I think the program will accept a floating-point value.) Double-checking is currently almost complete to 1.25M (for 40291 only), and I am continuing on the range 1.25-1.40M. We have done spot checking in higher ranges, but we'll probably continue to discuss what sort of combination of systematic double-checking and spot checking makes the most sense.
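One way to read the "1.7 prp tests saved" figure (my interpretation, not Prime95's documented model): a factor always saves the first-time test, and saves the double-check too unless the last probable prime turns up before that exponent is double-checked. A minimal sketch:

```python
# Effective tests saved per factor found, given the probability that the
# last probable prime is found before this exponent's double-check.
# The 20-25% figure is philmoore's estimate from the post above.

def effective_tests_saved(p_prime_before_doublecheck):
    return 1 + (1 - p_prime_before_doublecheck)

print(effective_tests_saved(0.25))  # 1.75
print(effective_tests_saved(0.20))  # 1.8
```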
 2010-02-08, 18:46 #11 philmoore     "Phil" Sep 2002 Tracktown, U.S.A. 3×373 Posts I am doing an experiment: some P-1 factoring starting at 6.2 million with bounds B1=25000 and B2=200000. I will remove candidates from the prp work files as I find factors. Worst case, this doesn't pay off until (and if) we double-check, but at least we will eliminate some prp tests at a time when the sieving has slowed down. At any rate, I will be interested in the data, and will post the factors as I find them. It will be interesting to see how many of these factors would have been found with a little more sieving, and how many would not have been found for quite a while.
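The bookkeeping described here (pulling factored candidates out of the prp work files) could be scripted. A hypothetical sketch: the file name and the assumption that candidates appear as Prime95-style "PRP=k,b,n,c" lines are mine, not necessarily the project's actual setup.

```python
# Hypothetical helper: drop work lines for exponents where a P-1 factor
# was found, rewriting the work file in place.

def remove_factored(worktodo_path, factored_exponents):
    with open(worktodo_path) as f:
        lines = f.readlines()
    kept = []
    for line in lines:
        if line.startswith("PRP="):
            # field 3 of "PRP=1,2,6180184,40291" is the exponent n
            n = int(line.split("=", 1)[1].split(",")[2])
            if n in factored_exponents:
                continue  # factor known: no prp test needed
        kept.append(line)
    with open(worktodo_path, "w") as f:
        f.writelines(kept)
```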
