#23

1976 Toyota Corona years forever!
"Wayne"
Nov 2006
Saskatchewan, Canada
3×5²×71 Posts

If you are a programmer, you should consider getting the source and coding it yourself so it can process large batches of potential P-1 assignments and calculate the expected success rate and the GHz-days of effort. If you choose a few different percentages vs. effort and graph them, you will note that at some point it takes a LOT more work for a small percentage improvement. I try to find a good balance between expected factors found and the GHz-days of effort to find them.

In my experience, multiple passes with increasing bounds are inefficient. If you only use B1, then each successive Stage 1 run will continue where it left off (assuming you save the work files). However, any time you increase B2, it runs the entire Stage 2 from the start, whether or not you change B1. I prefer to calculate the bounds that get the number of factors I want and use those bounds in one pass, starting with the exponents with the best odds.
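
A minimal sketch of the "success rate vs. effort" sweep described above, for Stage 1 only. The model is a rough approximation of mine, not Wayne's actual code: a factor q = 2kp+1 of M_p is assumed found iff (q−1)/(2p) is B1-smooth (via Dickman's rho), a factor of about b bits is assumed to exist with probability ~1/b, and the exponent and 64-bit trial-factoring level are illustrative.

```python
from math import log2

def dickman_rho(u, h=0.001):
    """Euler integration of u*rho'(u) = -rho(u-1), with rho = 1 on [0, 1]."""
    if u <= 1.0:
        return 1.0
    n = max(int(u / h), 2)
    h = u / n
    vals = [1.0]
    for i in range(1, n + 1):
        x = i * h
        if x <= 1.0:
            vals.append(1.0)
            continue
        t = (x - 1.0) / h                    # linearly interpolate rho(x - 1)
        j = int(t)
        f = t - j
        upper = vals[j + 1] if j + 1 < len(vals) else vals[j]
        rho_xm1 = vals[j] + f * (upper - vals[j])
        vals.append(vals[-1] - h * rho_xm1 / x)
    return max(vals[-1], 0.0)

def stage1_prob(p, tf_bits, B1, max_bits=80):
    """Very rough P(Stage 1 success) for M_p, already trial-factored to tf_bits."""
    total, b = 0.0, tf_bits + 0.5
    while b < max_bits:
        u = (b - log2(2 * p)) / log2(B1)     # bits of (q-1)/(2p) needing B1-smoothness
        total += dickman_rho(u) / b          # ~1/b chance of a factor near b bits
        b += 1.0
    return total

p, tf = 2693501, 64                          # illustrative exponent and TF level
for B1 in (250_000, 500_000, 1_000_000, 2_000_000, 4_000_000):
    # Stage 1 effort grows roughly linearly in B1 (~1.44*B1 squarings), so the
    # flattening of this column against doubling B1 is the "knee" in the graph.
    print(f"B1={B1:>9,}  P(success) ~ {stage1_prob(p, tf, B1):.4f}")
```

Doubling B1 roughly doubles the Stage 1 cost but, as the printout shows, adds progressively less probability, which is the diminishing-returns point made above.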

#24

Jul 2003
Behind BB
2·7·11·13 Posts

Thanks, Wayne. That confirms what I was beginning to suspect about P-1 strategy. I think 2.6M will be very close to the 1999-unfactored goal after my initial pass is complete. If necessary, finishing that range with a second pass shouldn't be too difficult. I should be finished with 2.6M by the end of 2019.

2.8M will take a little longer with my measly i5, but I'll keep plugging away at it. The next pass over that range will be a little more strategic.

#25

1976 Toyota Corona years forever!
"Wayne"
Nov 2006
Saskatchewan, Canada
3×5²×71 Posts


#26

"Bob Silverman"
Nov 2003
North of Boston
5·17·89 Posts

You need to compute the conditional probability of finding a factor, given that the method failed with the prior given B1. Extending P-1 just isn't worth the effort. It would be much better to run ECM instead.

#27

Jul 2003
Behind BB
2·7·11·13 Posts

Dr. Silverman, thank you for the recommendation. I considered running ECM curves.

Here is an exponent I considered: M2693501. P-1 has already been performed on this exponent with bounds B1 = 670000 and B2 = 6700000. The probability of finding a factor with those bounds was 0.03856.

If I complete another P-1 calculation, with bounds B1 = 2000000, B2 = 50000000, the probability of finding a factor (given the current trial-factoring limit, but assuming no prior P-1) will be 0.06294. I used the probability calculator here. I estimate the conditional probability using subtraction (how bad is this?): 0.06294 − 0.03856 = 0.02438. The P-1 calculation will take 21 minutes on my machine, and I expect to find a factor for every 41 (1/0.02438) exponents that I test with similar bounds. So, I expect to find one factor every 14.35 hours via P-1.

Each ECM curve for M2693501, at B1 = 50000 and B2 = 5000000, takes about 5 minutes on my machine. Completing 214 such curves will take about 18 hours. Would completing the 214 ECM curves at B1 = 50000 be comparable to trial factoring to 83 bits? That's how I came up with an estimate for the probability of finding a factor via ECM: 0.1139. With these crude estimates of the probability, I anticipate finding a factor via ECM in this range about once a week.

Experience has not borne out the P-1 estimate above of one factor every 14 hours; it has been more like one factor every day. I note that there has been some ECM already performed on these exponents and I'm using very crude estimates. Please correct me if I'm doing anything wildly incorrect. However, at this point it appears that additional P-1 in the 2.6M range is more efficient at finding factors than ECM.
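
A quick script reproducing the arithmetic above (the probabilities and timings are the ones reported in this post; the subtraction estimate is kept as stated, with the corrected formula appearing in the next two posts):

```python
# Back-of-envelope P-1 vs. ECM rate comparison for M2693501.
p1_prior, p1_new = 0.03856, 0.06294    # reported P-1 probabilities
p1_minutes = 21                        # minutes per P-1 run

p_sub = p1_new - p1_prior              # subtraction estimate: 0.02438
runs_per_factor = 1 / p_sub            # ~41 exponents per factor
p1_hours = runs_per_factor * p1_minutes / 60
print(f"P-1: one factor per ~{runs_per_factor:.0f} runs, ~{p1_hours:.2f} h/factor")

curves, ecm_minutes, p_ecm = 214, 5, 0.1139
ecm_hours = (curves * ecm_minutes / 60) / p_ecm
print(f"ECM: ~{ecm_hours:.0f} h/factor at these estimates")  # ~157 h, about a week
```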

#28

Just call me Henry
"David"
Sep 2007
Liverpool (GMT/BST)
3·23·89 Posts

I believe that the conditional probability for the P-1 rerun should be (P(second) − P(first))/(1 − P(first)). This assumes that the search space for the first run is completely contained within the second run (and is also quite possibly messed up by TF considerations); the remaining search space is therefore smaller for the second run. This gives a value of 0.03968 for your example, 2693501.

These numbers do not take into account any ECM that has already been done. I am not sure what the average amount of ECM done so far on numbers this size is, but 2693501 has 33 curves done. This is 33/280 of a t25, and as such there is a 1 − e^(−33/280) ≈ 11.1% chance of any 25-digit factor having been found already by ECM. This will be higher for smaller factors, of course. Please feel free to correct my hastily done maths on this.

It would be interesting to make a calculator that took all known work into account, as well as the expected size of the factors, and showed graphically the probability of any new work finding a factor of x digits. I might look into doing this if I can work out the formulas for the P-1 and ECM probabilities.
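
A minimal check of the two expressions above (cond_prob is exactly the formula just given; 280 curves for a full t25 is as stated in the post):

```python
from math import exp

def cond_prob(p_first, p_second):
    """P(second run succeeds | first run failed), assuming the first
    run's search space is contained in the second run's."""
    return (p_second - p_first) / (1 - p_first)

# Chance that the 33 curves already done (of ~280 for a full t25)
# would have found a 25-digit factor, had one been there:
print(f"prior ECM coverage at 25 digits: {1 - exp(-33 / 280):.1%}")  # ~11.1%
```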

#29

"Bob Silverman"
Nov 2003
North of Boston
5·17·89 Posts

You are comparing the wrong things. I will accept your probabilities as correct. [Your conditional probability computation is not correct, however: you need to divide by (1 − P1).]

Running P-1 with limits B1, B2 is the same as running a single elliptic curve [although the computations are simpler and hence faster]. (One does need to adjust the size of the candidate by log(exponent), because P-1 is always divisible by the exponent.) Running a second elliptic curve with the same limits will double the probability of success. Increasing B1, B2 for P-1 does not double the probability of success unless B1, B2 are greatly increased.

You are comparing running P-1 with B1, B2 against running another ECM curve with much SMALLER limits. And running ECM gives multiple independent chances; P-1 does not.

Read my joint paper with Sam Wagstaff: A Practical Analysis of ECM.
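
A one-line gloss on the doubling claim (my rendering, not from the post): ECM curves at fixed limits succeed independently, each with some small probability $\pi$, so $t$ curves succeed with probability

$$1 - (1 - \pi)^t \approx t\pi \qquad (\pi \ll 1),$$

i.e. a second curve roughly doubles the chance, whereas repeating P-1 at the same limits adds nothing; its bounds must be raised substantially to gain anything.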

#30

"Curtis"
Feb 2005
Riverside, CA
2×2,927 Posts

Masser is comparing the rate of found factors by P-1 (at large bounds, as stated) to the rate by ECM (at small bounds). If that's the wrong thing to compare, what is the right thing?

#31

"Bob Silverman"
Nov 2003
North of Boston
1D8D₁₆ Posts

Suppose we have the choice of (say) running P-1 again with limits kB1, kB2, or running an elliptic curve with limits B1, B2 [for some k]. The latter choice will double the probability of success; the former will not. The former will take less time, but the latter will give a greater likelihood of success. The former will allow more numbers to be tested in a fixed amount of time. If we choose k so as to spend the same amount of time extending P-1 as running ECM, the latter should be more efficient.

OTOH, if one places a BOUND on the time to be spent, i.e. one wants to spend time T either running more curves or extending B1, B2 for P-1, it may be more effective to extend P-1, because (as already noted) one must lower B1, B2 for the elliptic curves, since point addition on curves is much more expensive than simple exponentiation.

I have not computed the effect that attempting 2^p − 1 has on P-1. The form of the numbers gives that P-1 is always divisible by p. This effectively reduces the size of the numbers that "must be smooth" by log(p). This may give a sufficient advantage to make P-1 more effective when bounding the run time. Note that this advantage disappears if one were attempting to factor numbers of no particular form.

If one is willing to spend an arbitrary amount of time on each candidate, it is clear that running multiple elliptic curves will be far more effective than raising the P-1 bounds, especially as the factors get larger.
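
A tiny illustration of the special-form advantage (a standard fact; the toy example is mine): every prime factor q of 2^p − 1, p prime, satisfies q ≡ 1 (mod 2p), so P-1 only needs the cofactor (q − 1)/(2p) to be smooth.

```python
# Factors of M11 = 2047 = 23 * 89: both are ≡ 1 (mod 2p), so the part
# that P-1 actually needs smooth is (q-1)/(2p), not all of q-1.
p = 11
M = 2**p - 1
for q in (23, 89):
    assert M % q == 0 and (q - 1) % (2 * p) == 0
    print(f"q = {q}: (q-1)/(2p) = {(q - 1) // (2 * p)}")  # prints 1 and 4
```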

#32

Jul 2003
Behind BB
3722₈ Posts

Thank you all for the feedback. I have downloaded the Silverman-Wagstaff paper and will read it, but I will probably need to brush up on some of the background before I fully comprehend it. Thanks again.

#33

Jul 2003
Behind BB
2·7·11·13 Posts

New (to me) result feedback:
Splitting composite factor 72115741492408141371057158919540730748664584042639 into:
* 57330969015562354090032601
* 1257884573219624043651239

https://www.mersenne.org/report_expo...2645329&full=1
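
A quick self-check of a split like this (the exponent 2645329 is read off the truncated URL above, so treat that as an assumption): the two parts must multiply back to the composite, and every factor of M_p must be ≡ 1 (mod 2p).

```python
p = 2645329                      # per the report URL above
c = 72115741492408141371057158919540730748664584042639
f1 = 57330969015562354090032601
f2 = 1257884573219624043651239
print(f1 * f2 == c)                                   # product reconstructs composite
print(all((q - 1) % (2 * p) == 0 for q in (f1, f2)))  # Mersenne-factor form
```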