#34
Jun 2003
10010010001₂ Posts
fivemack's and 10metreh's explanations of P-1 omit one significant fact. For this reason, and because explaining the same thing in several different ways can be helpful, I'll offer my own.
P-1

Recall that every factor of Mp is of the form 2*k*p+1 for some k. The P-1 algorithm will find a prime factor f if k is (B1,B2)-smooth. What that means is that P-1 will find a prime factor f of Mp during stage 1 if the highest power of every prime in the factorisation of k is at most B1. It will find a factor in stage 2 if the highest power of exactly one prime in the factorisation of k lies between B1 and B2, and all other highest powers of primes are at most B1.

That assumes a (theoretically) optimal implementation of stage 2. For practical reasons, prime95's implementation in the past allowed only a single unexponentiated prime between B1 and B2. I've not looked at the code recently, so I don't know if this is still the case.

If there are two (or more) prime factors of Mp that meet the above smoothness criteria, then the P-1 algorithm will return their product. (Additionally, there is an extension to the algorithm which results in it occasionally throwing up prime factors which don't meet the above smoothness criteria, but which are rather hard to characterise.)

ECM

I don't understand how ECM works. What follows is what I have gleaned from various explanations.

Recall that P-1 will find a prime factor f of Mp if k = (f-1)/(2*p) is (B1,B2)-smooth. Here we are making use of the special form 2*k*p+1 of the factors of Mp. In the general case of factoring N, whose factors have no special form, the P-1 algorithm will find a prime factor f if (f-1)/2 is smooth. For conceptual simplicity, let's also forget about that 2: P-1 will find f if f-1 is smooth.

ECM will find a factor f if f+b is smooth, for some small b which depends upon the curve and may be positive or negative. ECM is therefore, in practical terms, a generalisation of P-1. As has already been noted, the latter is equivalent to a single curve of the former with b = -1, run to the same bounds, except that P-1 is a faster algorithm, and for Mersenne numbers Mp it gets the factor 2*p in f-1 thrown in for free.

Last fiddled with by Mr. P-1 on 2009-04-03 at 11:25
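To make the stage 1 condition concrete, here is a minimal Python sketch of P-1 stage 1. This is my own illustration, not prime95's implementation; the function names `primes_up_to` and `pminus1_stage1` are invented for the example:

```python
from math import gcd, isqrt

def primes_up_to(n):
    """Sieve of Eratosthenes: all primes <= n."""
    sieve = bytearray([1]) * (n + 1)
    sieve[0:2] = b"\x00\x00"
    for p in range(2, isqrt(n) + 1):
        if sieve[p]:
            sieve[p * p :: p] = bytearray(len(range(p * p, n + 1, p)))
    return [i for i in range(2, n + 1) if sieve[i]]

def pminus1_stage1(N, B1, base=3):
    """Raise `base` to every prime power <= B1, working mod N.  This finds
    a prime factor f of N whenever f-1 divides the accumulated exponent,
    i.e. whenever every prime power dividing f-1 is at most B1."""
    a = base % N
    for q in primes_up_to(B1):
        qe = q
        while qe * q <= B1:      # highest power of q not exceeding B1
            qe *= q
        a = pow(a, qe, N)
    g = gcd(a - 1, N)
    return g if 1 < g < N else None

# Cole's factor 193707721 of M67: k = (f-1)/(2*67) = 2^2 * 3^3 * 5 * 2677,
# so B1 = 3000 suffices for stage 1 alone.
print(pminus1_stage1(2**67 - 1, 3000))    # 193707721
```

Conceptually, stage 2 then takes the stage 1 residue a and tests gcd(a^m - 1, N) for each allowed target m in (B1, B2].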
#35
Noodles
"Mr. Tuch"
Dec 2007
Chennai, India
3·419 Posts
Quote:
The largest prime factor of the group order should be less than B2, and the other prime factors should be less than B1. For the p+1 algorithm the group order is not always p+1; it can be p-1 as well. More precisely, the group order is p+1 when the seed $P_0$ has $P_0^2 - 4$ a quadratic non-residue (mod p).

Elliptic Curves over Finite Fields

Consider the affine elliptic curve $y^2 = x^3 + ax + b$, in which one point is assigned to the point at infinity O, the identity element. The projective form is $Y^2 Z = X^3 + aXZ^2 + bZ^3$.

We define an addition operation that makes the set of points on the elliptic curve an abelian group: for $P_1 = (x_1, y_1)$ and $P_2 = (x_2, y_2)$, $P_1 + P_2 = (x_3, y_3)$ where $x_3 = \lambda^2 - x_1 - x_2$ and $y_3 = \lambda(x_1 - x_3) - y_1$, where the slope is $\lambda = (y_2 - y_1)/(x_2 - x_1)$ for distinct points and $\lambda = (3x_1^2 + a)/(2y_1)$ for doubling.

We define the group order #E to be the number of points on the elliptic curve. Since it is an abelian group, for any point P we will have [#E]P = O, where [k]P means the point P added to itself k times. The factor p is returned when #E is (B1,B2)-smooth. It is the Z coordinate that determines the point at infinity, since for the point at infinity the Z coordinate is 0. We succeed if [#E]P = O (mod p) but not [#E]P = O (mod N); we then calculate p = GCD(Z, N).

So we use a form of the elliptic curve that works with the Z coordinates, namely the Montgomery form (X:Z), in which the Y coordinate does not matter: we use the Montgomery curve $By^2 = x^3 + Ax^2 + x$, not the above Weierstrass form. For the Montgomery form the group order is always a multiple of 4. Suyama found a parameterization for which the group order is always a multiple of 12. We take a value of sigma, $\sigma \geq 6$, and define

$u = \sigma^2 - 5$, $v = 4\sigma$, $A = \dfrac{(v-u)^3 (3u+v)}{4u^3 v} - 2$.

Then the group order is always a multiple of 12, and we take the starting point (X:Z) to be $(u^3 : v^3)$.

For Montgomery coordinates we have to perform addition without the use of Y coordinates. (Clearly, for a given X coordinate there are 2 distinct values of the Y coordinate, whose sum is p.) Let $P_1 = (X_1 : Z_1)$ and $P_2 = (X_2 : Z_2)$, with known difference $P_1 - P_2 = (X_- : Z_-)$. If we define

$u = (X_1 - Z_1)(X_2 + Z_2)$, $v = (X_1 + Z_1)(X_2 - Z_2)$,

then we can calculate

$X_{P_1 + P_2} = Z_- (u + v)^2$, $Z_{P_1 + P_2} = X_- (u - v)^2$.

Note that for Suyama's parameterization we have A = 1 and B = 0, so point addition needs a smaller number of field multiplications and the elliptic curve addition operation stays homogeneous.

For doubling a point $P = (X : Z)$, let $s = (X + Z)^2$ and $d = (X - Z)^2$, so that $s - d = 4XZ$. We can then compute

$X_{2P} = s \cdot d$, $Z_{2P} = (s - d)\left(d + \dfrac{A+2}{4}(s - d)\right)$.

Last fiddled with by Raman on 2009-04-04 at 08:51
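To connect the formulas above to working code, here is a short Python sketch of the (X:Z) arithmetic: the textbook Montgomery formulas with invented function names, not GMP-ECM's or prime95's actual code:

```python
def xz_add(P, Q, D, N):
    """Differential addition on a Montgomery curve: given P, Q and their
    known difference D = P - Q (all as (X:Z) pairs mod N), return P + Q.
    No Y coordinates are needed."""
    (X1, Z1), (X2, Z2), (Xd, Zd) = P, Q, D
    u = (X1 - Z1) * (X2 + Z2) % N
    v = (X1 + Z1) * (X2 - Z2) % N
    return (Zd * pow(u + v, 2, N) % N, Xd * pow(u - v, 2, N) % N)

def xz_double(P, a24, N):
    """Double a point; a24 = (A+2)/4 mod N for By^2 = x^3 + Ax^2 + x."""
    X, Z = P
    s = pow(X + Z, 2, N)        # (X+Z)^2
    d = pow(X - Z, 2, N)        # (X-Z)^2
    t = (s - d) % N             # 4XZ
    return (s * d % N, t * (d + a24 * t) % N)

def xz_mul(k, P, a24, N):
    """Montgomery ladder for [k]P, k >= 1: the pair (R0, R1) always keeps
    R1 - R0 = P, so every addition can use xz_add with difference P."""
    R0, R1 = P, xz_double(P, a24, N)
    for bit in bin(k)[3:]:      # binary digits of k after the leading 1
        if bit == '1':
            R0, R1 = xz_add(R0, R1, P, N), xz_double(R1, a24, N)
        else:
            R0, R1 = xz_double(R0, a24, N), xz_add(R0, R1, P, N)
    return R0
```

ECM stage 1 then computes [E]P with xz_mul, where E is the product of all prime powers up to B1, and takes GCD(Z, N) of the result.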
#36
Jun 2003
7×167 Posts
Quote:
My description, however, was from the perspective of characterising the factors found (or findable) by the method. In that respect it is surely sufficient to note that "small b" covers a wide enough range that many more factors are potentially findable with ECM than with P-1.

Quote:
The above is, at least in principle, non-optimal. The idea behind stage 2 is that there may be a factor f which is "just out of reach" of stage 1, meaning that k = (f-1)/(2*p) for Mersenne numbers Mp, or k = (f-1)/2 for general N, has one prime factor q with B1 < q <= B2. But there's another way f could be "just out of reach": k could have a prime factor q < B1 raised to a power higher than was included in stage 1. This is, in fact, more likely for small primes below B2/B1 than it is for the prime at B2. So if the latter is worth including, then so are the former. In principle we could use the same stage 2 algorithm to include them as well. In practice we probably don't, because the number of primes less than B2/B1 is usually so small that it's not worth the additional programming complexity, and even if it were, the overhead of setting up a stage 2 calculation just for them is probably not worth it. It might, however, be worth coding stage 1 to include these slightly higher prime powers.
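To make the two "just out of reach" cases concrete, here is a small diagnostic sketch in Python. The name classify_k is invented, sympy's factorint is borrowed purely for the illustration, and the model is the idealised one above, in which stage 2 supplies at most one extra multiplier:

```python
from sympy import factorint

def classify_k(k, B1, B2):
    """Stage 1 contributes, for each prime q, the largest power q^e0 <= B1.
    Stage 2 can supply at most one extra multiplication, so a leftover is
    recoverable only if it is a single extra power of a single prime and
    the full power q^(e0+1) lands in (B1, B2]."""
    reasons = []
    for q, e in factorint(k).items():
        e0, cover = 0, 1
        while cover * q <= B1:          # stage 1 includes q^e0
            cover *= q
            e0 += 1
        if e <= e0:
            continue                    # fully covered by stage 1
        if e == e0 + 1 and cover * q <= B2:
            if e0 == 0:
                reasons.append(f"prime {q} in (B1, B2]: classic stage 2")
            else:
                reasons.append(f"{q}^{e} in (B1, B2]: the prime-power case")
        else:
            reasons.append(f"{q}^{e} is out of reach of these bounds")
    return reasons

# With B1 = 1000, B2 = 5000, stage 1 includes 2^9 = 512 and all of 3^6.
# k = 2^10 * 3 needs one more 2: the full power 2^10 = 1024 is in (B1, B2],
# the prime-power case.  k = 2 * 3 * 1009 has the classic lone prime 1009.
print(classify_k(2**10 * 3, 1000, 5000))
print(classify_k(2 * 3 * 1009, 1000, 5000))
```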
#37
Sep 2008
Masontown, PA
38₁₀ Posts
If I may "revive" this thread after a 3-month lapse.
First, thank you very much to Mr. P-1 for explaining ECM and P-1 factoring in ways that I can almost understand. Unlike you, however, I could not prove anything about either method. I can read, re-read, re-re-read, etc., your explanation and the wiki's and any others I have found online, and I will probably never understand it. I have a degree in Mathematics, but we never came close to anything along these lines; besides, I went to a small, private, liberal-arts college where math and science weren't even top-10 priorities. I thank Raman, too, for attempting to guide me further, as well as Alex, though undoubtedly both of your explanations were as far above my head as Hubble.

Second, I have read Dr. Silverman's paper a dozen times; obviously, I have no hope of understanding his brilliance. I appreciate the tables contained within it, though, and perhaps can wrap some part of my brain around those numbers, though I do not believe I could figure out what the B1 is following 260M, 850M, and 2900M, not that such a B1 would ever be practical.

Back to the thread:

Quote:

1. Is there a point at which SNFS testing will be begun? CAN it be begun?

2. Will the PrimeNet ECM progress page and/or Prime95 be updated to handle t65 testing? I have run B1s of 850M and 2900M on M1061 with success, though more than 50% of the time Prime95 seems unable to start an ECM test any larger than 260M. By this I mean it will appear to start ECM at 850M or 2900M, but the read-out never updates after stating it is using FFT length 48, and it does not finish even 1 curve after 1 day or even 1 week; yet if I change the worktodo back to 260M, it works fine.

3. I am certain this has already been answered, and that I present myself for further trouble by asking, but just to be certain: is it "worth it" to ECM test M1061 at 850M or 2900M, or, because M1061 can have a factor of up to 160 digits in length, are the chances of ECM finding a factor so small as to make ECM testing beyond 260M "pointless"?

4. I see that M1061 has been P-1'd to B1 = 1e11 and B2 = 1e12 (hopefully I have that correct). Again, fully aware that this question will anger some, but is any further P-1 testing a "waste" at this point? Is there an "optimal" B1 and B2 for M1061? Should B2 be around B1²?

5. Somewhere I've seen a "2/9" rule, or something like that? Would someone be so kind as to educate me as to what that is, please?

I thank you all for your teachings, help, and patience. I welcome suggestions of any "remedial" math books I can read so as to possibly understand more and stop bothering the forum.
#38
Nov 2008
2×3³×43 Posts
Quote:
Quote:
Quote:
Quote:
Quote:
#39
Tribal Bullet
Oct 2004
5×23×31 Posts
Nobody has any plans to do so that I know of, and factoring M1061 using SNFS will not push any sort of computational boundary (because M1039 has been factored already). What you should be asking is whether there will be any way to finish the job if it is started. The answer is no, unless you ask the same people who factored M1039, or wait for publicly available software to advance enough to succeed.
Last fiddled with by jasonp on 2009-06-30 at 15:40 Reason: 10metreh explained the 2/9 rule already |
#40
"Bob Silverman"
Nov 2003
North of Boston
1110110001101₂ Posts
Quote:
My paper uses nothing beyond undergraduate-level mathematics. It contains no "brilliance". Didn't you claim a degree in math?

The paper does contain a novel idea: that of using the information gleaned from ECM failures, combined with Bayesian statistics, to estimate how to change ECM parameters for future trials. But the math behind this is elementary.

Quote:
"figure out what the B1 is following 260M, 850M, and 2900M"... Huh??? This is gibberish. Your inability to discuss mathematics in a cogent manner makes me doubt your claim of having a degree in math. And you claim "not that such a B1 would ever be practical." Is vacuous. Clearly, at some point in the future such values of B1 will become practical. Quote:
Quote:
Testing to t65 will occur for a small, select set of numbers prior to running SNFS; general testing to t65 is out of reach with today's computer speeds.

Quote:
You claim to have a math degree, yet resort to hand waving and poorly defined gibberish, i.e. "is it worth it". Aren't you capable of specifying what it means to "be worth it"? Can't you formulate the question in a mathematically meaningful way? Can't you define the problem?

My paper discusses exactly when one should switch from ECM to a deterministic general method. Didn't you read it? The analysis involves only a small amount of Bayesian statistics: one switches when the time to factor the number WITH CERTAINTY is less than the expected time to factor with ECM. The latter depends on a Bayesian estimate of the size of the factor one hopes to extract with ECM, and this estimate is derived from information about the FAILURE of ECM trials, combined with a prior based on the theoretical distribution of the smallest prime factor.

Note, however, that the analysis in my paper does not assume any constraints on computer hardware. Such constraints always exist in practice. Running ECM to e.g. B1 = 850M may not be possible because of lack of memory.

Quote:
Exactly what are you optimizing? Can't you even formulate the question? Read my paper! It contains only undergraduate-level mathematics.
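For readers who want the flavour of that Bayesian update without reading the paper, here is a toy Python sketch; the digit buckets, prior, and success probabilities are made-up placeholders, not values from the paper:

```python
def update_after_ecm_failure(prior, p_find):
    """Bayes' rule: prior[d] = P(smallest factor has d digits); p_find[d] =
    P(the ECM work already done would have found a d-digit factor).
    A failed run shifts belief toward larger factors."""
    posterior = {d: prior[d] * (1.0 - p_find[d]) for d in prior}
    z = sum(posterior.values())
    return {d: w / z for d, w in posterior.items()}

# Toy numbers only: after heavy ECM with no factor, belief mass moves off
# the 45- and 50-digit buckets toward the larger ones.
prior  = {45: 0.3, 50: 0.3, 55: 0.2, 60: 0.2}
p_find = {45: 0.99, 50: 0.8, 55: 0.2, 60: 0.02}
post = update_after_ecm_failure(prior, p_find)
# Decision rule sketched above: switch to NFS once the expected remaining
# ECM time, weighted by `post`, exceeds the (certain) NFS time.
print(post)
```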
#41
Sep 2008
Masontown, PA
2·19 Posts
I prefer the way they put it in Billy Madison: http://www.youtube.com/watch?v=fEkWH8DB7b0
A simple "shut up and go away" would have saved you time.

I have contributed nearly 20,000 GHz-days to GIMPS in the nearly 10 years I have belonged, and who knows how much to the MM61 effort, because I love math and numbers and wanted to help the field in any way I could. I asked questions because I couldn't find or understand the answers on my own, because I am interested in learning both the answers and the methods for finding the answers, and because I knew there were many smarter and more knowledgeable than I in this forum who could lead me to resolution far more quickly and easily than I could on my own. I was just trying to learn and understand, and to possibly help others who also wanted to learn and understand.

10metreh, I thank you for your time, patience, and explanations. I hope I have neither offended nor bothered you in any way. I appreciate your attitude, civility, and professionalism. You are a credit to this forum and the field of mathematics. I wish you well in your endeavors, as I do to all others in GIMPS.

Thanks for nearly 10 good years. Good luck.
#42
Nov 2008
2×3³×43 Posts
Quote:
WARNING: If your name is Bob Silverman, do NOT read this spoiler.

So there is one bad thing about having mod powers.

Last fiddled with by 10metreh on 2009-07-01 at 06:31
#43
Just call me Henry
"David"
Sep 2007
Liverpool (GMT/BST)
6141₁₀ Posts
Quote:
#44
Nov 2008
2·3³·43 Posts
When I try to put a mod on my ignore list, the forum won't let me.
Last fiddled with by 10metreh on 2009-07-01 at 15:34 |