 mersenneforum.org > Math Proof for the efficient algorithm used in trial factoring?
2019-04-22, 20:07  #1
neomacdev
Apr 2019

Hi, I am well acquainted with the "very efficient algorithm for determining if a number divides 2^P-1" as described at the link below.

https://www.mersenne.org/various/mat...rial_factoring

I've written my own implementation and have no doubt that it works as described. However, it is not obvious to me where this approach comes from, and I am curious about how to PROVE that it works. Specifically: why should examining the exponent bit by bit, and performing repeated squarings, conditional multiplications by 2, and reductions mod the candidate factor, result in a value of 1 only when the candidate factor is indeed a divisor?

Note, I've tried searching at length for an explanation of why this works, but have not found anything. Perhaps that is because there doesn't appear to be a proper name for this approach, so my description isn't matching relevant results. If anyone can share the proper name for this approach, a link or other reference to a proof, or any explanation whatsoever that would allow me to derive a formal proof, I would appreciate it. Thank you.

2019-04-22, 20:10  #2
preda
"Mihai Preda"
Apr 2015

Quote:
Originally Posted by neomacdev: "... it is not obvious to me where this approach comes from. And I am curious about how to PROVE that this approach works. ..."
I think the name is "left to right binary exponentiation": https://en.wikipedia.org/wiki/Modular_exponentiation

2019-04-23, 06:33  #3
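A minimal sketch of left-to-right binary exponentiation in Python (the function name and structure are my own illustration, not code from any GIMPS program):

```python
def pow_mod_l2r(base, exp, mod):
    """Left-to-right binary modular exponentiation.

    Scan the exponent's bits from the most significant end: square
    the accumulator for every bit, and multiply by the base when the
    bit is 1, reducing mod `mod` at each step.
    """
    result = 1
    for bit in bin(exp)[2:]:              # exponent bits, MSB first
        result = (result * result) % mod  # squaring doubles the exponent so far
        if bit == '1':
            result = (result * base) % mod  # append a 1-bit to the exponent
    return result

# Matches Python's built-in three-argument pow:
# pow_mod_l2r(2, 23, 47) == pow(2, 23, 47) == 1
```

The final reduction per step keeps every intermediate value below mod^2, which is what makes the method practical for huge exponents.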
kriesel
"TF79LL86GIMPS96gpu17"
Mar 2017
US midwest

Quote:
 Originally Posted by preda I think the name is "left to right binary exponentiation" https://en.wikipedia.org/wiki/Modular_exponentiation
Added to #12 of "Concepts in GIMPS trial factoring", thanks. https://www.mersenneforum.org/showpo...23&postcount=6

2019-04-23, 06:45  #4
kriesel
"TF79LL86GIMPS96gpu17"
Mar 2017
US midwest

Quote:
Originally Posted by neomacdev: "... why the approach of examining the exponent bit by bit, and conducting the repeated squaring, conditional multiplications by 2, and moding with the candidate factor should result in a value of 1 only when the candidate factor is indeed a divisor."
The example was 2^23 ≡ 1 (mod 47). Subtract 1 from both sides: 2^23 - 1 ≡ 0 (mod 47).
1) The method, correctly implemented, computes 2^p mod f.
2) If and only if 2^p ≡ 1 (mod f), then 2^p - 1 ≡ 0 (mod f), meaning the remainder is zero and f divides 2^p - 1; f is a factor of 2^p - 1.
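The two steps above can be sketched as a tiny checker (an illustrative sketch, not code from any actual trial-factoring program):

```python
def is_factor(p, f):
    """Return True iff f divides the Mersenne number 2^p - 1.

    Computes 2^p mod f by left-to-right binary exponentiation:
    square for every bit of p, multiply by 2 when the bit is 1.
    f divides 2^p - 1 exactly when 2^p ≡ 1 (mod f).
    """
    r = 1
    for bit in bin(p)[2:]:
        r = (r * r) % f      # squaring doubles the running exponent
        if bit == '1':
            r = (r * 2) % f  # the conditional multiply-by-2 step
    return r == 1

# The thread's example: 47 divides 2^23 - 1 = 8388607,
# so is_factor(23, 47) is True.
```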

2019-04-27, 09:59  #5
LaurV
Romulan Interpreter
Jun 2011
Thailand

I think what the OP is looking for is this:

1 - To raise a number x to a power n means to multiply it by itself n times.

2 - When you write a number (our n in this case) in binary, each bit represents the highest power of 2 that fits into the "remaining" part of the number at that level. For example, to write 13 in binary, you look for the highest power of 2 that goes into 13, which is 8 = 2^3. Subtract 8 from 13 and you are left with 5. The largest power of 2 that goes into 5 is 4 = 2^2, and the remainder is 1; there is no 2 = 2^1 going into the remainder, so we have a 0 there, and at the end, 13 is 1101 in binary. This says you can get 13 by adding (in decimal) 8+4+1, or 2^3+2^2+2^0. Note that you skipped 2^1, as you have a zero there.

3 - When you square a number raised to some power, its power doubles. For example, if you square 9 = 3^2, the result is 81, which is 3^4.

4 - "Doubling" a number in base 2 means appending a zero to the end of its binary representation, the same way multiplying a number by ten in base ten means appending a zero. For example, the binary representation of 3 is 11, and if you append a zero, 110 binary is the representation of 6 decimal. The binary representation of 19 is 10011, and - you guessed it! - 100110 binary is decimal 38. This is like adding 1 to the power of each digit: instead of 1*2^4+1*2^1+1*2^0 we have 1*2^5+1*2^2+1*2^1, the same way as, in base ten, we would have 5040 (5*10^3+4*10^1) instead of 504 (5*10^2+4*10^0) after multiplying by ten (the base).

Therefore, you can get any number in binary by starting with 1, doubling it every time, and from time to time adding a 1. To get 23, you start with 1, double, double (you have 4), add 1, double (you have ten), add 1, double, add 1. That says the binary representation of 23 is 10111.
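The "start with 1, double, sometimes add 1" recipe above can be sketched like this (the function name is my own, for illustration):

```python
def from_bits(bits):
    """Rebuild n from its binary digit string by 'double, maybe add 1'.

    Each doubling appends a 0 in binary; adding 1 turns that
    appended bit into a 1, exactly as described above.
    """
    n = 0
    for b in bits:
        n = 2 * n        # "double": shift left, appending a 0
        if b == '1':
            n += 1       # "add 1": set the just-appended bit
    return n

# from_bits('1101') → 13, and from_bits('10111') → 23
```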
You put a 1 for every "double, add 1" and a zero for every "double" without "add 1". Hehe.

Now put it all together. To raise a number x to the power 13, you start with 1 and compute: 1*x, square and have (1*x)^2, multiply by x, you have x^3, square again, you have x^6, don't multiply by x, square again and have x^12, multiply by x and have x^13. So you solved it with a few squarings instead of doing 13 multiplications by x. Now, see the connection with the binary representation of 13? Squaring doubles the power.

For the method, we disregard the first 1 and start directly with x because it is better so. In fact, this is not necessary, and it makes the method a bit confusing for beginners. There is nothing wrong with the first bit, and you don't have to disregard anything. Just start with 1, and repeat: "square; if the bit is 1, multiply by x" for every bit. If you do this, you always have x after the first bit, which is why we "skip" it and start with x (in our case, x is 2). But there is nothing wrong with keeping it.

To make 25, you start with 1, double (2), add 1 (3), double (6), double (12), double (24), add 1 (25). That is because 25 is 11001 in binary. To raise x to the power 23 (which is 10111 in binary), you start with 1 and compute: (((((((1*x)^2)^2)*x)^2)*x)^2)*x. To raise x to the power 47 (which is 101111 in binary), you compute the expression above, then square once more and multiply by x (note that 47 is 2*23+1, so its binary representation just has an additional 1 at the end). And of course, (x^23)^2*x is what you need. You don't need to multiply x by itself 47 times; that would be silly...

Now, for the modular part, we take the modulus at every step to avoid working with huge numbers, and we can do that because (a*b) mod m = ((a mod m)*(b mod m)) mod m, so reducing after every squaring and multiplication gives the same final residue.
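The parallel between "double / add 1" on the exponent and "square / multiply by x" on the value can be traced side by side (a hypothetical helper, purely for illustration):

```python
def trace_pow(x, exp_bits):
    """Run the square-and-multiply walk, tracking the exponent too.

    Squaring the value doubles the running exponent; multiplying by
    x adds 1 to it, mirroring the binary construction of the exponent.
    """
    value, e = 1, 0
    for b in exp_bits:
        value, e = value * value, 2 * e      # square <-> double
        if b == '1':
            value, e = value * x, e + 1      # multiply by x <-> add 1
    return value, e

# trace_pow(3, '10111') returns (3**23, 23)
```

The pair returned shows that after consuming the bit string of n, the value is exactly x^n, which is the induction step a formal proof would formalize.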
Note that there may be "shorter ways" to get x^n, but you would need to store some intermediate steps, and there are complicated algorithms (addition chains) to find the shortest way; that is another discussion. And I don't think it can be explained more simply than this, unless we start making PowerPoint presentations with animation...

2019-04-27, 20:08  #6
travisjank
Feb 2019

You have me intrigued, I look forward to this PowerPoint and animations!
