#12
Nov 2003
111010010010₂ Posts

#13
Jan 2015
11·23 Posts

#14
Nov 2003
2²×5×373 Posts
Quote:
I imagine that you are just another wannabe who is unwilling to put in the learning effort.
#15
Nov 2003
2²×5×373 Posts
Quote:
I am very helpful. TO PEOPLE WILLING TO PUT IN THE EFFORT TO LEARN. This clearly does not include you.
#16
If I May
"Chris Halsall"
Sep 2002
Barbados
2×5×7×139 Posts
Quote:
The secret is not to get too upset when someone is really silly. Or insensitive. Or stupid. I also have a problem with this. I'm told to "take deep yoga breaths". "How do I do that while I'm screaming!?!?!?" "You will learn, eventually. Or, you won't."
#17
"Forget I exist"
Jul 2009
Dumbassville
10000011000000₂ Posts
Even without knowing how to code the algorithms used, I can see a use in finding more small factors for the Mersenne project. Since Mersenne numbers with coprime exponents can't share a factor, the more primes/factor candidates cleared at lower levels, the fewer low-level factor candidates may need to be checked for higher exponents (assuming you keep track of all of them as being eliminated or not as possibilities for future exponents). To be honest, I mostly play around with ideas (not necessarily good ones) instead of running code (except maybe PARI/gp). edit: of course, that's in theory; in practice, eliminating candidates by division may be too much effort (i.e. more than, say, just testing all that are left for divisibility, or even maybe an LL test or two).
Last fiddled with by science_man_88 on 2015-09-30 at 00:53
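The coprimality fact above can be checked directly: gcd(2^a - 1, 2^b - 1) = 2^gcd(a,b) - 1, so two Mersenne numbers with coprime exponents share no factor greater than 1. A quick illustrative sketch (not part of any GIMPS code):

```python
from math import gcd

def mersenne(n: int) -> int:
    """2^n - 1."""
    return (1 << n) - 1

# Identity: gcd(2^a - 1, 2^b - 1) = 2^gcd(a, b) - 1
for a, b in [(11, 13), (6, 9), (15, 25)]:
    assert gcd(mersenne(a), mersenne(b)) == mersenne(gcd(a, b))

# Coprime exponents -> the Mersenne numbers themselves are coprime.
print(gcd(mersenne(11), mersenne(13)))  # 1
```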
#18
Romulan Interpreter
Jun 2011
Thailand
7×1373 Posts
Frank answer? It doesn't.

Some people like to find big factors for small Mersenne numbers, and/or their computers are not fast or modern or free enough to contribute to the project (they may have good computers, but be doing more useful real-life work with them), so they get ECM assignments. This does not help the project at all; remember, the goal is to find Mersenne primes, and factoring Mersenne numbers already known to be composite does not help with that.

Now, trying to help you a bit in understanding what ECM does (honestly, RDS is right here, and it wasn't him this time who started the name-calling; on the other side, I understand you too: the article about ECM on mersennewiki is a total mess, but the wikipedia article is well written). To understand what ECM does, you may first have to understand what P-1 does. You can read a good description of how that ties into the project on the project's own site, here.

It all starts with Fermat's little theorem, which says (equivalently) that if q is a prime, then for any number b less than q, called the base, we have b^(q-1) ≡ 1 (mod q). So if you somehow succeed in finding a multiple of q-1, say E = n·(q-1), you will have b^(n·(q-1)) - 1 ≡ 0 (mod q), i.e. you get a multiple of q when you raise b to the power E and subtract 1. Then, assuming q divides your number m (m = 2^p-1 in our case, for some prime p), taking a gcd of b^E - 1 and m will reveal the factor q.

The rest is only "tricks" for computing the E, to "make sure" that E contains "multiples" of as many (q-1)'s as possible. You could take any combination of primes and multiply them together to make your E, but the most efficient choice, probabilistically, is to start with small primes, because many numbers factor into small primes (i.e. they are "smooth"). That is where Mr. Pollard came in, with his way of selecting B1 and B2 and computing that E as an LCM (blah blah blah). B1 and B2 are called bounds. We have a base, some bounds, and a bound-smooth q-1. Generally, you can see P-1 and ECM as two different ways to compute that E.
Well, I know some guys here will be all over my head now, but for what you want, this is enough. ECM uses a more complex method, multiplying not prime numbers but points on some curve, according to some "multiplication rule". If that does not reveal a factor at the end (when you take the GCD), then you change the curve and do it again. Doing P-1 is somehow equivalent to "doing ECM on a single curve", in the sense that if P-1 would find a factor, then running a single elliptic curve with the same B1 and B2 will also find that factor. It may find additional factors too. Generally, P-1 will find a factor q if q-1 is "power smooth" (i.e. all its prime factors, raised to their powers, are under B1). For the same B1/B2, ECM may find a factor if q-x is power smooth, for some small x, including x=1. Each curve will "hit" some different values of x, including x=1. So, one-curve ECM (call it "1-ECM") will find all factors that P-1 finds for the same B1/B2, and possibly some more (i.e. it can find factors which P-1 cannot find for the same bounds). Why then don't we use 1-ECM instead of P-1 for Mersenne numbers? That is partially because P-1 is not only extremely easy to implement, but also very efficient here, due to the form of the factors. Mersenne factors always have the form q = 2kp+1, so q-1 = 2kp is smooth if k is smooth. If you pick a base which is not 2 and raise it to the power 2p (you can't use 2, because 2^(2p) ≡ 1 (mod m), so you would only find the trivial factorization), call the result c, then your E need only be a multiple of k for gcd(c^E - 1, m) to reveal a factor. So you have a "head start" of 2p in the algorithm already: you can "reduce" B1 and B2 (i.e. ECM may need higher B1 and B2 limits to find a factor that is found by P-1 modified for Mersenne numbers), thereby increasing the speed, or you can keep them and get the same speed but a higher chance to find factors.
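The special form q = 2kp+1 of Mersenne factors is easy to verify numerically; the factorizations below are the well-known ones for these small exponents:

```python
# Every prime factor q of 2^p - 1 (p prime) has the form q = 2*k*p + 1.
known_factors = {
    11: [23, 89],           # 2^11 - 1 = 23 * 89
    23: [47, 178481],       # 2^23 - 1 = 47 * 178481
    29: [233, 1103, 2089],  # 2^29 - 1 = 233 * 1103 * 2089
}
for p, factors in known_factors.items():
    for q in factors:
        assert ((1 << p) - 1) % q == 0   # q really divides 2^p - 1
        k, r = divmod(q - 1, 2 * p)
        assert r == 0                    # q - 1 is a multiple of 2p
        print(f"p={p}: q={q} = 2*{k}*{p} + 1")
```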
Therefore, ECM is suitable only when applied to small exponents p, where it has comparable speed but a higher chance to find a factor (due to that "q-x" trick, where x can be different from 1). But for small p, we already know m = 2^p-1 is composite. So you don't help the project (did I say that before?) too much. But people still like to find big factors, and for that, ECM is the best there is, where P-1 has failed. Also, as someone already pointed out, TF is hard for small exponents (TF gets easier as the exponent increases, while P-1 gets harder and ECM gets harder). So, for small exponents, ECM is the best, because it is speed-comparable to P-1, has a higher chance of finding factors than P-1, and is easier than TF time-wise (TF is hard in the sense that it needs a lot of time). But low ECM does not help the project. It only satisfies vanities. It may help other people/projects/tables of factors (like the Cunningham project, etc.) if you find a big factor for some small Mersenne number without known factors. But that's it.
Last fiddled with by LaurV on 2015-09-30 at 05:49
#19
Aug 2002
Buenos Aires, Argentina
2×683 Posts
Quote:
So why does someone use P-1? The answer is that running one curve of ECM is about 10 times slower than running P-1 with the same bounds, if you count the number of modular multiplications. In http://www.mersennewiki.org/index.ph...c_curve_method you can find the formulas needed. Another answer is that since the factors of Mersenne numbers 2^p-1 have the form 2kp+1, you get that q-1 is always a multiple of p, so the P-1 algorithm has a greater chance of finding the prime q than if you ran the algorithm on a random number with the same bounds B1/B2. This benefit does not apply to ECM, because the group order is not q-1. As for the memory-hungry part, that is an optimization of step 2, when computing primes up to the bound B2 (you can see that in the Mersennewiki as well). It is possible to use less memory, but in that case step 2 will be a lot slower.
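As a rough feel for the stage-1 work being counted here: a common choice for the P-1 stage-1 exponent is E = lcm(1..B1), which by the prime number theorem has about B1/ln 2 ≈ 1.44·B1 bits, so binary exponentiation costs on the order of 1.44·B1 modular squarings. A quick sanity check on the exponent size (a sketch; real programs build E differently and more cleverly):

```python
from math import gcd

def lcm_upto(B1: int) -> int:
    """lcm(1, 2, ..., B1): a simple stand-in for the P-1 stage-1 exponent."""
    L = 1
    for n in range(2, B1 + 1):
        L = L * n // gcd(L, n)
    return L

for B1 in (100, 1000, 10000):
    bits = lcm_upto(B1).bit_length()
    # Prime number theorem: log2(lcm(1..B1)) ~ B1 / ln 2 ~ 1.44 * B1
    print(B1, bits, round(bits / B1, 2))
```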
#20
"Forget I exist"
Jul 2009
Dumbassville
2⁶·131 Posts

#21
Aug 2002
Buenos Aires, Argentina
2·683 Posts

#22
"Forget I exist"
Jul 2009
Dumbassville
2⁶×131 Posts
I guess I was thinking about P-1, but couldn't it be limited to candidates x = 2*l*p + 1, so that x - 1 always ends up being a multiple of p for Mersenne factor candidates?
Last fiddled with by science_man_88 on 2015-09-30 at 15:44
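Restricting candidates like this is exactly what trial factoring for Mersenne numbers does: only q = 2*k*p + 1 are tried, and q must also be ±1 mod 8 (because 2 is a quadratic residue modulo any factor of 2^p - 1). A toy sketch of the idea; real TF code sieves candidates far more aggressively:

```python
def tf_mersenne(p: int, kmax: int) -> list:
    """Trial-factor 2^p - 1 using only candidates q = 2*k*p + 1."""
    found = []
    for k in range(1, kmax + 1):
        q = 2 * k * p + 1
        if q % 8 not in (1, 7):   # factors of 2^p - 1 are +/-1 mod 8
            continue
        if pow(2, p, q) == 1:     # q divides 2^p - 1
            found.append(q)
    return found

print(tf_mersenne(11, 10))   # [23, 89]: k = 1 and k = 4
```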