2007-06-11, 09:19  #1 
Jun 2007
Rotterdam, Holland
5 Posts 
ecm_factor returning the same number as input
Hi, I am trying to use the GMP-ECM factoring library to create a sort of general-purpose factoring algorithm. At the moment I just call the ecm_factor method on the number to be factored, divide the number by the factor found by ecm_factor, and repeat the process until the remainder is a prime or 1.
For the B1 value I use some standard values found in the GMP-ECM documentation, and for the ecm_params I use the default values (I only call ecm_init(p) for the ecm_params). Sometimes, though, I encounter a problem with certain integers I want to factor. Instead of returning a proper factor of the number, ecm_factor returns the number itself, or a composite factor. If I try to factor that number again, the ecm_factor method returns 1 (factor found in stage 1), and the factor found is again the same number as the input. This occurs for example with the numbers 27 and 14877. At the moment this naturally causes an infinite loop in my program, since it keeps trying to factor that number and keeps getting the same result. I have tried several B1 values, from very small to very large, but each B1 value returns the same outcome. What can be causing this? Should I adjust any of the ecm_params values? (I am not a math genius by the way...) Thanks in advance! 
2007-06-11, 11:12  #2 
"Nancy"
Aug 2002
Alexandria
2^{5}·7·11 Posts 
ECM finds any prime factor (or prime power) for which the order of the group defined by the curve is smooth. It is perfectly possible to find several prime factors at once this way, or even all prime factors of the input number, in which case the input number itself is reported as the "factor." This happens mostly when several small prime factors are present; with larger prime factors p_{i} it is quite improbable that a given curve's parameters lead to smooth group orders modulo several of the p_{i} simultaneously.
The easiest way of avoiding the problem is to get rid of the small primes with trial division first. Once you know that there are no prime factors < 10^6 or so left, you can start ECM. Alex 
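As a sketch of this advice (not code from the thread): strip all small prime factors by trial division before handing the cofactor to ECM. A real driver would operate on an mpz_t with mpz_divexact_ui; plain unsigned long is used here only to show the idea, and the function name is made up for illustration.

```c
/* Remove every prime factor <= bound from n by trial division,
 * recording each factor found.  Returns the remaining cofactor,
 * which is 1, a prime, or has no prime factor <= bound. */
unsigned long strip_small_factors(unsigned long n, unsigned long bound,
                                  unsigned long *factors, int *nfac)
{
    *nfac = 0;
    for (unsigned long p = 2; p <= bound && p * p <= n;
         p += (p == 2) ? 1 : 2) {          /* 2, then odd candidates */
        while (n % p == 0) {
            factors[(*nfac)++] = p;
            n /= p;
        }
    }
    return n;
}
```

With this in front, the troublesome inputs from the original post never reach ECM at all: 27 is reduced to 1, and 14877 = 3^3 * 19 * 29 is fully split by trial division.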
2007-06-11, 11:36  #3 
Jun 2007
Rotterdam, Holland
5 Posts 
That sounds logical...
I think I'll just do that then... Thanks! 
2007-06-11, 15:19  #4 
"Nancy"
Aug 2002
Alexandria
2^{5}×7×11 Posts 
Note: the bound 10^6 is very much on the small side. At that point you can expect that ECM with small B1/B2 parameters will more often spit out prime factors than composite ones, but it is hardly the optimal threshold for switching to ECM. If your trial division code is reasonably efficient, higher bounds will make sense; just experiment a little and see how long it takes the trial division code to find an n-digit prime, and how long the ECM code typically takes to find the same.
Alex 
2007-06-11, 18:11  #5 
Aug 2002
Buenos Aires, Argentina
31·43 Posts 
The bound B1 = 1000000 is very high for small numbers, so you will not be able to "crack" them. I would start with B1 = 2000, then continue with B1 = 11000, B1 = 50000 and so on. See the table of optimal parameters to know how many curves you have to run for each value of B1.
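One simple way to follow this advice is to keep the B1 ladder in a table and escalate whenever a curve fails. The values below extend the 2000 / 11000 / 50000 sequence with the commonly quoted steps from the GMP-ECM optimal-parameters table; the curve count per step should be looked up in that table rather than taken from this sketch, and next_b1 is an illustrative helper, not part of the library API.

```c
#include <stddef.h>

/* Escalating B1 schedule: start small and move up only after the
 * recommended number of curves at the current level has failed. */
static const double B1_LADDER[] = {
    2000.0, 11000.0, 50000.0, 250000.0, 1e6, 3e6, 11e6, 43e6
};

/* Return the next-larger B1 after `current`, or 0.0 when exhausted. */
double next_b1(double current)
{
    for (size_t i = 0; i < sizeof B1_LADDER / sizeof B1_LADDER[0]; i++)
        if (B1_LADDER[i] > current)
            return B1_LADDER[i];
    return 0.0;
}
```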

2007-06-12, 03:07  #6 
Tribal Bullet
Oct 2004
3×1,163 Posts 
It's a very interesting problem to build a general find-all-factors program that uses a combination of algorithms in a way that minimizes the expected runtime. A complete implementation should do trial division, then Pollard rho, then p+1 / p-1 with small bounds, then put the remaining cofactor and any prime factors found into a structure that remembers each factor and whether it is prime or not. Then you iterate through the composite factors, test whether each is a prime power, and try ECM / p-1 / p+1 with increasing bounds, then QS. When new factors are found, previously found prime factors are divided out and anything left gets added to the structure, and the process repeats. Composite cofactors that are small should be handled with dedicated SQUFOF or QS routines, as these can dependably split such cofactors.
For such a simple-sounding job, it's a very large amount of code to do right. 
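To illustrate one stage of such a pipeline, here is a minimal Pollard rho (Floyd-cycle variant) for splitting small composites after trial division. This is a sketch, not jasonp's code: it uses the GCC/Clang __uint128_t extension for the modular multiply and handles only 64-bit inputs; a real driver would use GMP and fall back to ECM / QS for larger numbers.

```c
#include <stdint.h>

static uint64_t mulmod(uint64_t a, uint64_t b, uint64_t m)
{
    return (uint64_t)((__uint128_t)a * b % m);   /* needs 128-bit type */
}

static uint64_t gcd_u64(uint64_t a, uint64_t b)
{
    while (b) { uint64_t t = a % b; a = b; b = t; }
    return a;
}

/* Return a nontrivial factor of odd composite n > 3, or n itself if
 * this choice of constant c fails (caller should retry with another c). */
uint64_t pollard_rho(uint64_t n, uint64_t c)
{
    uint64_t x = 2, y = 2, d = 1;
    while (d == 1) {
        x = (mulmod(x, x, n) + c) % n;           /* tortoise: one step  */
        y = (mulmod(y, y, n) + c) % n;           /* hare: two steps     */
        y = (mulmod(y, y, n) + c) % n;
        d = gcd_u64(x > y ? x - y : y - x, n);
    }
    return d;
}
```

On the thread's problem number 14877 = 3^3 * 19 * 29, this immediately pulls out the factor 3, which is exactly the kind of small-factor work that should happen before ECM is invoked.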
2007-06-12, 07:36  #7  
Jun 2007
Rotterdam, Holland
5 Posts 
@alpertron: Alex was talking about the bound for trial division, not the B1 bound for ECM. I am indeed using that optimal-parameters list for my B1 values, and kind of extrapolated the list for smaller factors, which meant my lowest B1 has a value of 400 (may be kind of small, but it works (most of the time :P)).
@jasonp: I think that would indeed be a good general factorization method. But because of my limited knowledge of the factorization algorithms, I'll just settle for a simpler, but less efficient, way. Speed is not a very big issue here (although only using trial division would be too slow... :P) 

2007-06-12, 09:52  #8 
"Nancy"
Aug 2002
Alexandria
9A0_{16} Posts 
It depends on what numbers you expect to be factoring. "Random" numbers usually have a few very small prime factors, so running ECM first and trial division afterwards will almost always be a waste, as you'll have to trial divide after all. Trial division is still the best way of recovering very small factors and should always be done first.
Alex 
2007-06-12, 10:20  #9  
Jun 2007
Rotterdam, Holland
5_{16} Posts 


2007-06-12, 13:15  #10 
"William"
May 2003
New Haven
100100100100_{2} Posts 
You forgot about detecting and removing algebraic factors.

2007-06-12, 14:54  #11 
"Nancy"
Aug 2002
Alexandria
2^{5}·7·11 Posts 
