20050603, 03:58  #1 
May 2004
2^{2}×79 Posts 
Pollard's Algorithm & That of Devaraja comparison
Many of you are aware of Pollard's algorithm. To illustrate the algorithm
given in the paper "A Generalisation of Fermat's Theorem" on www.crorepatibaniye.com/failurefunctions I will take up the same example, i.e. 16843009 (given for Pollard's elsewhere on the net). This can be expressed as 2^24 + 65793.

n   2^n + 65793   factors
1   65795         5*13159
2   65797         19*3463
3   65801         29*2269
4   65809         65809
5   65825         5*5*2633
6   65857         11*5987
7   65921         65921
8   66049         257*257
_________

We stop here since we have already arrived at the first factor of the given number, since 2 is a primitive root of 257 and 8 belongs to 2 mod (257). Prima facie this looks much simpler than Pollard's algorithm. Needless to say we cannot draw conclusions on the basis of one example; I will continue the discussion tomorrow. A.K. Devaraj 
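The table above is easy to reproduce mechanically. A minimal Python sketch (my reconstruction for illustration, not the author's procedure; the helper `smallest_prime_factor` is mine):

```python
# Factor the small values 2^n + 65793 by trial division and watch for a
# prime whose order information transfers to N = 2^24 + 65793 = 16843009.

def smallest_prime_factor(m):
    """Smallest prime factor of m by trial division."""
    d = 2
    while d * d <= m:
        if m % d == 0:
            return d
        d += 1
    return m

c, x0 = 65793, 24
N = 2 ** x0 + c                      # 16843009, the number to factor

for n in range(1, 9):
    v = 2 ** n + c
    print(n, v, smallest_prime_factor(v))

# At n = 8 the value is 66049 = 257 * 257; since ord_257(2) = 16 and
# 24 ≡ 8 (mod 16), the same prime 257 divides N itself:
assert N % 257 == 0
```

Running this prints the same eight rows as the table and confirms that 257 is indeed a factor of 16843009.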
20050603, 11:28  #2  
Nov 2003
2^{2}×5×373 Posts 
Quote:
algorithm, please state it. And your "steps" are certainly far from clear. What relation does the last number (66049) have with the input? Please show the steps for (say) 11111111111111111111111111111111123 

20050603, 12:56  #3  
Bamboozled!
May 2003
Down not across
5×19×107 Posts 
Quote:
rather than in the post itself. I'm taking a quick look at it now. Paul 

20050604, 05:28  #4  
May 2004
474_{8} Posts 
Quote:
The last line of the second-last para should read: "16 belongs to 2 (mod 257)". In the language of "failure functions" this would be: n = 8 + 16k is a failure function, i.e. if we were to substitute any n satisfying this function (where k belongs to N) in 2^n + 65793, we would get a number exactly divisible by 257. And 2^24 + 65793, the given number, is exactly divisible by 257.

Pollard's algorithm uses a) gcd, b) random values of a, b, c... and c) arbitrarily chosen polynomials. My algorithm uses exponential expressions, which are much faster than polynomials. There is nothing arbitrary about the expression, for once the number is given we can express it in an exponential manner, as explained in my paper. As x takes values from 1 to x0, we discover the smallest factor of the given number generally much before x reaches the value x0.

I must admit that my algorithm, although very useful in factorising very large numbers, is a failure in the case of numbers like F6, F7 and F8, where very large prime factors are involved. I therefore suggest a benchmark test in which 10 randomly chosen very large numbers are factorised both by rho(n) and by a suitable program for my algorithm (like the one suggested by Maxal). A.K. Devaraj 
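The failure-function claim above can be checked directly. A short sketch (my illustration; it relies on 2^8 ≡ -1 (mod 257) and 65793 ≡ 1 (mod 257)):

```python
# Verifying the failure function n = 8 + 16k: every such n should make
# 2^n + 65793 divisible by 257, because ord_257(2) = 16, 2^8 ≡ -1
# (mod 257), and 65793 = 256*257 + 1 ≡ 1 (mod 257).
for k in range(100):
    n = 8 + 16 * k
    assert (pow(2, n, 257) + 65793) % 257 == 0

# The given number has exponent 24 = 8 + 16*1, so 257 divides it:
assert (2 ** 24 + 65793) % 257 == 0
print("n = 8 + 16k yields a multiple of 257 for k = 0..99")
```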

20050604, 13:42  #5 
Aug 2002
Buenos Aires, Argentina
2^{3}·3·5·11 Posts 
Sorry but I do not see how you can avoid computing gcds as in the Pollard rho algorithm.
It appears that in this example you were lucky, but in general you need about sqrt(p) iterations, where p is the prime you find with this algorithm. Without reducing mod N and computing gcds, the powers of 2 grow without limit, making the algorithm unusable. For example, when the prime p to be found is about one trillion, or 10^12 (easy to compute with Pollard rho), your algorithm would be using numbers of size about one million digits if you do not reduce mod N. Please use another example. Best regards, Dario Alpern 
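Dario's objection can be seen in a few lines. The sketch below (my illustration, using Python's three-argument `pow`) shows that reducing mod N keeps every iterate bounded, whereas the raw powers grow without limit:

```python
# Without reduction mod N, the iterate 2^n + c has roughly 0.301*n
# digits; with pow(2, n, N) it never exceeds N, however large n gets.
N, c = 16843009, 65793
n = 10_000

raw = 2 ** n + c                     # ~3011 digits
reduced = (pow(2, n, N) + c) % N     # always < N

assert reduced == raw % N            # same residue, tiny number
print(len(str(raw)), "digits raw;", reduced, "reduced mod N")
```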
20050605, 06:00  #6  
May 2004
2^{2}·79 Posts 
Quote:
in Dec '04: 2^9780675417 + 37. This number has approximately 2.94 billion digits. Yet using only a 10-digit scientific calculator and this algorithm I could find that 3 & 11 are factors.

Incidentally, I give below a list of some of the non-factors as they occur in the algorithm's working till x = 24: 13, 41, 5, 53, 23, 101, 293, 61, 1061, ..., 1048613...

You may ask "what is the use of the list of non-factors when we know that all prime numbers other than factors are non-factors?" The reasons: a) non-factors include "impossible factors" also. For example 7, 17 & 19 are impossible factors, i.e. they cannot be factors of the given number no matter how large x is. b) The algorithm reveals the above as non-factors, not in any probability sense, but with absolute certainty.

Ignoring the aspect of non-factors, the question is: can Pollard's algorithm handle such huge numbers? I must, however, reiterate the limitation of my algorithm: it is useless in the case of numbers having large prime factors. I may be wrong, but I still feel a benchmark test involving randomly chosen large numbers is desirable. Thanks & regards, A.K. Devaraj 
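The calculator work described above amounts to modular exponentiation. A sketch (my reconstruction, not the author's steps) that tests small primes against 2^9780675417 + 37 without ever writing the 2.94-billion-digit number out:

```python
# Testing small primes p against 2^e + 37 with e = 9780675417: only the
# residue pow(2, e, p) is needed, never the full number.
e = 9780675417
for p in [3, 5, 7, 11, 13, 17]:
    r = (pow(2, e, p) + 37) % p
    print(p, "divides" if r == 0 else "does not divide")

# 3 and 11 divide, as the post says.  For p = 7 the residues of 2^x are
# only {1, 2, 4} while -37 ≡ 5 (mod 7), so 7 can never divide 2^x + 37
# for any x -- one of the "impossible factors" mentioned above.
```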

20050605, 10:42  #7  
Aug 2002
Termonfeckin, IE
AC1_{16} Posts 
Quote:


20050605, 13:57  #8  
Bamboozled!
May 2003
Down not across
5×19×107 Posts 
Quote:
Trial division and Pollard rho are not very good for finding large factors either. Come to that, neither is ECM or P-1 / P+1, if your idea of "large" is appropriately phrased. Paul 

20050605, 16:52  #9 
"Nancy"
Aug 2002
Alexandria
2,467 Posts 
I read the PDF. As far as I understood, your idea is sieving values

a^x + c == 0 (mod p)

by letting x vary. From the definition of the order of an element of a finite group, we immediately have, with o = ord_p(a) and x = (x % o) + k*o,

a^(x % o) * a^(k*o) + c == 0 (mod p)
a^(x % o) + c == 0 (mod p)

Clearly the sequence of values as x varies is periodic with period o. By Lagrange's theorem, o | p-1, and the period p-1 is what you discovered. This is, however, not necessarily the smallest period for given a, p, so your proposed method is not fully generalised yet.

Unfortunately, this method is neither new nor very useful for factoring. The problem of finding an x where the congruence holds for arbitrary a, c and large p is believed to be very hard; it is the so-called discrete logarithm problem, one of the fundamental problems used for asymmetric cryptosystems because no efficient algorithm is known to solve it (except in special cases, afaik). For smaller p, the discrete logarithm can be solved, but the idea isn't very useful for factoring... usually you start with a composite number and want to know which factors divide it, not the other way around.

Most of these ideas for potential factoring algorithms are very conveniently explained by finite groups and the order of elements in those. I urge you to study them. If you are only interested in factoring and primality testing, Abelian groups will do; don't bother with permutation groups, normal subgroups etc., which greatly reduces the scope of the subject. Alex 
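Alex's point that the true period can be much smaller than p-1 shows up in the thread's own example. A small sketch (my code, not Alex's; `multiplicative_order` is a naive helper I wrote for illustration):

```python
# The sequence a^x + c (mod p) is periodic with period o = ord_p(a).
# Lagrange guarantees o | p - 1, but o can be far smaller: for p = 257
# the order of 2 is 16, not 256.

def multiplicative_order(a, p):
    """Order of a modulo prime p, assuming p does not divide a."""
    o, v = 1, a % p
    while v != 1:
        v = v * a % p
        o += 1
    return o

o = multiplicative_order(2, 257)
print("ord_257(2) =", o)                 # prints 16
assert (257 - 1) % o == 0                # o divides p - 1

# Periodicity of 2^x + c (mod 257) with exactly this period:
assert all(pow(2, x, 257) == pow(2, x + o, 257) for x in range(1, 100))
```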
20050606, 10:13  #10  
May 2004
2^{2}·79 Posts 
Quote:
would be a large prime number. In the example of the 2.94-billion-digit number given by me, although the exponent is very small as compared to the number, it is not a trivial number. A.K. Devaraj 

20050606, 10:28  #11  
May 2004
2^{2}×79 Posts 
Quote:
much less than 256. However, this is also a matter of luck. We may get a case in which the period of the smallest prime factor is (p-1). I must thank you for your tips on the application of group theory; I will take advantage of them.

The object of the present thread is not the technique for factorising very large numbers but a comparison of the two algorithms in the case of large, but not very large, numbers. A benchmark test on a sufficiently large random sample of numbers should be taken; the two algorithms should be applied (rho(n) for Pollard's and a suitable programme for mine) and the time taken noted. Regards, A.K. Devaraj 
