|
|
#34 |
|
Noodles
"Mr. Tuch"
Dec 2007
Chennai, India
3·419 Posts |
In the meantime, I tried to implement the various algorithms that are available for solving the discrete logarithm problem.
Given g, t, p, find the least positive value of l such that g^l = t (mod p).

First I tried the brute force algorithm. Then, with Pollard's Rho algorithm for discrete logarithms, I realized that it could be much faster than the trial and error method. Let x = t^a g^b (mod p), and iterate x, a, b using a pseudo-random function. If two iterates x are the same mod p, but their exponent pairs are distinct mod phi(p), then

t^(a_1 - a_2) = g^(b_2 - b_1)
g^(l(a_1 - a_2)) = g^(b_2 - b_1)
l = (b_2 - b_1)/(a_1 - a_2) (mod phi(p))

Function: starting with x = 1, a = 0, b = 0, iterate according to:
x = tx mod p, a++ (mod phi(p)), if x < p/3
x = x^2 mod p, a = 2a mod phi(p), b = 2b mod phi(p), if p/3 < x < 2p/3
x = gx mod p, b++ (mod phi(p)), if x > 2p/3

Again the tortoise-and-hare algorithm (Floyd's) is used for cycle finding, as it was for factoring with Pollard's Rho: iterate x_1 once and x_2 twice per step, comparing the ith iterate of the first sequence with the 2ith iterate of the second sequence.

Next is Pollard's Lambda, well suited for a large collection of machines, so it is not implemented here at all: each machine chooses a random value of r at the beginning and iterates according to some pseudo-random function. Then each machine checks for a collision...

What I implemented next is the Baby Steps, Giant Steps algorithm: write l as l_0 + l_1*b, where b = ceiling(sqrt(N)). Since g^l = t mod p, we have g^(l_0 + l_1*b) = t mod p, so g^(l_1*b) = t h^(l_0) mod p, where h = g^(-1) mod p. So finally we have one array containing g^(ib) for i = 0 to b-1 and another array containing t h^j for j = 0 to b-1. We apply a quick sorting algorithm, search for a collision, and then calculate l = ib + j.
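The Baby Steps, Giant Steps procedure described above can be sketched as follows (a minimal toy version; the names are mine, and a dictionary stands in for the sort-and-scan collision search):

```python
from math import isqrt

def bsgs(g, t, p):
    """Find the least l >= 0 with g^l == t (mod p), p prime, or None.

    Write l = i*b + j with b = ceil(sqrt(p-1)).  From g^l = t we get
    g^(i*b) = t * h^j with h = g^(-1) mod p, so store the baby steps
    t*h^j in a table and look each giant step g^(i*b) up in it.
    """
    n = p - 1                      # bound on the group order
    b = isqrt(n) + 1               # b = ceiling(sqrt(n))
    h = pow(g, -1, p)              # modular inverse (Python 3.8+)
    baby = {}                      # baby steps: t * h^j, j = 0 .. b-1
    x = t % p
    for j in range(b):
        baby.setdefault(x, j)      # keep the smallest j per value
        x = x * h % p
    gb = pow(g, b, p)              # giant steps: g^(i*b), i = 0 .. b-1
    y = 1
    for i in range(b):
        if y in baby:
            return i * b + baby[y]
        y = y * gb % p
    return None

print(bsgs(2, 9, 11))  # 2^6 = 64 = 9 (mod 11), so prints 6
```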
Finally, I have implemented the Pohlig-Hellman algorithm, with a hybrid of the brute force discrete logarithm method and Pollard's Rho algorithm working in the subgroups.

The index calculus method is more like the quadratic sieve process: collecting relations, namely powers of g that are smooth over the factor base, then merging these relations using linear algebra to get a vector sum equal to a known smooth power of t, and finally calculating the value of l from it. But it is not feasible for me to implement this algorithm for the time being.

Last fiddled with by Raman on 2009-03-21 at 20:40 Reason: The Discrete Logarithms Problem.
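A toy sketch of the Pohlig-Hellman idea for a prime modulus: project g and t into the subgroup of order q^e for each prime power dividing p-1, solve there (brute force here, for brevity; the poster's hybrid uses Pollard's Rho in the subgroups), and recombine with the Chinese Remainder Theorem. All names and the worked numbers are mine.

```python
def dlog_bruteforce(g, t, p, n):
    """Brute-force discrete log in the subgroup of order n mod p."""
    x = 1
    for l in range(n):
        if x == t:
            return l
        x = x * g % p
    return None

def crt(residues, moduli):
    """Chinese Remainder Theorem for pairwise coprime moduli."""
    M = 1
    for m in moduli:
        M *= m
    x = 0
    for r, m in zip(residues, moduli):
        Mi = M // m
        x += r * Mi * pow(Mi, -1, m)
    return x % M

def pohlig_hellman(g, t, p, factors):
    """factors: dict {q: e} with p-1 = prod q^e; returns l with g^l = t mod p.
    Simplified: solves directly in each subgroup of order q^e rather
    than lifting digit by digit."""
    n = p - 1
    residues, moduli = [], []
    for q, e in factors.items():
        m = q ** e
        gq = pow(g, n // m, p)     # generator of the order-q^e subgroup
        tq = pow(t, n // m, p)     # projection of t into that subgroup
        residues.append(dlog_bruteforce(gq, tq, p, m))
        moduli.append(m)
    return crt(residues, moduli)

# 2 is a primitive root mod 101, and 100 = 2^2 * 5^2
l = pohlig_hellman(2, 37, 101, {2: 2, 5: 2})
print(l, pow(2, l, 101))  # prints: 56 37
```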
|
|
|
|
|
#35 |
|
Aug 2002
Buenos Aires, Argentina
2·683 Posts |
How does it compare against http://www.alpertron.com.ar/DILOG.HTM ?
|
|
|
|
|
|
#36 | |
|
Noodles
"Mr. Tuch"
Dec 2007
Chennai, India
3·419 Posts |
Quote:
I came to know about the Pohlig-Hellman algorithm only by seeing your applet. Before that, I was about to implement only Pollard's Rho for Discrete Logarithms and Baby Steps, Giant Steps. Regarding performance, yours should be much better, since I assume that you have been working on optimizations for years. In your applet, you have written that it works in a reasonable amount of time if the largest prime factor of phi(p) is less than 10^17. For my implementation, even with Pollard's Rho in its place for finding the discrete logarithm in the subgroup, this bound would be about 10^14.

I have tested that my Pohlig-Hellman program works well only if the modulus is prime. Otherwise, Pollard's Rho and Baby Steps, Giant Steps are better suited, but their performance for a larger modulus is slower, even if phi(p) is smooth, where p is composite. Your applet does ECM and SIQS for larger values of p and phi(p), which can handle these things within a shorter span of time; my implementation only does ordinary brute force trial division and then Pollard's Rho to factor p and phi(p).

Last fiddled with by Raman on 2009-03-22 at 05:24
|
|
|
|
|
|
#37 |
|
Aug 2002
Buenos Aires, Argentina
101010101102 Posts |
Factorization takes just a few percent of the time needed to find the discrete logarithm. I used the code of my factoring applet because it was already written when I developed the discrete logarithm applet. Notice that the Rho code runs faster if you use Brent's variation.
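For reference, a minimal sketch of Pollard's Rho for factoring with Brent's cycle detection (my own toy version, not the applet's code): the saved iterate is only compared against a power-of-two position, which costs one function evaluation per step instead of Floyd's three, and the gcds are batched.

```python
from math import gcd

def rho_brent(n, c=1):
    """Pollard's Rho with Brent's cycle detection; returns a nontrivial
    factor of composite odd n, or None if this choice of c fails."""
    f = lambda x: (x * x + c) % n
    y, r, q, g = 2, 1, 1, 1
    while g == 1:
        x = y                           # save the power-of-two iterate
        for _ in range(r):
            y = f(y)
        k = 0
        while k < r and g == 1:
            ys = y                      # remember where this batch started
            for _ in range(min(128, r - k)):
                y = f(y)
                q = q * abs(x - y) % n  # accumulate differences, gcd later
            g = gcd(q, n)
            k += 128
        r *= 2
    if g == n:                          # batch overshot: redo stepwise
        g = 1
        y = ys
        while g == 1:
            y = f(y)
            g = gcd(abs(x - y), n)
    return g if g != n else None

print(rho_brent(10403))  # 10403 = 101 * 103
```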
|
|
|
|
|
|
#38 |
|
(loop (#_fork))
Feb 2006
Cambridge, England
23·11·73 Posts |
I've never managed to implement the index-calculus dlog usefully; I can't see how it avoids requiring a truly enormous factor base or spending nearly all of its time in unproductive factorisations, because I don't see how you can use a sieve - there's no nice linear pattern to the Q with (3^Q mod p) == 0 mod 7. I'm clearly missing something obvious: where should I be looking?
|
|
|
|
|
|
#39 |
|
Aug 2002
Buenos Aires, Argentina
2×683 Posts |
Obviously I've never programmed the index-calculus method. But from what I read, the main problem is the matrix to solve, because its elements are numbers with the same bit length as the base of the discrete logarithm, and not just bits whose values are zero or one.

The sieve is not difficult at all if you have programmed the quadratic sieve before.
|
|
|
|
|
#40 | |
|
Noodles
"Mr. Tuch"
Dec 2007
Chennai, India
3×419 Posts |
Quote:
It takes time to determine x for which g^(x*phi(p)/q) = t^(phi(p)/q) (mod p). Once we have the various congruences l = x_i mod q_i^(a_i), where the q_i are the different prime factors of phi(p), we can solve them by using the Chinese Remainder Theorem.

Actually, the first time I implemented Brent's version, it was slower than Floyd's cycle-finding method, so I will have to try again... For the information of everyone else: Brent's variation compares the iterate at position 2^i with the iterates in the range 2^i + 1 to 2^(i+1) of the sequence.
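Brent's scheme in isolation, as a plain cycle-length finder (to contrast with Floyd's tortoise and hare): the saved iterate at position 2^i stays fixed while the hare runs through positions 2^i + 1 to 2^(i+1). A minimal sketch, with names of my own choosing:

```python
def brent_cycle_length(f, x0):
    """Return the cycle length lam of the sequence x0, f(x0), f(f(x0)), ...
    Brent's method: hold the iterate at position 2^i fixed and compare
    it against the next 2^i iterates; one evaluation of f per step."""
    power = 1            # current window size 2^i
    lam = 1              # steps taken since the saved position
    tortoise = x0
    hare = f(x0)
    while tortoise != hare:
        if power == lam:         # window exhausted: move to position 2^(i+1)
            tortoise = hare
            power *= 2
            lam = 0
        hare = f(hare)
        lam += 1
    return lam

# example: the map x -> x^2 + 1 mod 255, started from 3
print(brent_cycle_length(lambda x: (x * x + 1) % 255, 3))
```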
|
|
|
|
|
|
#41 | |
|
Aug 2002
Buenos Aires, Argentina
2×683 Posts |
Quote:
|
|
|
|
|
|
|
#42 |
|
(loop (#_fork))
Feb 2006
Cambridge, England
11001000110002 Posts |
I couldn't immediately figure out where the sieve was for index calculus, but page 15 of http://www.cs.toronto.edu/~cvs/dlog/research_paper.pdf was informative:
Take H = ceiling(sqrt p), with H^2 = p + j, and pick some c1. Then (H+c1)(H+c2) = H^2 + (c1+c2)H + c1*c2 == j + (c1+c2)H + c1*c2 (mod p), which you can now sieve over c2. Each sieve hit gives you log(H+c1) + log(H+c2) - {sum e_i log p_i} = 0, and if c1 and c2 aren't much bigger than sqrt(factor base size) then the matrix doesn't end up impossibly skinny.

(Say p = 10^60 and the factor base is the 9592 primes up to 10^5, so the probability that a number around sqrt(p) splits is about 1/50000. We want C^2/50000 > C + 9592, which means C ~ 60000; and that's quite a big input matrix, so you have to do the filtering quite carefully.)
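A tiny numeric check of the identity above (the modulus and the c values are mine, purely for illustration): with H = ceil(sqrt p) and j = H^2 - p, the product (H+c1)(H+c2) reduces mod p to j + (c1+c2)H + c1*c2, which is only about sqrt(p) in size, and that is the quantity tested for smoothness over the factor base.

```python
from math import isqrt

p = 10**13 + 37          # illustrative modulus (primality irrelevant here)
H = isqrt(p) + 1         # H = ceil(sqrt p); p is not a perfect square
j = H * H - p            # so H^2 = p + j with small positive j

for c1, c2 in [(1, 2), (5, 17), (100, 313)]:
    lhs = (H + c1) * (H + c2) % p
    rhs = (j + (c1 + c2) * H + c1 * c2) % p
    # rhs is only ~sqrt(p) in size, far below p, hence easy to test for smoothness
    print(c1, c2, lhs == rhs, rhs)
```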
|
|
|
|
|
#43 |
|
Noodles
"Mr. Tuch"
Dec 2007
Chennai, India
3·419 Posts |
I also implemented some primality tests, which are quite simple:
(Though these may give very poor performance, I give importance to the algorithm alone; improvements can be done later on.)

Fermat's Little Theorem: N is a PRP if a^(N-1) = 1 (mod N).

Rabin-Miller's SPRP test: write N-1 = d*2^s with d odd; N is an SPRP to base a if a^d = 1 (mod N), or a^(d*2^r) = -1 (mod N) for some 0 <= r < s.

Pepin's Test for Fermat numbers F_n = 2^(2^n) + 1: F_n is prime if and only if 3^((F_n - 1)/2) = -1 (mod F_n).

Proth's Theorem for Sierpinski-type numbers N = k*2^n + 1 with k < 2^n: N is prime if a^((N-1)/2) = -1 (mod N) for some a. More generally, any partial factorization of N-1 being known, N-1 = F*R, can be used.

Lucas-Lehmer Test for Mersenne numbers M_p = 2^p - 1, using the Lucas sequence s_1 = 4, s_i = s_(i-1)^2 - 2 (mod M_p) for i = 2 to p-1: M_p is prime if and only if s_(p-1) = 0 (mod M_p).

Lucas-Lehmer-Riesel test extension for Riesel numbers N = k*2^n - 1: the same iteration u_i = u_(i-1)^2 - 2 is used, with the starting value depending on k. If k = 1, the starting value is 4; if k = 3, it is 5778, the 18th Lucas number. When k is not a multiple of 3, the starting value can be taken as the kth term of the Lucas sequence V_k with P = 4, Q = 1. Otherwise, when k is a multiple of 3, the starting value has to be searched for: try v values starting from 3, and let D be the square-free part of v^2 - 4; a and r are obtained from the corresponding equations, and these should satisfy the required conditions on the Legendre symbols. Then the starting value is V_k of the Lucas sequence with P = v, Q = 1.

Last fiddled with by Raman on 2009-04-13 at 20:25
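Of the tests above, the Lucas-Lehmer test is the simplest to sketch in full. A minimal version, under the indexing used above (s_1 = 4, test s_(p-1)):

```python
def lucas_lehmer(p):
    """Lucas-Lehmer test for M_p = 2^p - 1, with p an odd prime:
    s_1 = 4, s_i = s_(i-1)^2 - 2 (mod M_p); M_p is prime iff s_(p-1) == 0."""
    m = (1 << p) - 1
    s = 4
    for _ in range(p - 2):       # advances from s_1 to s_(p-1)
        s = (s * s - 2) % m
    return s == 0

# M_3, M_5, M_7, M_13 are prime; M_11 = 2047 = 23 * 89 is not
print([p for p in (3, 5, 7, 11, 13) if lucas_lehmer(p)])  # [3, 5, 7, 13]
```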
|
|
|
|
|
#44 |
|
Noodles
"Mr. Tuch"
Dec 2007
Chennai, India
100111010012 Posts |
How should the ordinary quadratic sieve best be implemented?
We start by sieving Q(x) = (x + ceiling(sqrt(N)))^2 - N. We can also sieve over negative values of x, provided that -1 is taken as a factor base element; a prime p divides Q(x) exactly when N is a quadratic residue mod p and x is congruent to one of the two square roots of N, minus ceiling(sqrt(N)), mod p.

If this implementation is successful, then I will go ahead and implement MPQS and SIQS, with multiple polynomials (ax + b)^2 - N, where a divides the values out.

Now tell me which technique is better for implementing the sieve for the ordinary QS:
1) For every block of, say, 100000 values, sieve by accumulating log approximations and then scan the block for candidates.
2) For every prime, walk through the values of Q(x) that it divides and record the divisions directly.

The disadvantage of the second approach is that it does not capture the prime powers (any idea how the prime powers should be captured?), and thus not all the relations found by the first method are filtered; actually, not even half of them are caught.

By the way, how high must the optimal log threshold be in order to flag a given relation as a possibly smooth relation? Scott Contini's paper mentions an approximate value for it. Please give me an idea of how high the threshold log value needs to be so that a relation can be flagged as a smooth relation.

Last fiddled with by Raman on 2009-04-13 at 20:53
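A toy version of the first (block/log) approach, under my own simplifications: accumulate log p into an array over a block, then keep only positions whose accumulated weight is close to log Q(x), and confirm those by trial division. The roots mod p are found by brute force (toy sizes only, in place of Tonelli-Shanks), and the slack value here is an arbitrary illustrative choice, not Contini's threshold.

```python
from math import isqrt, log

def qs_sieve_block(N, M, factor_base, slack=2.0):
    """Toy QS sieve over Q(x) = (x+m)^2 - N for x in [0, M):
    returns the x whose accumulated log weight suggests Q(x) is
    probably smooth over factor_base."""
    m = isqrt(N) + 1
    logs = [0.0] * M
    for p in factor_base:
        # roots of (x+m)^2 == N (mod p), brute forced (fine for tiny p)
        for r in range(p):
            if (r + m) ** 2 % p == N % p:
                for x in range(r, M, p):
                    logs[x] += log(p)   # one log p per root hit (powers missed)
    cands = []
    for x in range(M):
        q = (x + m) ** 2 - N
        # generous threshold: within `slack` largest-prime-logs of full size
        if q > 0 and logs[x] >= log(q) - slack * log(factor_base[-1]):
            cands.append(x)
    return cands

def is_smooth(q, factor_base):
    """Confirm candidates by trial division over the factor base."""
    for p in factor_base:
        while q % p == 0:
            q //= p
    return q == 1

N = 87463
fb = [2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41]
for x in qs_sieve_block(N, 200, fb):
    q = (x + isqrt(N) + 1) ** 2 - N
    print(x, q, is_smooth(q, fb))
```

Because the sieve only adds log p once per root, prime powers are undercounted, which is exactly why the slack term is needed before trial division.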
|
|
|