mersenneforum.org  

Old 2009-03-21, 20:19   #34
Raman ("Mr. Tuch", Dec 2007, Chennai, India)

Discrete Logarithms

In the meantime, I tried to implement the various algorithms available for solving the discrete logarithm problem:
Given g, t, p, find the least positive value of l such that
g^l = t (mod p).

First, I tried the brute force algorithm. Then, with Pollard's Rho algorithm for discrete logarithms, I realized that it could be much faster than the trial and error method.
Let x = t^a g^b (mod p), keeping the exponents a, b mod phi(p).
Iterate x, a, b by using a pseudo-random function.
If two values of x are the same mod p, but with pairs (a, b) distinct mod phi(p), then
t^(a1-a2) = g^(b2-b1)
g^(l(a1-a2)) = g^(b2-b1)
l = (b2-b1)/(a1-a2) mod phi(p)

Function: starting with a=1, b=1, x = tg mod p:
Iterate according to:
x = tx mod p, a++ (mod phi(p)), if x < p/3
x = x^2 mod p, a = 2a mod phi(p), b = 2b mod phi(p), if p/3 < x < 2p/3
x = gx mod p, b++ (mod phi(p)), if x > 2p/3

Again, the tortoise-and-hare algorithm (Floyd's cycle finding) is used, as it was for factoring with Pollard's Rho:
iterate x1 once and x2 twice per step, comparing the ith iterate of the first sequence with the 2ith iterate of the second sequence.
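The iteration and collision step described above can be sketched as follows. This is a minimal sketch, not the attached code: it assumes p is prime and g generates the full group of order p − 1, and the function name rho_dlog is mine.

```python
# Sketch of Pollard's Rho for discrete logs with Floyd cycle-finding,
# assuming prime p and a generator g of the group of order n = p - 1.
from math import gcd

def rho_dlog(g, t, p):
    n = p - 1  # group order, phi(p) for prime p

    def step(x, a, b):
        # Partition by the value of x, as in the post.
        if x < p // 3:
            return (t * x) % p, (a + 1) % n, b
        elif x < 2 * p // 3:
            return (x * x) % p, (2 * a) % n, (2 * b) % n
        else:
            return (g * x) % p, a, (b + 1) % n

    # Start with x = t^1 * g^1, i.e. a = b = 1.
    x, a, b = (t * g) % p, 1, 1
    X, A, B = x, a, b
    while True:
        x, a, b = step(x, a, b)           # tortoise: one step
        X, A, B = step(*step(X, A, B))    # hare: two steps
        if x == X:
            break
    # t^(a-A) = g^(B-b)  =>  l*(a-A) = B-b (mod n)
    r, s = (a - A) % n, (B - b) % n
    d = gcd(r, n)
    if d == 1:
        return (s * pow(r, -1, n)) % n
    if s % d:
        return None
    # gcd > 1: check the d candidate solutions.
    m = n // d
    l0 = (s // d) * pow(r // d, -1, m) % m
    for k in range(d):
        l = l0 + k * m
        if pow(g, l, p) == t % p:
            return l
    return None
```

For example, with the prime p = 1019 and primitive root g = 2, rho_dlog(2, pow(2, 555, 1019), 1019) recovers an l with 2^l = 2^555 (mod 1019).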

Next is Pollard's Lambda, well suited for a large collection of machines, so it is not implemented here: each machine chooses a random value of r at the beginning and iterates according to some pseudo-random function. Then each machine checks for a collision...

What I have implemented next is the Baby Steps, Giant Steps algorithm: write l as l0 + l1·b, where b = ceiling(sqrt(N)) and N is the group order.
g^l = t mod p, so g^(l0 + l1·b) = t mod p, so g^(l1·b) = t·h^(l0) mod p, where h = g^(-1) mod p.

So finally we have one array containing g^(ib) for i = 0 to b-1, and another array containing t·h^j for j = 0 to b-1. We apply some quick sorting algorithm, search for a collision, and then calculate l = ib + j.
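A minimal sketch of that scheme, assuming prime p and order n = p − 1; it uses a hash table in place of the two sorted arrays, which is the same collision search:

```python
# Baby-Step Giant-Step sketch: collide g^(i*b) against t * h^j,
# where h = g^(-1) mod p, giving l = i*b + j.
from math import isqrt

def bsgs(g, t, p):
    n = p - 1
    b = isqrt(n) + 1                  # b = ceil(sqrt(n))
    # Baby steps: store t * h^j for j = 0..b-1.
    h = pow(g, -1, p)
    baby = {}
    val = t % p
    for j in range(b):
        baby.setdefault(val, j)
        val = (val * h) % p
    # Giant steps: compare g^(i*b) against the table.
    gb = pow(g, b, p)
    val = 1
    for i in range(b + 1):
        if val in baby:
            return i * b + baby[val]  # l = i*b + j
        val = (val * gb) % p
    return None
```

For instance, bsgs(2, pow(2, 777, 1019), 1019) returns 777, since 2 is a primitive root mod 1019.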

Finally, I have implemented the Pohlig-Hellman algorithm, with a hybrid of the brute force discrete logarithm method and Pollard's Rho working in the subgroups.

The index calculus method is more like the quadratic sieve: collect relations, powers of g that are smooth over the factor base, then merge these relations by using linear algebra to get a vector sum equal to a known smooth power of t, and finally calculate the value of l from it. But it is not feasible for me to implement this algorithm for the time being.
Attached Files
File Type: zip discrete logarithms.zip (8.1 KB, 176 views)

Last fiddled with by Raman on 2009-03-21 at 20:40 Reason: The Discrete Logarithms Problem.
Old 2009-03-22, 00:09   #35
alpertron (Aug 2002, Buenos Aires, Argentina)

How does it compare against http://www.alpertron.com.ar/DILOG.HTM ?
Old 2009-03-22, 05:20   #36
Raman ("Mr. Tuch", Dec 2007, Chennai, India)

Quote:
Originally Posted by alpertron View Post
How does it compare against http://www.alpertron.com.ar/DILOG.HTM ?
I am not an expert. I am doing these things rapidly for my college project work. I think that implementing these things will be a good project, right?

I came to know about the Pohlig-Hellman algorithm only by seeing your applet. Before that, I was about to implement Pollard's Rho for Discrete Logarithms and Baby Steps, Giant Steps only.

Regarding performance, yours should be much better, since I assume you have been working on optimizations for years. In your applet, you have written that it works in a reasonable amount of time if the largest prime factor of phi(p) is less than 10^17. For my implementation, even with Pollard's Rho in its place for finding the discrete logarithm in the subgroup, this factor would be about 10^14.

I have tested that my Pohlig-Hellman program works well only if the modulus is prime. Otherwise, Pollard's Rho and Baby Steps, Giant Steps are better suited, but their performance for larger moduli is slower, even if phi(p) is smooth, where p is composite.

Your applet does ECM and SIQS for larger values of p and phi(p), which can factor them within a shorter span of time. My implementation only does ordinary brute-force trial division and then Pollard's Rho for factoring p and phi(p).

Last fiddled with by Raman on 2009-03-22 at 05:24
Old 2009-03-22, 19:06   #37
alpertron (Aug 2002, Buenos Aires, Argentina)

Factorization is just a few percent of the time needed to find the discrete logarithm. I used the code of my factoring applet because it was ready at the time I developed the discrete logarithm applet. Notice that the Rho code runs faster if you use Brent's variation.
Old 2009-03-22, 19:35   #38
fivemack (Feb 2006, Cambridge, England)

I've never managed to implement the index-calculus dlog usefully; I can't see how it avoids requiring a truly enormous factor base or spending nearly all of its time in unproductive factorisations, because I don't see how you can use a sieve - there's no nice linear pattern to the Q with (3^Q mod p) == 0 mod 7. I'm clearly missing something obvious: where should I be looking?
Old 2009-03-22, 20:00   #39
alpertron (Aug 2002, Buenos Aires, Argentina)

Obviously I've never programmed the index-calculus method. But from what I read, the main problem is the matrix to solve, because its elements are numbers with the same bit length as the group order, and not just bits whose values are zero or one.

The sieve is not difficult at all, if you programmed the quadratic sieve before.
Old 2009-03-22, 20:14   #40
Raman ("Mr. Tuch", Dec 2007, Chennai, India)

Quote:
Originally Posted by alpertron View Post
Factorization is just a few percent of the time needed to find the discrete logarithm. I used the code of my factoring applet because it was ready at the time I developed the discrete logarithm applet.
Yes, the factorization takes some of the time, and the discrete logarithm is immediate if phi(p) is smooth. But if phi(p) has a large prime factor, say q, then it takes time to determine another discrete logarithm within the subgroup of order q.

It takes time to determine the x for which g^(x·phi(p)/q) = t^(phi(p)/q) mod p. Once we have the various congruences l = x_i mod q_i^(a_i), where the q_i are the distinct prime factors of phi(p), we can combine them by using the Chinese Remainder Theorem.
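The subgroup-plus-CRT reduction described above can be sketched like this. It is a toy, assuming prime p with primitive root g; the subgroup logarithms are found by brute force (Pollard's Rho would be substituted for large factors), and the helper names are mine.

```python
# Pohlig-Hellman sketch: solve the log in each subgroup of order q^a
# dividing n = p - 1, then combine the congruences with the CRT.

def factorize(n):
    # Trial division, returning {prime: exponent}.
    f, d = {}, 2
    while d * d <= n:
        while n % d == 0:
            f[d] = f.get(d, 0) + 1
            n //= d
        d += 1
    if n > 1:
        f[n] = f.get(n, 0) + 1
    return f

def pohlig_hellman(g, t, p):
    n = p - 1
    residues, moduli = [], []
    for q, a in factorize(n).items():
        qa = q ** a
        # Project into the subgroup of order q^a.
        gq, tq = pow(g, n // qa, p), pow(t, n // qa, p)
        # Brute-force x with gq^x = tq (Rho/BSGS would go here).
        x, val = 0, 1
        while val != tq:
            val = (val * gq) % p
            x += 1
        residues.append(x)   # l = x (mod q^a)
        moduli.append(qa)
    # Chinese Remainder Theorem, built up incrementally.
    l, M = 0, 1
    for x, m in zip(residues, moduli):
        diff = (x - l) % m
        l += M * (diff * pow(M, -1, m) % m)
        M *= m
    return l % n
```

With p = 1019, phi(p) = 1018 = 2 · 509, so the two subgroup logs are cheap and pohlig_hellman(2, pow(2, 314, 1019), 1019) returns 314.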

Quote:
Originally Posted by alpertron View Post
Notice that the Rho code runs faster if you use Brent's variation.
Actually, the first time I implemented Brent's version, it was slower than Floyd's cycle-finding method, so I will have to try again... Brent's variation compares the iterate x(2^i) with each of the iterates x(j) for 2^i < j <= 2^(i+1), for everyone else's information.
Old 2009-03-22, 20:31   #41
alpertron (Aug 2002, Buenos Aires, Argentina)

Quote:
Originally Posted by Raman View Post
Actually, the first time I implemented Brent's version, it was slower than Floyd's cycle-finding method, so I will have to try again... Brent's variation compares the iterate x(2^i) with each of the iterates x(j) for 2^i < j <= 2^(i+1), for everyone else's information.
There are more iterations in Brent's variation, but each iteration is a lot faster than in Floyd's case.
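For contrast with Floyd, here is a sketch of Brent's variation applied to Rho factoring: the position x(2^i) is frozen while j runs through 2^i < j <= 2^(i+1), so each step costs one squaring instead of three. The batching of gcds into a running product, and the batch size 128, are common additions of mine, not part of the discussion above.

```python
# Pollard's Rho factoring with Brent's cycle-finding, f(x) = x^2 + c mod n.
from math import gcd

def brent_cycle(n, c):
    y, r, q, d = 2, 1, 1, 1
    while d == 1:
        x = y                          # freeze x = x_{2^i} for this round
        for _ in range(r):
            y = (y * y + c) % n
        k = 0
        while k < r and d == 1:
            ys = y                     # remember position for backtracking
            for _ in range(min(128, r - k)):
                y = (y * y + c) % n
                q = q * abs(x - y) % n # batch the gcds via a running product
            d = gcd(q, n)
            k += 128
        r *= 2                         # next round compares against x_{2^{i+1}}
    if d == n:                         # batch overshot: redo one step at a time
        while True:
            ys = (ys * ys + c) % n
            d = gcd(abs(x - ys), n)
            if d > 1:
                return d
    return d

def brent_rho(n):
    if n % 2 == 0:
        return 2
    for c in range(1, 10):             # retry with another constant on failure
        d = brent_cycle(n, c)
        if d != n:
            return d
    return n
```

For example, brent_rho(10403) splits 101 · 103.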
Old 2009-03-22, 23:30   #42
fivemack (Feb 2006, Cambridge, England)

I couldn't immediately figure out where the sieve was for index calculus, but page 15 of http://www.cs.toronto.edu/~cvs/dlog/research_paper.pdf was informative:

take H=ceiling(sqrt p) with H^2=p+j, pick some c1.

Then (H+c1)(H+c2) = H^2 + (c1+c2)H + c1c2 == j + (c1+c2)H + c1c2 mod p, which you can now sieve over c2.

Each sieve hit gives you log(H+c1) + log(H+c2) - {sum ei log pi} = 0, and if c1 and c2 aren't much bigger than sqrt(factor base size) then the matrix doesn't end up impossibly skinny.

(Say p=10^60 and the factor base is the 9592 primes below 10^5, so the probability that a number around sqrt(p) splits is about 1/50000. We want C^2/50000 > C + 9592, which means C ~ 60000; that's quite a big input matrix, so you have to do the filtering quite carefully.)
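A toy sketch of the relation search just described, for a small prime p: for each fixed c1 we test (H+c1)(H+c2) mod p for smoothness over the factor base. This trial-divides rather than sieving over c2 with logs, which is what a real implementation would do; the function name and parameters are mine.

```python
# Index-calculus relation search over (H+c1)(H+c2) == j + (c1+c2)H + c1*c2
# (mod p), where H = ceil(sqrt(p)) and H^2 = p + j.
from math import isqrt

def smooth_relations(p, fb_bound, c_range):
    H = isqrt(p) + 1
    primes = [q for q in range(2, fb_bound)
              if all(q % r for r in range(2, isqrt(q) + 1))]
    rels = []
    for c1 in range(c_range):
        for c2 in range(c1, c_range):
            v = ((H + c1) * (H + c2)) % p
            e, n = {}, v
            for q in primes:
                while n % q == 0:
                    e[q] = e.get(q, 0) + 1
                    n //= q
            if n == 1:   # fully smooth: one relation
                # log(H+c1) + log(H+c2) = sum e_q * log(q)  (mod p-1)
                rels.append((c1, c2, e))
    return rels
```

With p = 10007, fb_bound = 50, c_range = 30, this already finds relations such as (H+0)(H+2) = 396 = 2^2 · 3^2 · 11.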
Old 2009-04-13, 19:49   #43
Raman ("Mr. Tuch", Dec 2007, Chennai, India)

Primality Tests

I also implemented some primality tests, which are quite simple.
(Though the performance is poor; I care about the algorithm alone, and improvements can be done later on.)

Fermat's Little Theorem: PRP test
If a^{p-1} = 1\ (mod\ p), then p may be prime.

Miller-Rabin SPRP test:
If a^{\frac{p-1}{2^r}} = 1\ (mod\ p), then a^{\frac{p-1}{2^{r+1}}} = \pm 1\ (mod\ p)
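The SPRP condition above can be sketched in the usual form: write p − 1 = d · 2^s, then p passes to base a if a^d = 1 or a^(d·2^r) = −1 (mod p) for some r < s. The round count is an assumption of mine.

```python
# Miller-Rabin SPRP test sketch.
import random

def is_sprp(p, a):
    d, s = p - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    x = pow(a, d, p)
    if x == 1 or x == p - 1:
        return True
    for _ in range(s - 1):
        x = (x * x) % p
        if x == p - 1:          # hit -1: square root chain is consistent
            return True
    return False                # chain reached 1 without passing -1

def miller_rabin(p, rounds=20):
    if p < 2:
        return False
    if p in (2, 3):
        return True
    if p % 2 == 0:
        return False
    return all(is_sprp(p, random.randrange(2, p - 1)) for _ in range(rounds))
```

This correctly rejects Carmichael numbers such as 561, which fool the plain Fermat PRP test for most bases.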

Pepin's Test for Fermat numbers:
F_n = 2^{2^n}+1 is prime if and only if 3^{\frac{F_n-1}{2}} = -1\ (mod\ F_n), n \ge 1

Proth's Theorem for Sierpinski numbers:
N = k \cdot 2^n+1, k < 2^n
N is prime if a^{\frac{N-1}{2}} = -1\ (mod\ N) for some a.

Or, if any partial factorization of N-1 is known, N-1 = F \cdot R, F > \sqrt N, F completely factored: then N is prime if and only if, for some a, a^{N-1} = 1\ (mod\ N) and \gcd(a^{\frac{N-1}{q}} - 1, N) = 1 for every prime q dividing F.
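A sketch of that N−1 (Pocklington) test; the helper name and the choice of base are mine, and the base a must be chosen so the gcd condition holds (a quadratic non-residue works when F is a power of 2).

```python
# Pocklington N-1 test sketch: N-1 = F*R with F > sqrt(N) fully factored.
from math import gcd, isqrt

def pocklington(N, F_primes, a=2):
    # F_primes: the distinct primes of the fully factored part F.
    F = 1
    for q in F_primes:
        qk = q
        while (N - 1) % (qk * q) == 0:
            qk *= q
        F *= qk
    assert (N - 1) % F == 0 and F > isqrt(N), "need F > sqrt(N)"
    if pow(a, N - 1, N) != 1:
        return False
    # Every prime q | F must satisfy gcd(a^((N-1)/q) - 1, N) = 1.
    return all(gcd(pow(a, (N - 1) // q, N) - 1, N) == 1 for q in F_primes)

# pocklington(769, [2], a=7)  ->  True (769 is prime; 7 is a non-residue)
# pocklington(769, [2], a=2)  ->  False (2 is a QR mod 769, wrong base)
```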

Lucas-Lehmer Test for Mersenne Numbers:
Lucas sequence: S_1 = 4
For n = 2 to p-1, S_n = S_{n-1}^2 - 2\ (mod\ 2^p-1)
For odd prime p, 2^p-1 is prime if and only if S_{p-1} = 0.
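A direct transcription of the test above, as a minimal sketch:

```python
# Lucas-Lehmer test: S_1 = 4, S_n = S_{n-1}^2 - 2 (mod 2^p - 1),
# and for odd prime p, 2^p - 1 is prime iff S_{p-1} = 0.

def lucas_lehmer(p):
    m = (1 << p) - 1
    s = 4
    for _ in range(p - 2):    # computes S_2 .. S_{p-1}
        s = (s * s - 2) % m
    return s == 0
```

Checking the small exponents: among p = 3, 5, 7, 11, 13 the test accepts 3, 5, 7, 13 and rejects 11 (2^11 − 1 = 23 · 89).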

Lucas-Lehmer-Riesel Test Extension for Riesel Numbers:
N = k \cdot 2^n-1, k < 2^n
If k=1, S_1 = 4 works for all odd n,
and S_1 = 3 works for all n = 3\ (mod\ 4)

If k=3, S_1 = 5778 works for all n = 0, 3\ (mod\ 4)

If k=6x \pm 1, x \in \mathbb{Z}, then S_1 = (2+\sqrt 3)^k + (2-\sqrt 3)^k,
the kth term of the Lucas sequence with P = 4, Q = 1.

Otherwise, when k is a multiple of 3, the starting value S_1 should be chosen carefully.
Try values of v starting from 3, and let D be the squarefree part of v^2-4.
Let v = \alpha + \alpha^{-1}, and so v satisfies the equation
\alpha^2 - v \alpha + 1 = 0

a and r are obtained from the equations
\alpha = \frac{(a+b \sqrt D)^2}{r} and r = |a^2 - b^2D|, b = 1.
Let h be equal to
h = \sqrt{\frac{v^2-4}{D}}
Then, \alpha = \frac{v+\sqrt{v^2-4}}{2} = \frac{v+h \sqrt D}{2}

This corresponds to \frac{(a^2+b^2D) + 2ab \sqrt D}{r}
so that we can now easily solve for a and r.
These should satisfy the Legendre symbol conditions:
\left( \frac{D}{N} \right) = -1
\left( \frac{r}{N} \right) \cdot \frac{a^2 - b^2D}{r} = -1, where \frac{a^2 - b^2D}{r} = \pm 1 since r = |a^2 - b^2D|.

Then the starting value S_1 is taken to be \alpha^k + \alpha^{-k}, that is, the kth term of the Lucas sequence with P = v, Q = 1, satisfying the Lucas equation x^2 - Px + Q = 0, i.e.
L_0 = 2
L_1 = P
L_n = PL_{n-1} - QL_{n-2}, n \ge 2
L_{2n} = L_n^2 - 2Q^n
L_{2n-1} = L_nL_{n-1} - PQ^{n-1}
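The doubling identities listed above give a fast way to reach the starting value S_1 = α^k + α^(−k). This sketch assumes Q = 1, as in the post, so L_{2n} = L_n^2 − 2 and L_{2n-1} = L_n L_{n-1} − P; the function name is mine.

```python
# Compute the Lucas term V_k (Q = 1) mod m with a binary ladder over
# the pair (V_n, V_{n+1}), using V_{2n} = V_n^2 - 2 and
# V_{2n+1} = V_n * V_{n+1} - P.

def lucas_v(P, k, m):
    v0, v1 = 2, P % m                  # V_0 = 2, V_1 = P
    for bit in bin(k)[2:]:
        if bit == '1':
            # (n, n+1) -> (2n+1, 2n+2)
            v0, v1 = (v0 * v1 - P) % m, (v1 * v1 - 2) % m
        else:
            # (n, n+1) -> (2n, 2n+1)
            v0, v1 = (v0 * v0 - 2) % m, (v0 * v1 - P) % m
    return v0
```

With P = 4 this reproduces (2+√3)^k + (2−√3)^k: the sequence runs 2, 4, 14, 52, 194, 724, ...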
Attached Files
File Type: zip Prime.zip (5.3 KB, 186 views)

Last fiddled with by Raman on 2009-04-13 at 20:25
Old 2009-04-13, 20:49   #44
Raman ("Mr. Tuch", Dec 2007, Chennai, India)

How is the quadratic sieve best implemented?

How is the ordinary quadratic sieve best implemented?
We start to sieve (x+b)^2 - N for x,
with b = \lceil \sqrt N \rceil, starting with x = 0.

We can also sieve for negative values of x, provided that
-1 is taken as a factor base element, and that a prime p
divides (x+b)^2 - N only for those values of x that are
congruent to x \equiv \pm \sqrt N - b\ (mod\ p), where \sqrt N is a square root of N mod p.
If this implementation is successful, then I will go ahead and implement
MPQS and SIQS, with polynomials (ax+b)^2 - N, with
a dividing b^2 - N.

Now tell me: which technique is better for implementing the sieve in the ordinary QS?

1) For every block of, say, 100000 values of (x+b)^2 - N, you calculate each value, and then, for each factor base prime, you divide the values at regular intervals by that prime until it no longer divides the residue. You store the original value and the final residue, and consider it a smooth relation if the residue is 1, or below the factor base limit, or below the upper bound for the large primes.

2) For every factor base prime dividing (x+b)^2 - N at periodic intervals of x, starting from the first such x, you add the logarithm of that prime to an accumulated log value for that x, and then flag as candidate smooth relations those x with the largest accumulated log values.

The disadvantage of this approach is that it does not capture the prime powers (any idea how the prime powers should be captured?), and thus it does not find all the relations that the first method finds. In practice, it recovers fewer than half of them.

By the way, how high should the accumulated log value be in order to flag a given relation as a possible smooth relation? Scott Contini's paper mentions the values are around 2x \sqrt N, but that threshold makes no provision for prime powers or large primes, so only a small fraction of the actual relations get flagged this way.

So please give me an idea of how high the log threshold needs to be, in order to flag a relation as a possible smooth relation.
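The log-sieve variant (option 2) with the adjustment these questions are after can be sketched as follows: subtract a slack term from the threshold to allow for one large prime and for the prime powers the log sieve misses. The slack constant and block size here are illustrative assumptions, not tuned values, and the modular square root is brute-forced instead of Tonelli-Shanks.

```python
# Toy QS log sieve: accumulate log2(p) at every x with p | (x+b)^2 - N,
# then flag x whose accumulated logs come close to log2 of the value.
from math import isqrt, log2

def qs_log_sieve(N, fb, block):
    b = isqrt(N) + 1                        # b = ceil(sqrt(N))
    logs = [0.0] * block
    for p in fb:
        # Square root of N mod p by brute force (toy stand-in for
        # Tonelli-Shanks); skip primes where N is a non-residue.
        r = next((s for s in range(p) if (s * s) % p == N % p), None)
        if r is None:
            continue
        for root in {r, (-r) % p}:
            start = (root - b) % p          # x = +-sqrt(N) - b (mod p)
            for x in range(start, block, p):
                logs[x] += log2(p)
    # Slack allows for one large prime (up to ~fb_max^2) and for the
    # prime powers the log sieve misses; the constant is illustrative.
    slack = 2 * log2(fb[-1]) + 1
    return [x for x in range(block)
            if logs[x] >= log2((x + b) ** 2 - N) - slack]
```

On the textbook example N = 15347 with factor base {2, 17, 23, 29}, the sieve flags x = 0 (value 29) and x = 3 (value 782 = 2 · 17 · 23) among its candidates; candidates still need trial division to confirm smoothness.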
Attached Files
File Type: txt QS.txt (15.7 KB, 264 views)

Last fiddled with by Raman on 2009-04-13 at 20:53