#1
Sep 2009
3·5 Posts
To make an RSA public key more compact, and at the same time simplify modular reduction, it is possible to choose the composite public modulus n = p·q, of m bits, with the leading b bits equal to 1 (and, for appropriately moderate b, still have p a mostly random prime of about m/2 bits, q prime, and perhaps some lower bound on the largest prime factor of p-1, p+1, q-1, q+1). In this case, we have n = 2^m - s with s of m-b bits.
Question: how large can we choose b without making n significantly easier to factor with some form of NFS than it would be with arbitrary leading bits? Has the art of GNFS polynomial selection matured to the point that there is a firm answer? TIA, François Grieu
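The reduction shortcut alluded to above can be sketched in a few lines. This is a minimal Python toy, assuming n = 2^m - s with s much smaller than 2^m; the function and parameter names are mine, not from the post:

```python
# Illustrative sketch: reduce x modulo n = 2^m - s without division,
# using the identity 2^m ≡ s (mod n) to fold the high part down.
def reduce_mod(x, m, s):
    n = (1 << m) - s
    mask = (1 << m) - 1
    while x >> m:                      # while x has more than m bits
        x = (x >> m) * s + (x & mask)  # x = hi*2^m + lo ≡ hi*s + lo (mod n)
    if x >= n:                         # at most one final subtraction
        x -= n
    return x
```

This is the standard pseudo-Mersenne trick: each pass replaces a multiply-by-2^m with a multiply by the much smaller s, so a full division never occurs.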
#2
Tribal Bullet
Oct 2004
3²×5×79 Posts
See this thread, starting around post #53, for a reference to the Blomer-May attack. This can find the private exponent when it is much smaller than the RSA modulus, and it can also find limited classes of private exponents larger than that. It becomes more effective when the factors of the RSA modulus are close together, as they must be in your scheme.
#3
Oct 2007
106₁₀ Posts
I'm not sure this is what you want, but the Rabin-Williams based signature scheme proposed by Dan Bernstein does shorten the public key by fixing a public constant c and representing the public key as (pq mod 2^512) + c. He claims this does not affect the strength of the scheme for a random c; not sure about c = 111..11. Anyway, check this link:
http://cr.yp.to/sigs/key.html
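The compression idea can be sketched roughly as follows. This is a hedged Python toy that assumes the modulus is generated so that its bits above position 512 equal a fixed public constant c; it glosses over how such a modulus is actually produced, and all names are mine (see Bernstein's page for the real scheme):

```python
K = 512  # number of low bits actually transmitted (illustrative)

def compress(n):
    # Publish only the low K bits; the high bits are the shared constant c.
    return n % (1 << K)

def decompress(low_bits, c):
    # Reconstruct the full modulus from the public constant and the low bits.
    return (c << K) + low_bits
```

The public key thus shrinks from ~1024 bits to 512 bits, since everyone already knows c.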
#4
Sep 2009
1111₂ Posts
Quote:
François Grieu
#5
Sep 2009
3·5 Posts
Quote:
On the other hand, there is an advantage to fixing the high bits of n to many ones (or a 1 followed by many zeroes): it simplifies modular reduction. There are comments on this practice in some ISO standards, and it is possibly still used (I have no currently deployed example in mind). Hence the question: starting from how many identical fixed high bits does it significantly help NFS factoring as currently practiced, by allowing selection of a particularly good polynomial?

François Grieu

Last fiddled with by fgrieu on 2009-09-23 at 06:29
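As a toy illustration of the kind of key generation the question assumes (not a vetted routine; all names and parameter choices are mine), one can pick p at random and then choose q near target/p so that n = pq lands in the range whose top b bits are all 1:

```python
import random

def is_prime(n, rounds=20):
    # Miller-Rabin probabilistic primality test (illustrative, not vetted).
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37):
        if n % p == 0:
            return n == p
    d, r = n - 1, 0
    while d % 2 == 0:
        d //= 2
        r += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(r - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False
    return True

def next_prime(x):
    x += 1
    while not is_prime(x):
        x += 1
    return x

def gen_modulus(m, b):
    # Smallest m-bit integer whose top b bits are all 1: 2^m - 2^(m-b).
    target = (1 << m) - (1 << (m - b))
    while True:
        # p: mostly random prime of about m/2 bits (top bit forced).
        p = next_prime(random.getrandbits(m // 2) | (1 << (m // 2 - 1)))
        # q: prime just above target/p, so n = p*q lands in [target, 2^m).
        q = next_prime(target // p)
        n = p * q
        if target <= n < (1 << m):
            return n, p, q
```

The prime gap above target//p is tiny compared with 2^(m-b) for moderate b, so the range check almost always succeeds on the first try.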
#6
"Bob Silverman"
Nov 2003
North of Boston
2²×1,877 Posts
Quote:
The one thing that can make an integer N easier to factor by NFS is if it has low Hamming weight.
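For concreteness (my own toy numbers, not from the thread): when N = 2^m - s with small s, an SNFS-style polynomial with tiny coefficients falls out immediately, which is the reason such N are easier:

```python
# Toy illustration: for N = 2^m - s with small s, the identity
# 2^m ≡ s (mod N) yields an SNFS polynomial with tiny coefficients,
# e.g. f(x) = x^d - s*2^(d*k - m) with root x = 2^k mod N.
m, s, d = 512, 12345, 5
N = (1 << m) - s
k = -(-m // d)        # ceil(m/d); here k = 103, so d*k - m = 3
root = 1 << k
# f(root) = 2^(d*k) - s*2^(d*k-m) = 2^(d*k-m) * (2^m - s) ≡ 0 (mod N)
assert (root ** d - s * (1 << (d * k - m))) % N == 0
```

Note that for the question's n = 2^m - s with s of m - b bits, the constant coefficient has roughly m - b bits, so this construction only beats general polynomial selection when b is a large fraction of m.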
#7
Sep 2009
15₁₀ Posts
Quote:
François Grieu
#8
"Bob Silverman"
Nov 2003
North of Boston
1D54₁₆ Posts
Quote:
Such numbers tend to be easier by NFS.