mersenneforum.org  

2009-09-21, 22:32   #1
fgrieu

Fixed leading bits in RSA modulus, vs NFS

To make an RSA public key more compact, and at the same time simplify modular reduction, it is possible to choose the composite public modulus n = p·q, of m bits, with the leading b bits equal to 1 (and, for appropriately moderate b, still have p a mostly random prime of about m/2 bits, q prime, and perhaps some lower bound on the largest prime factor of each of p-1, p+1, q-1, q+1). In this case, we have n = 2^m - s with s of m-b bits.
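To make the construction concrete, here is an illustrative (not production-grade) Python sketch: fix a random prime p about one bit shorter than m/2, then search for a prime q so that p·q lands in the all-ones window. The function names and parameters are mine, and the primality test is a plain Miller-Rabin, just enough for a sketch.

```python
# Sketch (illustrative, not production code): generate an m-bit n = p*q
# whose top b bits are all 1, by fixing p and searching for a prime q
# that lands p*q in the target window.
import random

def is_probable_prime(n, rounds=32):
    # Miller-Rabin with random bases; adequate for a sketch.
    if n < 2:
        return False
    for sp in (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37):
        if n % sp == 0:
            return n == sp
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False
    return True

def gen_modulus(m, b):
    # Every n in [lo, hi] has its top b bits equal to 1.
    lo = 2**m - 2**(m - b)   # 11...1 00...0
    hi = 2**m - 1            # 11...1 11...1
    while True:
        # p is about one bit shorter than q, so that p and q
        # are not close together.
        p = random.randrange(2**(m // 2 - 2), 2**(m // 2 - 1)) | 1
        while not is_probable_prime(p):
            p += 2
        # Smallest odd q with p*q > lo; step upward until a prime hits.
        q = (lo // p + 1) | 1
        while p * q <= hi:
            if p * q >= lo and is_probable_prime(q):
                return p, q, p * q
            q += 2
        # No prime q hit the window for this p; draw another p.

p, q, n = gen_modulus(256, 16)
```

The window [lo, hi] has width 2^(m-b), so the q-search has about 2^(m/2-b) candidates; for moderate b there are plenty of primes among them.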

Question: how large can we make b without making n significantly easier to factor with some form of NFS than it would be with arbitrary leading bits? Has the art of GNFS polynomial selection matured to the point that there is a firm answer?

TIA,
François Grieu
2009-09-21, 22:57   #2
jasonp (Tribal Bullet)

See this thread starting around post #53 for a reference to the Blömer-May attack. This can find the private exponent when it is much smaller than the RSA modulus, and can also find limited classes of private exponents that are larger than this. It becomes more effective when the factors of the RSA modulus are close together, as they must be in your scheme.
2009-09-21, 23:33   #3
Robert Holmes

I'm not sure this is what you want, but the Rabin-Williams-based signature scheme proposed by Dan Bernstein does shorten the public key by fixing a public constant c and representing the public key as (pq mod 2^512) + c. He claims this does not affect the strength of the scheme for a random c --- I'm not sure about c = 111...11. Anyway, check this link:

http://cr.yp.to/sigs/key.html
2009-09-22, 05:13   #4
fgrieu

Quote:
Originally Posted by jasonp View Post
(..) the Blömer-May attack (..) can find the private exponent when it is much smaller than the RSA modulus, and can also find limited classes of private exponents that are larger than this. It becomes more effective when the factors of the RSA modulus are close together, as they must be in your scheme.
I am fixing bits in the public modulus, not the private exponent. And the scheme does not imply that p and q are close together. For even m, it does imply that we choose p and q with (at least) a 1-bit difference in length; otherwise p and q would indeed be close together and Fermat factoring would be a possibility.
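To illustrate why close p and q would be fatal: Fermat's method writes n = a^2 - b^2 = (a-b)(a+b), starting from a = ceil(sqrt(n)), and succeeds after about (p-q)^2/(8·sqrt(n)) steps. A small Python sketch (the two primes below are just illustrative values straddling 2^30):

```python
# Sketch: Fermat factorization finds n = p*q almost instantly
# when p and q are close together.
import math

def fermat_factor(n, max_steps=10**6):
    a = math.isqrt(n)
    if a * a < n:
        a += 1
    for _ in range(max_steps):
        b2 = a * a - n          # look for a perfect square a^2 - n
        b = math.isqrt(b2)
        if b * b == b2:
            return a - b, a + b  # n = (a-b)(a+b)
        a += 1
    return None                  # gave up: factors not close enough

# Two primes only 38 apart: the very first candidate a succeeds.
p, q = 1073741789, 1073741827
print(fermat_factor(p * q))
```

With a 1-bit length difference between p and q, the gap p - q is on the order of sqrt(n) itself and the method becomes useless.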

François Grieu
2009-09-23, 06:14   #5
fgrieu

Quote:
Originally Posted by Robert Holmes View Post
(..) the Rabin-Williams based signature scheme proposed by Dan Bernstein does shorten the public key by fixing a public constant c and representing the public key as (pq mod 2^512) + c. He claims this does not affect the strength of the scheme for a random c --- not sure about c = 111..11 (..)
Yes; Bernstein gives credit to Coppersmith for the technique. A great feature is the strong argument that their system is not seriously weakened by fixing the high bits and choosing p and q (in their particular manner): when c varies, the probability that n can be factored is close to that of traditional RSA; thus by picking c at random, we have negligible odds that it makes n much easier to factor.

On the other hand, there is an advantage to fixing high bits of n to many ones (or 1 followed by many zeroes): it simplifies modular reduction. There are comments on this practice in some ISO standards, and it is possibly still used (I have no currently deployed example in mind).
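The simplification is the classical folding trick: since 2^m ≡ s (mod n) when n = 2^m - s, the high part of an operand can be folded down with a multiply-add instead of a division. A sketch (the parameters m and s below are illustrative, chosen so s has few bits):

```python
# Sketch: reduction modulo n = 2^m - s without division.
# Write x = hi*2^m + lo; since 2^m ≡ s (mod n), x ≡ hi*s + lo (mod n).
def reduce_mod(x, m, s):
    n = 2**m - s
    while x >> m:                       # while x has more than m bits
        hi, lo = x >> m, x & ((1 << m) - 1)
        x = hi * s + lo                 # fold: multiply-add, no division
    while x >= n:                       # at most a few final subtractions
        x -= n
    return x

m, s = 64, 0xFFFF                       # illustrative: s fits in 16 bits
n = 2**m - s
x = 1234567890123456789012345678901234567
print(reduce_mod(x, m, s) == x % n)
```

Each fold shrinks x by roughly m - bitlength(s) bits, so the smaller s is (i.e. the more fixed leading ones in n), the faster the reduction.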

Hence the question: starting with how many identical fixed high bits does it significantly help NFS factoring as currently practiced, by allowing selection of a particularly good polynomial?

François Grieu

Last fiddled with by fgrieu on 2009-09-23 at 06:29
2009-09-23, 09:47   #6
R.D. Silverman

Quote:
Originally Posted by fgrieu View Post
Yes; Bernstein gives credit to Coppersmith for the technique. A great feature is the strong argument that their system is not seriously weakened by fixing the high bits and choosing p and q (in their particular manner): when c varies, the probability that n can be factored is close to that of traditional RSA; thus by picking c at random, we have negligible odds that it makes n much easier to factor.

On the other hand, there is an advantage to fixing high bits of n to many ones (or 1 followed by many zeroes): it simplifies modular reduction. There are comments on this practice in some ISO standards, and it is possibly still used (I have no currently deployed example in mind).

Hence the question: starting with how many identical fixed high bits does it significantly help NFS factoring as currently practiced, by allowing selection of a particularly good polynomial?

François Grieu
Not as far as anyone knows. The only known thing that makes an integer N easier to factor by NFS is if it has low Hamming weight.
2009-09-23, 10:12   #7
fgrieu

Quote:
Originally Posted by R.D. Silverman View Post
(..) The only known thing that makes an integer N easier to factor by NFS is if it has low Hamming weight.
Hi Bob. The condition must be more complex than "low Hamming weight". I'm under the impression that for s small enough (how small?), (S)NFS works well for N = 2^m - s, which has high Hamming weight; and for N = 3^m - s, which has unspectacular Hamming weight (unless one extends the notion of Hamming weight to another base).
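The SNFS advantage for numbers of the form N = r^m - s is that a sparse polynomial with a known root falls out directly: for m = d·k one can take f(x) = x^d - s, which has the root r^k modulo N, since f(r^k) = r^m - s = N. A quick check (the numeric values below are illustrative only):

```python
# Sketch: SNFS polynomial selection for N = r^m - s when m = d*k.
# f(x) = x^d - s has root x = r^k mod N, because f(r^k) = r^(d*k) - s = N.
r, k, d, s = 2, 103, 5, 12345     # illustrative: N = 2^515 - 12345
m = d * k
N = r**m - s
root = pow(r, k, N)               # shared root of f and the linear poly x - r^k
print((pow(root, d, N) - s) % N == 0)
```

This is exactly why s must not be too small relative to N: with a tiny s, the coefficients of f stay small and SNFS applies, regardless of the Hamming weight of N in base 2.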

François Grieu
2009-09-23, 11:45   #8
R.D. Silverman

Quote:
Originally Posted by fgrieu View Post
Hi Bob. The condition must be more complex than "low Hamming weight". I'm under the impression that for s small enough (how small?), (S)NFS works well for N = 2^m - s, which has high Hamming weight; and for N = 3^m - s, which has unspectacular Hamming weight (unless one extends the notion of Hamming weight to another base).

François Grieu
Well, yes. I should have said low Hamming weight in some base.

Such numbers tend to be easier by NFS.