#1
Nov 2005
6516 Posts
On the size of values to test by factorization algorithms

Factorization algorithms based on the congruences defined by Kraitchik, such as the Quadratic Sieve, test many values for smoothness. I think we can shrink the size of the numbers to evaluate and still get enough congruences. This might lead to a new algorithm which is faster than SIQS.

In the Quadratic Sieve we have a value N to factorize and a sieving interval M. We take the values
x_i = ceil(sqrt(N)) + i, 0 <= i <= M
and
q(x_i) = x_i^2 - N <= (sqrt(N) + i + 1)^2 - N = 2*(i+1)*sqrt(N) + (i+1)^2,
so q(x) = O(M * sqrt(N)).

In SIQS we spread M over s polynomials. We choose the leading coefficient a to be about s * sqrt(N) / M, such that the maximal value of q(x) stays below N. The maximal value of q(x) is then N/a = N * M / (s * sqrt(N)) = sqrt(N) * M / s. Does anybody have a value for s depending on M here? It depends on the speed of switching between polynomials. But s cannot be more than sqrt(M), because otherwise we could not sieve any more (the sieving interval would then be smaller than the factors in the factor base).

Let's take a look at CFRAC. Here we take values
x_i = ceil(i * sqrt(N)), 0 <= i <= M
(in fact CFRAC only takes some specific values of these) and define
q(x_i) = x_i^2 mod N.
Since x_i^2 <= (i * sqrt(N) + 1)^2 = N*i^2 + 2*i*sqrt(N) + 1, we get
q(x_i) <= 2*i*sqrt(N) + 1,
so q(x) = O(M * sqrt(N)).

If we instead look at values x_i whose squares are near N*i, we get a chance that they will be small after reducing them mod N:
x_i = ceil(sqrt(N*i)), 0 <= i <= M,
q(x_i) = x_i^2 mod N.
Since x_i^2 <= (sqrt(N*i) + 1)^2 = N*i + 2*sqrt(N*i) + 1, we get
q(x_i) <= 2*sqrt(N*i) + 1,
so q(x) = O(sqrt(M) * sqrt(N)).

Now look at the following values:
sqrt(N), sqrt(N) + 1, ..., sqrt(N) + M^(2/5)
sqrt(2N), sqrt(2N) + 1, ..., sqrt(2N) + M^(2/5)/sqrt(2)
...
sqrt(i*N), sqrt(i*N) + 1, ..., sqrt(i*N) + M^(2/5)/sqrt(i)
...
up to sqrt(M^(4/5) * N) + 1, which are O(M) values. The corresponding values q(x) are bounded by
q(sqrt(i*N) + M^(2/5)/sqrt(i)) = O(sqrt(N) * M^(2/5)).

So we boiled the size of the values to test down from O(sqrt(N) * M) to O(sqrt(N) * M^(2/5)). Of course we cannot sieve here, since the sieving interval is smaller than the factors of the factor base itself. Since q(x) changed from x^2 - N to x^2 mod N, factors which were not in the factor base before might occur now, so the factor base itself changes. But we can still use the gcd approach of Daniel Bernstein, which should do it even faster than sieving (if we have a fast multiplication algorithm).

So I would expect to get a factoring algorithm here which might be faster than SIQS. What do you think?

PS: I added a snapshot of a sample on the number 87463, which I also considered in http://de.wikipedia.org/wiki/Quadratisches_Sieb

Last fiddled with by ThiloHarich on 2009-01-28 at 17:03
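The bound q(x_i) <= 2*sqrt(N*i) + 1 for x_i = ceil(sqrt(N*i)) can be checked directly on the N = 87463 example from the post. A minimal Java sketch (my own illustration, not the attached snapshot; class and method names are made up, and it needs Java 9+ for BigInteger.sqrt):

```java
import java.math.BigInteger;

// Checks that q(x_i) = x_i^2 mod N stays below 2*sqrt(i*N) + 1
// for x_i = ceil(sqrt(i*N)), on the post's example N = 87463.
public class SmallResidues {
    static BigInteger ceilSqrt(BigInteger n) {
        BigInteger r = n.sqrt();                 // floor of sqrt, Java 9+
        return r.multiply(r).equals(n) ? r : r.add(BigInteger.ONE);
    }

    // q(x_i) = x_i^2 mod N for x_i = ceil(sqrt(i*N))
    static BigInteger q(BigInteger n, long i) {
        BigInteger x = ceilSqrt(n.multiply(BigInteger.valueOf(i)));
        return x.multiply(x).mod(n);
    }

    public static void main(String[] args) {
        BigInteger n = BigInteger.valueOf(87463);
        for (long i = 1; i <= 30; i++) {
            BigInteger in = n.multiply(BigInteger.valueOf(i));
            BigInteger bound = ceilSqrt(in).shiftLeft(1).add(BigInteger.ONE);
            BigInteger qi = q(n, i);
            if (qi.compareTo(bound) > 0)
                throw new AssertionError("bound violated at i=" + i);
            System.out.println("i=" + i + "  q=" + qi);
        }
    }
}
```

For i = 1 this gives x_1 = 296 and q = 296^2 - 87463 = 153, far smaller than a typical sieve value of size sqrt(N)*M.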
#3
Nov 2005
101 Posts
At first look I did not find/understand the algorithm in this paper. But thanks for the link, I will have to take a deeper look at it.

I implemented the Bernstein algorithm a year ago in Java, but never really tried it out. I only had an improved Karatsuba version in place, no Schönhage-Strassen multiplication. I want to find out first whether my approach works at all, and then apply the Bernstein algorithm to it.
#4
Nov 2005
10110 Posts
Has anybody tried SIQS without sieving?

Has anybody replaced the sieving stage with the gcd/fast-multiplication approach of Daniel Bernstein? If we need no sieving, we can forget about calculating the inverses, and then we can switch between the polynomials fast.
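The core of the gcd idea, without Bernstein's batching, is simple: multiply the whole factor base into one integer z once, then test each polynomial value for smoothness with a few gcds instead of sieving. A hedged Java sketch (my own unbatched illustration; the real method puts many values into a product/remainder tree):

```java
import java.math.BigInteger;
import java.util.List;

// Smoothness test by gcd with the factor base product, instead of sieving.
public class GcdSmoothTest {
    static BigInteger product(List<Integer> primes) {
        BigInteger z = BigInteger.ONE;
        for (int p : primes) z = z.multiply(BigInteger.valueOf(p));
        return z;
    }

    // true iff q factors completely over the primes dividing z
    static boolean isSmooth(BigInteger q, BigInteger z) {
        BigInteger g;
        while ((g = q.gcd(z)).compareTo(BigInteger.ONE) > 0)
            q = q.divide(g);   // strips one power of each shared prime per round
        return q.equals(BigInteger.ONE);
    }

    public static void main(String[] args) {
        BigInteger z = product(List.of(2, 3, 5, 7, 11, 13));
        System.out.println(isSmooth(BigInteger.valueOf(156), z)); // 2^2*3*13 -> true
        System.out.println(isSmooth(BigInteger.valueOf(102), z)); // 2*3*17  -> false
    }
}
```

Each candidate costs a handful of gcds of candidate-sized numbers; the batched remainder-tree version amortizes the big multiplication across many candidates at once.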
#5
Nov 2005
1100101₂ Posts
Without sieving, the factoring algorithm is rather slow, even though the values considered are rather small, so forget about this idea.

I have done some tests with the remainder trees of Bernstein. You can find a short description of them here: http://books.google.de/books?id=RbEz...ainder+tree%22. But unfortunately I only have a Java implementation, which uses the schoolbook algorithms (BigInteger) for multiplication and mod, with O(n^2) running time. So it was clearly slower than the normal sieving/trial-division approach.

I tried it on a factor base of size 4600. I also tried it on a smaller lower part of it of size 300, instead of applying trial division to that part. Since schoolbook multiplication is fastest on a factor base of size 300 (about 1700 bits), I am a little disappointed. But 1700 bits means an int array of size 50, and squaring that is much more expensive than 300 trial divisions.

At which point is the remainder tree algorithm faster than trial division or other factoring approaches? I assume that we need FFT multiplication to beat the standard algorithms!? There is a faster implementation in Java, http://www.apfloat.org/, which is faster on bigger inputs (it uses FFT), but since it is slower on small inputs it is slower overall. There is another package, LargeInteger, but I did not try it.

Is there a fast Java package for big integers with fast remainder algorithms? Which are the fastest algorithms for the remainder? For a good remainder tree algorithm we need a good multiplication algorithm (which is needed for division as well) for every input size, so such a package would have to have Karatsuba, Toom-3 and FFT in place, right? All fast division algorithms rely on fast multiplication algorithms, right?

@jasonp: My results are based on a 154-bit input, so it makes me wonder why remainder trees should be faster for such inputs.

Last fiddled with by ThiloHarich on 2009-03-30 at 08:50
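For reference, the remainder tree itself fits in a few lines of BigInteger code. This is my own schoolbook sketch (so it has exactly the O(n^2) multiplication complained about above, and the names are mine): build the product tree of the moduli bottom-up, then push z mod (node) down to the leaves.

```java
import java.math.BigInteger;
import java.util.ArrayList;
import java.util.List;

// Remainder tree: compute z mod m[i] for all i with one product tree.
public class RemainderTree {
    static BigInteger[] remainders(BigInteger z, BigInteger[] m) {
        // build the product tree; level 0 holds the moduli themselves
        List<BigInteger[]> levels = new ArrayList<>();
        levels.add(m.clone());
        while (levels.get(levels.size() - 1).length > 1) {
            BigInteger[] cur = levels.get(levels.size() - 1);
            BigInteger[] next = new BigInteger[(cur.length + 1) / 2];
            for (int i = 0; i < next.length; i++)
                next[i] = (2 * i + 1 < cur.length)
                        ? cur[2 * i].multiply(cur[2 * i + 1])
                        : cur[2 * i];               // odd node passes through
            levels.add(next);
        }
        // push z down: rem(child) = rem(parent) mod child
        BigInteger[] rems = { z.mod(levels.get(levels.size() - 1)[0]) };
        for (int lvl = levels.size() - 2; lvl >= 0; lvl--) {
            BigInteger[] cur = levels.get(lvl);
            BigInteger[] down = new BigInteger[cur.length];
            for (int i = 0; i < cur.length; i++)
                down[i] = rems[i / 2].mod(cur[i]);
            rems = down;
        }
        return rems;
    }

    public static void main(String[] args) {
        BigInteger z = BigInteger.valueOf(30030); // 2*3*5*7*11*13
        BigInteger[] m = { BigInteger.valueOf(97), BigInteger.valueOf(101),
                           BigInteger.valueOf(1000003) };
        BigInteger[] r = remainders(z, m);
        for (int i = 0; i < m.length; i++)
            if (!r[i].equals(z.mod(m[i]))) throw new AssertionError();
        System.out.println("remainder tree matches direct mod");
    }
}
```

With schoolbook arithmetic every level costs about as much as the root multiplication, which is why the speedup only appears once the node products are multiplied with something subquadratic.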
#6
Tribal Bullet
Oct 2004
110111010111₂ Posts
Probably the best you can do for the integer remainder involves precomputing an inverse using Newton iteration, then performing the division by multiplying by that inverse, then computing the remainder by subtracting the (approximate) quotient times the divisor. The nice part of this method is that you only need fast multiplication and don't need fast division, so that asymptotically the integer remainder has latency equal to two multiple-precision multiplies.

Using a remainder tree is most beneficial when the inputs to other one-at-a-time factoring methods become numerous enough and/or slow enough that it's better to treat a large number of inputs at once. In my mind that means remainder trees become desirable not for trial division by factor base primes but in conjunction with other factoring methods for determining the large primes in relations. It isn't really needed when relations have at most two large primes, but once you get to three large primes and up the savings is considerable. Presumably the same would apply with an enormous factor base.
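The multiply-by-a-precomputed-inverse reduction described here can be sketched in a few lines (my own Barrett-style illustration; in a real implementation the one-off divide below would itself be produced by Newton iteration from a few leading bits, using only multiplications):

```java
import java.math.BigInteger;

// Remainder via a precomputed fixed-point reciprocal: two multiplies
// plus a small correction, no per-reduction division.
public class BarrettRemainder {
    final BigInteger d, mu;
    final int k;

    // mu = floor(2^(2k) / d), computed once per divisor
    BarrettRemainder(BigInteger d) {
        this.d = d;
        this.k = d.bitLength();
        this.mu = BigInteger.ONE.shiftLeft(2 * k).divide(d);
    }

    // x mod d for 0 <= x < d^2
    BigInteger mod(BigInteger x) {
        BigInteger qhat = x.multiply(mu).shiftRight(2 * k); // approximate quotient
        BigInteger r = x.subtract(qhat.multiply(d));        // qhat <= floor(x/d)
        while (r.compareTo(d) >= 0) r = r.subtract(d);      // at most ~2 rounds
        return r;
    }

    public static void main(String[] args) {
        BigInteger d = new BigInteger("87463");
        BarrettRemainder br = new BarrettRemainder(d);
        BigInteger x = new BigInteger("1234567890");
        System.out.println(br.mod(x).equals(x.mod(d))); // true
    }
}
```

Since qhat undershoots the true quotient by at most a couple, the correction loop is constant-time, which is what makes the remainder cost about two multiplies asymptotically.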
#7
Nov 2005
101 Posts
@jasonp: Yes, the Newton iteration was also my first thought. There is also a divide-and-conquer approach, see http://www.treskal.com/kalle/exjobb/original-report.pdf. I have already implemented a rather fast Karatsuba multiplication algorithm, but hesitate to implement an FFT.

I put as many candidates (to be tested for smoothness) into the remainder tree that their product is a little smaller than the product of the factor base; this is where the remainder tree works best. The additional large prime(s) are delivered after the gcd step at the end. I do not use the large prime variation in these tests.
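The batching rule described here (stop adding candidates once their product would reach the factor base product) can be approximated cheaply by comparing bit lengths instead of forming the running product. A hypothetical helper (names are mine, not from the post):

```java
import java.math.BigInteger;
import java.util.List;

// Picks how many candidates to batch so that the sum of their bit lengths,
// a cheap upper proxy for the product's size, stays under the factor base
// product's bit length.
public class BatchSize {
    static int batchSize(List<BigInteger> cand, BigInteger fbProduct) {
        int bits = 0, n = 0;
        final int limit = fbProduct.bitLength();
        for (BigInteger c : cand) {
            if (bits + c.bitLength() > limit) break; // product would get too big
            bits += c.bitLength();
            n++;
        }
        return n;
    }

    public static void main(String[] args) {
        List<BigInteger> cand = List.of(BigInteger.valueOf(1000),
                BigInteger.valueOf(1000), BigInteger.valueOf(1000),
                BigInteger.valueOf(1000));
        // 35-bit factor base product, 10-bit candidates -> batches of 3
        System.out.println(batchSize(cand, BigInteger.ONE.shiftLeft(34)));
    }
}
```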