[QUOTE=MathBoy;249254]I am still unclear on how these logs are faster. For example, let V[] be the set we are going to sieve using the log method, and let N be the threshold log value; say we use log(V[i]) > 5.7. We still have to go through all the elements of V, and the elements above 5.7 still have to be factored. So where is the time saver? The only one I can see is the method used to factor the elements of V. Maybe instead of trial division I should use ECM or something different to speed this up...[/QUOTE]

There are many time savers. Suppose we are factoring N, and let f(i) = (x + i)^2 mod N, with x = ceil(sqrt(N)) (the first integer larger than the square root of N). Suppose that to accumulate enough relations we must consider all residues from i = 0 to 2^32, so, using your notation, we have v[0] through v[2^32]. Let's also say the factor base contains 2,000 primes, just to keep things simple.

Without the log method:

1) Before you can even trial divide a v[i], you have to actually evaluate (x + i)^2 mod N for each i. That requires 2^32 squarings and multi-precision divisions just to calculate v[0] ... v[2^32].

2) Then you need to trial divide each v[i] by every prime in the factor base. If a prime p divides v[i] n times, you actually perform n + 1 divisions, because you must keep dividing v[i] by p until v[i] mod p <> 0.

With the log method:

1) You do not need to calculate (x + i)^2 mod N for every i. It suffices to compute it ONLY ONCE per block and take the log of that value, e.g. MAXLOG = log((sqrt(N) + i)^2 mod N), since this log is nearly constant across the block.

2) For each of the 2^32 positions, we simply do v[i] += log(p) for each prime p in the factor base that divides the value at that position. In practice, to save time, we compute the log of each p only once before we start sieving and look it up, e.g. v[i] += logp[Logp.IndexOf(p)]. Then we subtract some fudge factor from MAXLOG and scan all of v[] for v[i] > MAXLOG - fudge. We find perhaps several thousand v[i] meeting this criterion; we then calculate (x + i)^2 mod N only for those i and attempt to trial divide only those values.
So by using the log method we exchange the need to calculate (x + i)^2 mod N at all 2^32 positions, and the requirement to trial divide every one of those positions, for on the order of 2^32 additions plus only a few thousand evaluations of (x + i)^2 mod N and trial divisions at those positions.

How much does this save? Say we sieve 2^16 locations per block, and say that calculating (x + i)^2 mod N for those 65,536 locations and trial dividing each of them takes 2 seconds. Sieving all 2^32 locations requires 2^32 / 2^16 = 2^16 blocks of size 2^16, so sieving would take 2^16 * 2 = 131,072 seconds, which is about 36.4 hours.

Now use the log method (with subtraction instead of addition): set those 65,536 locations to MAXLOG, subtract log(p) from each location divisible by p, and then trial divide only the v[i] below some fudge factor. A block might take 0.0002 seconds (not counting the trial division of the survivors) even on a slow computer, so sieving all 2^32 locations takes about 13 seconds. Granted, we still need to trial divide the surviving locations, but since our factor base contains 2,000 primes, we may only need to trial divide about 65,536 locations to find 2,000 relations smooth over the factor base, giving a final run time of roughly 13 seconds for sieving plus 2 seconds for trial division. That is about 15 seconds for sieving and trial division using logs versus about 36 hours without them.
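The trial-division pass over the surviving locations can be sketched as follows. This is a hypothetical helper of my own, not the poster's code; it returns the exponent vector when the value is smooth over the factor base, and it illustrates the n + 1 divisions per prime mentioned above (the last division attempt is the one that fails).

```python
def trial_divide(value, factor_base):
    """Trial divide `value` by the factor base (hypothetical helper).

    Returns the list of exponents [e1, e2, ...] if `value` factors
    completely over `factor_base` (i.e. is smooth), otherwise None.
    """
    exponents = []
    for p in factor_base:
        e = 0
        # If p divides `value` n times, this loop tests divisibility
        # n + 1 times: n successful divisions plus the final failing check.
        while value % p == 0:
            value //= p
            e += 1
        exponents.append(e)
    return exponents if value == 1 else None

# Example with the same toy numbers as before:
print(trial_divide(782, [2, 17, 23, 29]))  # 782 = 2 * 17 * 23, smooth
print(trial_divide(30, [2, 17, 23, 29]))   # 30 has the factor 3 outside the base
```

Only the few thousand sieve survivors reach this routine, which is why the expensive multi-precision work stays small even though the sieve covers 2^32 positions.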