Sieving Threshold
So far the following parts of my program work: generating a factor base, finding quadratic residues, collecting data, solving congruences, building the matrix, and performing Gaussian elimination on it.
However, the sieving step is what is tripping me up. I've tried using an array with each element initialised to 0, solving the congruences, and adding the number of bits in [I]p[/I] to the corresponding array elements; this is repeated for each [I]p[/I] in the factor base. Any element over the threshold given by the equation below is then tested for smoothness.

Let bit([I]n[/I]) be the function that returns the number of bits in the integer [I]n[/I], and let M be the number of X/Y pairs collected, where [TEX]Y = (X+\left \lceil \sqrt{n} \right \rceil)^{2} - n[/TEX]. Then:

[TEX]Threshold = \frac{bit(n)}{2} + bit\left(\frac{M}{2}\right) - T \cdot bit(p_{max})[/TEX]

where T is some constant and [I]pmax[/I] is the largest prime in the factor base.

I've also tried the reverse: setting each array element to the number of bits in Y, subtracting bit([I]p[/I]) from the corresponding elements, and then testing any value below the threshold (calculated as above, but with a different value for T) for smoothness.

Either way I keep ending up with a lot of false positives. Could it be that my program is working correctly and I'm just choosing incorrect values for T, or is the method itself flawed? Thank you :)
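For reference, the add-up variant I describe looks roughly like this (a Python-style sketch with toy numbers, not my actual code; the brute-force root finding stands in for Tonelli-Shanks):

```python
from math import isqrt

# Toy sketch of the "start at 0 and add bit(p)" sieve described above.
# n, M, T and the factor base are illustrative values only.
n = 87463
M = 1000                                # number of X values sieved
factor_base = [2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31]
T = 2.0

ceil_sqrt = isqrt(n - 1) + 1            # ceiling of sqrt(n)

sieve = [0] * M
for p in factor_base:
    # Solve (x + ceil_sqrt)^2 = n (mod p) by brute force; a real
    # implementation would use Tonelli-Shanks. Primes where n is a
    # non-residue simply contribute nothing.
    for r in range(p):
        if (r + ceil_sqrt) ** 2 % p == n % p:
            for i in range(r, M, p):
                sieve[i] += p.bit_length()

threshold = n.bit_length() // 2 + (M // 2).bit_length() \
            - T * max(factor_base).bit_length()

# Only positions over the threshold get trial-divided for smoothness.
candidates = [x for x in range(M) if sieve[x] >= threshold]
```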
[QUOTE=Sam Kennedy;318652]
Either way I keep ending up with a lot of false positives. Could it be that my program is working correctly and I'm just choosing incorrect values for T, or is the method itself flawed? Thank you :)[/QUOTE] Hard to say. What values for T, and more importantly for the threshold, are you choosing? If you vary the threshold directly (not T), starting very high and systematically reducing it (assuming you are adding log p to an array initialised to 0 here), does the relation rate increase?

If T and the threshold are approximately correct, the first place to look is the congruences: if you don't start sieving at the correct location for each prime, then none of the hits will be true hits (i.e., lots of false positives).
I checked the congruences by stepping through the program and tracking the values by hand. I also update the congruences between blocks: for example, if you start on element 3 of a 250-element array and increment by 17, then after you have run off the end of the array, the next start position should be element 8 once everything has been recalculated.
I don't add log(p), since the big-integer library I'm using doesn't support logarithm functions. I'm also not sure whether the values have to be less than or greater than the threshold. I'll make my program more verbose and see what it spits out.
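The restart calculation in my earlier example can be checked mechanically (a small Python sketch with those toy numbers):

```python
# If a block has 250 elements and the prime p = 17 first hits element 3,
# the hits are 3, 20, ..., 241; the next block's first hit is the
# overshoot past the end of the current block.
block_size = 250
p = 17
start = 3

last_hit = start + ((block_size - 1 - start) // p) * p   # 241
next_start = (last_hit + p) - block_size                 # 258 - 250 = 8

# Equivalently, in one step:
assert next_start == (start - block_size) % p
```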
[QUOTE=Sam Kennedy;318656]
I don't add log(p), since the big-integer library I'm using doesn't support logarithm functions. I'm also not sure whether the values have to be less than or greater than the threshold. I'll make my program more verbose and see what it spits out.[/QUOTE] Sorry, I meant log base 2 (i.e., the number of bits). If you add log[SUB]2[/SUB]([I]p[/I]) to 0, then candidate values should be [I]greater[/I] than the threshold; conversely, if you subtract log[SUB]2[/SUB]([I]p[/I]) from some starting threshold, then candidate values should be less than 0. Mathematically it makes no difference which way, although there are advantages to subtracting if you think about what "less than 0" means to an unsigned integer.
Here is an example:
[code]n = 2596148429267540652519253520417593
b1 = 35000
T = .5
TARGET = 73 bits
THRESHOLD = 65 bits[/code] That is for starting at 0 and adding log[SUB]2[/SUB]([I]p[/I]): an element must reach >= 65 bits before the corresponding number is considered for trial division. Is that too high? I'm not sure what percentage of the target the threshold should be. I've tried experimenting with different values for T, but it doesn't seem to have much impact on the speed at which smooth numbers are found.
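Edit: plugging my numbers into the threshold formula from my first post, they are at least internally consistent, assuming M = 250,000 values sieved and pmax just under b1 (both of those are assumptions):

```python
# Check TARGET/THRESHOLD against bit(n)/2 + bit(M/2) - T*bit(pmax).
n = 2596148429267540652519253520417593
M = 250000            # assumed number of X/Y pairs
T = 0.5
pmax = 34999          # assumed largest factor-base prime, just under b1

target = n.bit_length() // 2 + (M // 2).bit_length()   # 56 + 17 = 73
threshold = target - T * pmax.bit_length()             # 73 - 8 = 65.0
```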
I fixed a small error in my code; everything was working correctly, I was just sieving over a few thousand values instead of 250,000.
Now it works fine :) I guess my next step would be to try MPQS?
Good job! Go for it. There are also other improvements you could try, like the large prime variation.
|
[QUOTE=Sam Kennedy;318723]I fixed a small error in my code; everything was working correctly, I was just sieving over a few thousand values instead of 250,000.
Now it works fine :) I guess my next step would be to try MPQS?[/QUOTE] MPQS would be one option. Multipliers and large primes are other options; I would suggest doing those first, as the required code is simpler.
I'll look into the large prime variation; I'm currently trying to understand how to generate polynomials.
I've read that the polynomials have to satisfy [TEX]b^{2}-ac = n[/TEX]. So, for example, if n = 61063, could you use the polynomial [TEX]y = 9x^{2}+2 \cdot 4x-6783[/TEX], where a = 9, b = 4 and c = -6783, instead of [TEX]y = (x + \left \lceil \sqrt{n} \right \rceil)^{2} - n[/TEX], to collect data?
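I checked the identity numerically for that example (Python):

```python
# Verify b^2 - a*c = n for a = 9, b = 4, c = -6783, n = 61063.
n, a, b, c = 61063, 9, 4, -6783
assert b * b - a * c == n      # 16 + 61047 == 61063
```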
'Multiple polynomials' is a misnomer; you always sieve the one QS polynomial, but MPQS samples that polynomial at points on an arithmetic progression. The value of the QS polynomial at each point on the progression has a large known factor, leaving an easier problem for the sieving to solve. When the values of the QS polynomial become too large, we switch to another arithmetic progression (which, unfortunately, is called 'switching polynomials').
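With the coefficients from the previous post, the 'large known factor' can be seen directly (a small Python check; the sample range is arbitrary):

```python
# Since b^2 - a*c = n, we have (a*x + b)^2 - n = a*(a*x^2 + 2*b*x + c):
# the QS polynomial sampled at the progression point a*x + b is always
# divisible by the known factor a.
n, a, b, c = 61063, 9, 4, -6783
assert b * b - a * c == n

for x in range(-5, 6):
    qs_value = (a * x + b) ** 2 - n
    assert qs_value % a == 0                            # known factor a
    assert qs_value // a == a * x * x + 2 * b * x + c   # the smaller cofactor
```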
|
I can't find much information on using multipliers. All I've managed to find is that you multiply [I]n[/I] by a squarefree number so that more smooth values can be found; there isn't any information on how to actually select this value, though.
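Edit: I've since read that the usual recipe is the Knuth-Schroeppel function: score each candidate squarefree multiplier k by the expected small-prime contribution to k*n, minus a penalty for k*n being larger. A rough Python sketch of my understanding (the candidate list, prime bound, and exact weights are my guesses, not a fixed standard):

```python
from math import log

def legendre(a, p):
    """Legendre symbol (a|p) via Euler's criterion, for odd prime p."""
    a %= p
    if a == 0:
        return 0
    return 1 if pow(a, (p - 1) // 2, p) == 1 else -1

def ks_score(n, k, prime_bound=100):
    """Simplified Knuth-Schroeppel score for multiplier k; higher is better."""
    kn = k * n
    score = -0.5 * log(k)               # penalty: sieve values grow with k
    if kn % 8 == 1:                     # crude handling of the prime 2
        score += 2 * log(2)
    for p in range(3, prime_bound, 2):
        if all(p % d for d in range(3, int(p ** 0.5) + 1, 2)):  # p is prime
            if kn % p == 0:
                score += log(p) / p
            elif legendre(kn, p) == 1:
                score += 2 * log(p) / p   # roots exist: p hits ~2/p of values
    return score

def best_multiplier(n, candidates=(1, 2, 3, 5, 6, 7, 10, 11, 13, 15)):
    """Pick the squarefree multiplier with the highest score."""
    return max(candidates, key=lambda k: ks_score(n, k))
```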
|