#12
Nov 2003
2²·5·373 Posts
Quote:
#13
Jul 2003
So Cal
2,111 Posts
Quote:
The relations found are attached in a format hopefully close enough to CWI to be clear.
#14
Nov 2003
2²×5×373 Posts
Quote:
Thanks for the relations. I have found out why my yield was so low. It was not a code bug, but rather a parameter selection related to setting the sieve threshold. I changed the parameters and am now getting close to the yield you reported earlier: "close" means about 7.7 relations per q for 8K x 16K, rather than the 8.5 you reported. However, the difference does have me concerned. I played around with varying the sieve threshold, and this was the best yield I could get.

There may still be a bug in the siever; from my prior experience with debugging sievers, this can be the worst kind of bug to track down: not finding relations that it should. Having your data makes debugging easier. I can determine which relations were missed by my code, and instrument the code to look at the (c,d) locations that were missed. This sort of thing is time consuming.

Other possibilities: the lattice reduction can be done in a variety of ways. It may be the case that slightly better reductions could be obtained in my code by a slightly different algorithm.

If indeed there is a bug, I think the most likely place to look is in the computation of the sieve boundaries for each ideal. In Pollard-style lattice sieving, we do a reduction on each ideal. Call the reduced basis v1, v2 (as vectors). We sieve over a sub-lattice. Let [0, N], [-M, M] be the sieve boundaries. We sieve over the pairs (e,f) such that the following inequalities hold componentwise:

(0, -M) < e*v1 + f*v2 < (N, M)

The set of (e,f) for which this holds is a linear transform of the rectangular region that is 2M x N. This transform produces a (skewed) parallelogram. We need to find all (e,f) values inside this parallelogram. Then we have (c,d) = e*v1 + f*v2. For speed reasons, I do a good but not guaranteed perfect approximation to this region. Thus, some lattice points might get missed.

The primary thing that makes GGNFS faster is that it uses a different approach to finding the lattice points to hit: it does not compute this skewed region, nor does it do a full lattice reduction on all of the ideals. I also suspect that it has better memory management. My siever is not very memory efficient.

Comments/discussion are solicited.
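[Editor's note] The exact (slow) version of the region computation described above can be sketched as follows. It enumerates every (e,f) whose image e*v1 + f*v2 lands inside the sieve rectangle, using Cramer's rule to get safe coefficient bounds. This is a minimal illustration, not anyone's actual siever code; the names v1, v2, N, M follow the post.

```python
import math

def lattice_points(v1, v2, N, M):
    """Yield ((e, f), (c, d)) with (c, d) = e*v1 + f*v2 inside [0, N] x [-M, M].

    Guaranteed complete (unlike the fast approximation the post mentions),
    but quadratic in the bounds, hence only for checking small cases.
    """
    det = v1[0] * v2[1] - v1[1] * v2[0]
    assert det != 0, "v1, v2 must be linearly independent"
    # Cramer's rule: e = (c*v2[1] - d*v2[0]) / det, f = (d*v1[0] - c*v1[1]) / det,
    # so coefficients inside the region are bounded as follows:
    bound_e = (N * abs(v2[1]) + M * abs(v2[0])) // abs(det) + 1
    bound_f = (N * abs(v1[1]) + M * abs(v1[0])) // abs(det) + 1
    for e in range(-bound_e, bound_e + 1):
        for f in range(-bound_f, bound_f + 1):
            c = e * v1[0] + f * v2[0]
            d = e * v1[1] + f * v2[1]
            if 0 <= c <= N and -M <= d <= M:
                yield (e, f), (c, d)
```

With the trivial basis (1,0), (0,1) this just walks the rectangle; with a skewed basis of determinant ±1 it still finds every integer point, which is exactly the completeness property a fast approximation may sacrifice.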
#15
Nov 2003
2²×5×373 Posts
Quote:
in a different form. Suppose LPR is the rational-side large prime bound. Then any cofactor bigger than LPR^2 will be useless. However, any cofactor very near LPR^2 is unlikely to factor into two primes less than LPR; it will most likely factor into something smaller than LPR and something bigger than LPR. Thus, we might want to give a hint to the code saying, in effect: we will accept large primes up to LPR, but don't bother trying to split anything bigger than some chosen bound, which might be selected somewhat smaller than LPR^2. In GGNFS, this bound appears to be 2^lambda. Comments solicited.
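[Editor's note] The acceptance test being proposed can be sketched as below. The function name and the example split bound are illustrative choices, not GGNFS's actual variables or defaults.

```python
def worth_splitting(r, lpr, split_bound):
    """Decide whether cofactor r should be sent to the splitting routine.

    lpr is the large prime bound; split_bound is the hint discussed above,
    chosen somewhat smaller than lpr**2 because cofactors near lpr**2
    almost never split into two primes both below lpr.
    """
    if r <= lpr:          # a single large prime: accept directly
        return True
    if r > split_bound:   # too big: very unlikely to split usefully
        return False
    return True           # worth attempting to split into two primes < lpr

# Illustrative numbers: 30-bit large primes, split bound 2^58 < (2^30)^2.
LPR = 2**30
SPLIT_BOUND = 2**58
```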
#16
Nov 2003
2²×5×373 Posts
Quote:
be equivalent to somewhere around a C255 to C260 sextic. 270 "seems" to be high. But I could be very wrong.
#17
Nov 2003
2²·5·373 Posts
Quote:
#18
May 2008
2107₈ Posts
Quote:
The way I understand it (which may well be wrong) is that, since the sieve values are only represented in bytes, we won't know exactly how big the cofactor is until the trial division is actually done. The lambda specifies the threshold for which trial division will be used on the sieve values. The larger the lambda, the more values will be trial-divided, but more of those values will be false hits as well.

Experiment shows that a good lambda is slightly larger than log(2^mfbX)/log(Xlim), where mfbX is the cofactor bound in bits and Xlim is the small prime bound. Specifying a lambda too much larger than this causes too much time to be wasted on false hits, whereas a lambda smaller than this throws away too many good relations.

I hope my understanding is accurate.

Last fiddled with by jrk on 2010-10-20 at 00:50. Reason: typos
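[Editor's note] jrk's rule of thumb is easy to state as a formula: lambda should be slightly larger than log(2^mfbX)/log(Xlim) = mfbX·log 2 / log Xlim. A minimal sketch, where the 0.1 slack is an illustrative choice and not a value from the post:

```python
import math

def suggested_lambda(mfb_bits, xlim, slack=0.1):
    """Rule-of-thumb lambda: slightly above log(2^mfb_bits) / log(xlim).

    mfb_bits: cofactor bound in bits (mfbX); xlim: small prime bound (Xlim).
    """
    return mfb_bits * math.log(2) / math.log(xlim) + slack

# Example: a 58-bit cofactor bound with factor-base limit 2^24 gives
# 58/24 ~ 2.42, so lambda a bit above 2.4.
```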
#19
"Ben"
Feb 2007
2²×3×293 Posts
Quote:
From the file INSTALL.and.USE
Quote:
#20
Nov 2003
7460₁₀ Posts
#21
Noodles
"Mr. Tuch"
Dec 2007
Chennai, India
3×419 Posts
Quote:
I sieved on the rational side; the final matrix had 9.66 million rows, sieved entirely with the gnfs-lasieve4I15e lattice siever. So, what happened to that number? Is 2,881+ available again? I would like to see that number finished sometime soon.
#22
Nov 2003
2²·5·373 Posts
Quote:
sieving one ideal with norm p. The hits that p takes will be all over the (c,d) plane. How can one restrict to some subset of the plane?

I assume that as p is sieving, a bucket is filled with the (c,d) values, along with log(p). When a bucket gets full (whatever its size is), you sort on the values of c as the primary index, followed by a sort on d. You can then dump the bucket into the sieve array by going across a row, in order of increasing d. An individual bucket is also kept small enough so that it fits in cache while it is being filled.

My buckets use shorts for (c,d) and a single byte for log(p). One row of the sieve array is around 8K in length and thus fits in the L2 cache. Thus, emptying the bucket remains in cache.
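[Editor's note] The bucket technique described above can be sketched as follows. Hits (c, d, log p) accumulate in a small bucket; when it fills, the bucket is sorted by (c, d) and emptied into the sieve array row by row, so memory accesses stay sequential and cache-friendly. Sizes and types are illustrative (the post uses shorts for (c,d) and one byte for log p), and the sketch assumes d has already been offset to a nonnegative array index.

```python
BUCKET_SIZE = 1024  # kept small enough to stay in cache while filling

def sieve_with_buckets(hits, sieve):
    """Accumulate log(p) contributions into the sieve array via buckets.

    hits:  iterable of (c, d, logp) tuples, in arbitrary order.
    sieve: dict mapping row index c to a list indexed by d.
    """
    bucket = []

    def flush():
        # Tuple sort gives primary key c, secondary key d, so the dump
        # below walks each row in order of increasing d.
        bucket.sort()
        for c, d, logp in bucket:
            sieve[c][d] += logp
        bucket.clear()

    for hit in hits:
        bucket.append(hit)
        if len(bucket) >= BUCKET_SIZE:
            flush()
    flush()  # empty the final, partially filled bucket
    return sieve
```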
Similar Threads
Thread | Thread Starter | Forum | Replies | Last Post
Bad LL-D Success Rate | TheMawn | Data | 14 | 2014-10-13 20:19
SNFS polynomial with very low yield | ryanp | Factoring | 9 | 2013-04-03 06:33
Distributed finishing for 2,1870L | R.D. Silverman | Cunningham Tables | 64 | 2011-02-27 21:39
ggnfs sieving yield wierdness | jrk | Factoring | 10 | 2009-05-07 17:41
EFF prize and error rate | S485122 | PrimeNet | 15 | 2009-01-16 11:27