#34
(loop (#_fork))
Feb 2006
Cambridge, England
7²·131 Posts
Thanks to the enormous amount of sieving made possible by contributors on three continents, the matrix is of an almost convenient size: no bigger than the matrices NFSnet has had to deal with for SNFS on 800-bit numbers.
Code:
matrix is 9331810 x 9331986 (2483.5 MB) with weight 829935487 (88.93/col)
sparse part has weight 539050715 (57.76/col)
saving the first 48 matrix rows for later
matrix is 9331762 x 9331986 (2401.9 MB) with weight 627984794 (67.29/col)
sparse part has weight 536323527 (57.47/col)
matrix includes 64 packed rows
using block size 65536 for processor cache size 4096 kB
commencing Lanczos iteration (4 threads)
memory use: 2554.7 MB
linear algebra completed 508 out of 9331986 dimensions (0.0%)

Code:
22385 nfsslave  18   0 2959m 2.9g  464 R 234 75.3  42:16.29 msieve

Last fiddled with by fivemack on 2008-01-19 at 18:12
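The reported memory footprint can be roughly sanity-checked from the log numbers. This sketch assumes (my assumption, not something the log states) that each nonzero in the sparse part costs a 4-byte column index; the densest rows are packed separately, which is where the remainder of the ~2.5 GB total goes.

```python
# Back-of-the-envelope check on the matrix memory use, under the assumption
# that every nonzero in the sparse part is stored as a 32-bit column index.
sparse_weight = 539_050_715          # nonzeros in the sparse part, per the log
bytes_per_entry = 4                  # assumed 4-byte index per nonzero
sparse_mb = sparse_weight * bytes_per_entry / 2**20
print(f"sparse part alone: {sparse_mb:.1f} MB")  # about 2056 of the 2483.5 MB reported
```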
#35
Oct 2004
Austria
2×17×73 Posts
Quote:
1.) How many unique relations did you get?
2.) Is it oversieved (if yes, how much?), or do we have just enough relations?
#36
(loop (#_fork))
Feb 2006
Cambridge, England
7²·131 Posts
Code:
Fri Jan 18 20:35:52 2008  found 68941674 hash collisions in 227965000 relations
Fri Jan 18 20:35:52 2008  commencing duplicate removal, pass 2
Fri Jan 18 21:01:12 2008  found 43495997 duplicates and 184469003 unique relations

I suspect this was substantially oversieved, but since the compute power available for sieving is significantly greater than that available for linear algebra, oversieving is the way to go: you can actually save a day of real time at the matrix step by spending a day of real time on sieving.
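The log's "hash collisions in pass 1, exact duplicates in pass 2" pattern reflects the standard two-pass approach. This is a minimal sketch of that idea, not msieve's actual code; the relation strings and hash-table size are illustrative assumptions.

```python
# Two-pass duplicate removal sketch: pass 1 marks hash slots and flags slots
# hit more than once; pass 2 does exact comparisons only for flagged relations.
def remove_duplicates(relations, num_slots=2**20):
    seen = bytearray(num_slots)
    collides = set()
    for r in relations:                # pass 1: find colliding hash slots
        h = hash(r) % num_slots
        if seen[h]:
            collides.add(h)            # slot hit twice: candidate duplicate
        seen[h] = 1

    unique, kept = [], set()
    for r in relations:                # pass 2: exact check only where needed
        h = hash(r) % num_slots
        if h in collides:
            if r in kept:
                continue               # confirmed duplicate, drop it
            kept.add(r)
        unique.append(r)
    return unique

rels = ["3,5:a", "7,11:b", "3,5:a", "13,2:c"]
print(remove_duplicates(rels))         # the repeated relation survives only once
```

Pass 1 needs only one bit per slot, which is why huge relation sets can be deduplicated without holding them all in memory; only the (few) colliding relations need exact handling.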
#37
Tribal Bullet
Oct 2004
3,541 Posts
Quote:
I think the code could get a better initial bound estimate by computing a histogram of all the primes in the relations, then choosing the bound as the middle of the first bin at which the number of primes per relation drops below some threshold X, so that filtering starts at the point where the dataset becomes sparsely populated with large primes.

Last fiddled with by jasonp on 2008-01-20 at 06:52
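The histogram idea above can be sketched as follows. This is my own illustration of the described heuristic, not the actual msieve patch; the bin width, threshold name `x`, and function name are assumptions.

```python
from collections import Counter

# Pick a filtering bound as the middle of the first histogram bin where the
# number of primes per relation falls below a threshold x (sketch only).
def pick_filtering_bound(prime_lists, bin_width=100_000, x=0.1):
    counts = Counter(p // bin_width for primes in prime_lists for p in primes)
    nrels = len(prime_lists)
    for b in sorted(counts):
        if counts[b] / nrels < x:          # first sparsely populated bin
            return b * bin_width + bin_width // 2
    return (max(counts) + 1) * bin_width   # fallback: past the densest region

# Toy data: the second bin (primes 10..19) is sparsely populated,
# so the bound lands in its middle.
print(pick_filtering_bound([[5, 15], [6], [7], [8]], bin_width=10, x=0.5))
```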
#38
Bamboozled!
May 2003
Down not across
10,753 Posts
Quote:
Paul
#39
Tribal Bullet
Oct 2004
3,541 Posts
Tom, could you try replacing gnfs/filter/duplicate.c with the attached, then rerunning just the duplicate removal and the beginning of the singleton removal? I've made the changes above, and it produces a better initial filtering bound in a few local tests.
#40
(loop (#_fork))
Feb 2006
Cambridge, England
7²·131 Posts
It is with mild vexation that I must report that the first matrix job on 6^383+1 has failed: it got to 104% completeness after two weeks on a quad-core of known reliability, and I've never observed a successful matrix run get beyond 100%.

This is the matrix produced without applying jasonp's patch; I'll apply that this evening and set the quad-core running for another two weeks.
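To put the "104%" in perspective: block Lanczos with 64-bit words reduces the problem by roughly 64 − 0.76 dimensions per iteration (the ≈0.76 is the expected rank deficiency from Montgomery's analysis; treat the exact constant as an assumption here), so a healthy run should finish in about n/63.24 iterations. Getting past 100% of that estimate means the iteration stopped making the expected progress.

```python
# Expected block Lanczos iteration count for the matrix in this thread,
# assuming ~63.24 dimensions of progress per iteration (Montgomery's estimate).
n = 9_331_986                        # matrix dimension from the log above
expected_iters = n / (64 - 0.76)
print(f"expected iterations: {expected_iters:,.0f}")
```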
#41
Tribal Bullet
Oct 2004
3,541 Posts
Quote:
#42
(loop (#_fork))
Feb 2006
Cambridge, England
7²·131 Posts
It is with slightly more than mild vexation that I must report that the second matrix job on 6^383+1 has also failed; it completed the Lanczos step without issue, but found only trivial kernel vectors.

I have a backlog of slightly smaller Cunningham numbers to process; the next thing I'll try is filtering with a bound equal to the factor-base size. I have enough memory to handle a denser matrix, and my suspicion is that the oversieving plus the quest for sparseness has produced a matrix where A^T A has too large a kernel. I'll see how this goes by the beginning of March.
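The A^T A remark is the crux: block Lanczos actually solves (A^T A)x = 0 over GF(2), and the kernel of A^T A can strictly contain the kernel of A, so a run can finish cleanly yet return vectors that are useless for A itself. A toy example (mine, not from the thread):

```python
# Over GF(2), ker(A^T A) can be strictly larger than ker(A).
# For A = [[1,1],[1,1]], every entry of A^T A is 2 = 0 (mod 2),
# so x = (1,0) satisfies (A^T A)x = 0 even though A x = (1,1) != 0.
def matvec_gf2(m, v):
    return [sum(a * b for a, b in zip(row, v)) % 2 for row in m]

A = [[1, 1], [1, 1]]
At_A = [[sum(A[k][i] * A[k][j] for k in range(2)) % 2 for j in range(2)]
        for i in range(2)]
x = [1, 0]
print(matvec_gf2(At_A, x))   # [0, 0]: x lies in ker(A^T A)
print(matvec_gf2(A, x))      # [1, 1]: but not in ker(A)
```

This is why the solver's output vectors must still be combined and checked against A, and why an unlucky matrix can yield only trivial combinations.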
#43
Nov 2003
7460₁₀ Posts
Quote:
I frequently find that the CWI solver fails if the matrix is too sparse.
#44
Bamboozled!
May 2003
Down not across
10,753 Posts
Quote:
I try to aim for a small dense matrix, only slightly over-square. The CWI filter code makes it very hard to go over-dense but very easy to go over-sparse.

Paul