#1
fivemack ("(loop (#_fork))")
Feb 2006
Cambridge, England
1913₁₆ Posts
The weight target used when doing the clique-removal pass is a function of the amount of RAM available; running with the same target density on a 64GB machine and a 32GB machine gave me:

64GB:
Code:
Sun Sep 25 03:48:18 2016  commencing singleton removal, initial pass
Sun Sep 25 07:48:11 2016  memory use: 11024.0 MB
Sun Sep 25 07:48:15 2016  reading all ideals from disk
Sun Sep 25 07:50:46 2016  memory use: 12693.0 MB
Sun Sep 25 08:02:33 2016  commencing in-memory singleton removal
Sun Sep 25 08:08:32 2016  begin with 742388473 relations and 598583048 unique ideals
Sun Sep 25 09:36:22 2016  reduce to 506671424 relations and 341116535 ideals in 15 passes
Sun Sep 25 09:36:22 2016  max relations containing the same ideal: 37
Sun Sep 25 09:39:00 2016  reading ideals above 720000
Sun Sep 25 09:39:01 2016  commencing singleton removal, initial pass
Sun Sep 25 12:52:37 2016  memory use: 11024.0 MB
Sun Sep 25 12:52:39 2016  reading all ideals from disk
Sun Sep 25 12:59:10 2016  memory use: 22311.2 MB
Sun Sep 25 13:27:01 2016  keeping 414655273 ideals with weight <= 200, target excess is 2608246
Sun Sep 25 13:45:24 2016  commencing in-memory singleton removal
Sun Sep 25 13:48:07 2016  begin with 506671424 relations and 414655273 unique ideals
Sun Sep 25 14:23:57 2016  reduce to 506628374 relations and 414612223 ideals in 10 passes
Sun Sep 25 14:23:57 2016  max relations containing the same ideal: 200
Sun Sep 25 14:39:19 2016  removing 14788264 relations and 12866356 ideals in 2000000 cliques

32GB:
Code:
Thu Sep 29 10:09:57 2016  commencing singleton removal, initial pass
Thu Sep 29 11:27:53 2016  memory use: 11024.0 MB
Thu Sep 29 11:27:53 2016  reading all ideals from disk
Thu Sep 29 11:28:17 2016  memory use: 12693.0 MB
Thu Sep 29 11:29:18 2016  commencing in-memory singleton removal
Thu Sep 29 11:30:14 2016  begin with 742388473 relations and 598583048 unique ideals
Thu Sep 29 11:39:51 2016  reduce to 506671424 relations and 341116535 ideals in 15 passes
Thu Sep 29 11:39:51 2016  max relations containing the same ideal: 37
Thu Sep 29 11:40:34 2016  reading ideals above 720000
Thu Sep 29 11:40:35 2016  commencing singleton removal, initial pass
Thu Sep 29 12:54:52 2016  memory use: 11024.0 MB
Thu Sep 29 12:54:52 2016  reading large ideals from disk
Thu Sep 29 12:59:17 2016  keeping 376967749 ideals with weight <= 20, target excess is 40295770
Thu Sep 29 13:06:02 2016  memory use: 14097.3 MB
Thu Sep 29 13:06:02 2016  commencing in-memory singleton removal

Now that I've read the code, I can try to work around this with:
Code:
screen /home/nfsworld/msieve-svn/trunk/msieve -v -nc1 "filter_mem_mb=65536 target_density=144"

(Why not just use the 64GB machine for everything? Because it's the quad-Opteron I talked about elsewhere, and for the reasons discussed in that thread I didn't want to be using 22GB of its memory while timing linear algebra. Also, the 32GB machine has a fast SSD and can do singleton removal in 74 rather than 193 minutes. I'm rerunning the filtering on the 64GB machine.)

Last fiddled with by fivemack on 2016-09-30 at 09:26
#2
fivemack ("(loop (#_fork))")
Feb 2006
Cambridge, England
6419₁₀ Posts
A more interesting question: where does the default max-ideal-weight limit of 200 come from? Does it have to fit in a single byte for data-structure reasons? Since I got a large, slow matrix when the limit got set to 20, should I expect a smaller, or at least faster, matrix if I set it to 250?
#3
Tribal Bullet
Oct 2004
3,541 Posts
Just seeing this. The ram_size/2 figure is there to leave half your memory for non-filtering tasks; nothing in the code actually becomes larger or smaller based on the amount of RAM you declare usable. The declared figure is just compared to the size of one of the intermediate savefiles generated by the singleton removal, on the assumption that that's how much memory the clique removal and full merge will need.
The 200 figure was chosen to limit the explosion in memory use that comes from including ideals in the filtering that occur very often. Filtering would probably ignore those ideals anyway, so they just chew up space. You can get a reasonable but probably-too-light matrix with a target of 40.