Question: relations and target_density
Not sure if this is the right place to ask....
Just wondering: is there a "rule of thumb" or "best practice" relating the number of unique relations/ideals to building a matrix with a given target_density? I understand that filtering with a higher target_density results in a smaller matrix, but at some point the LA runtime wouldn't decrease much, since it also correlates with the weight/density, right?

Linear algebra runtime ~ (matrix dimension) x (# matrix non-zeros)

For now I'm trying to build matrices with the highest target_density by trial and error, as filtering takes only 1-3 hours on a single core. But that is of course a very crude way of doing it.

Below are some of my latest filtering/matrix-building attempts:
- I included raw relation counts, even though they're not a very useful benchmark
- Unique relations are counted after duplicate removal

[B]L1282 (SNFS 268)[/B]
[code]
Raw relations:                224,919,395
Unique relations:             167,509,444
Ideals with weight <= 200:     87,229,600
target_density=120            failed to build matrix
target_density=100            20.4M matrix
no target_density (standard=70?)  23.2M matrix
[/code]
[B]L1282 (SNFS 268) with extra relations:[/B]
[code]
Raw relations:                265,317,897
Unique relations:             192,368,008
Ideals with weight <= 200:    108,359,144
target_density=128            16.5M matrix
[/code]
[B]GC_3_503 (SNFS 245)[/B]
[code]
Raw relations:                235,480,826
Unique relations:             190,547,282
Ideals with weight <= 200:     97,276,483
target_density=128            failed to build matrix
target_density=120            9.4M matrix
[/code]
[B]GC_6_309 (SNFS 245)[/B]
[code]
Raw relations:                220,341,209
Unique relations:             180,049,741
Ideals with weight <= 200:     89,318,611
target_density=120            failed to build matrix
target_density=110            9.6M matrix
[/code]
So am I doing the right thing here?
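As a minimal sketch of the crude model in the question (runtime ~ dimension x non-zeros, with non-zeros ~ dimension x target_density), here is what it predicts for the two L1282 matrices from the table above. The cost units are arbitrary and the model is only the rough approximation stated in the post, not a measured benchmark:

```python
# Crude LA cost model from the question: runtime ~ dimension x non-zeros.
# With nnz ~ dim * target_density, this gives cost ~ dim^2 * density.
# The two matrices below are the L1282 results quoted above; units are arbitrary.

def crude_la_cost(dim, density):
    nnz = dim * density   # approximate non-zero count
    return dim * nnz      # ~ iterations (dim) x work per iteration (nnz)

dense = crude_la_cost(20.4e6, 100)   # target_density=100 matrix
sparse = crude_la_cost(23.2e6, 70)   # default-density matrix

print(f"cost ratio dense/sparse = {dense / sparse:.2f}")
```

Interestingly, under this crude model the smaller-but-denser matrix comes out slightly *worse* (ratio about 1.10), which illustrates why the product alone isn't a complete cost measure.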
The expression for the runtime as the product of matrix size and matrix non-zeros is only a crude approximation. Reducing the matrix size by a little has a much larger effect than reducing the non-zeros a little, because reducing the size removes iterations from the LA, each of which involves more work than just a sparse matrix multiply. My experience is that anytime you can reduce the matrix size, you should.
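The point about per-iteration work beyond the sparse multiply can be sketched by adding an overhead term to the crude model. The overhead constant `c` below is purely illustrative (not a measured value), but it shows how even a modest per-iteration cost proportional to the dimension tips the balance toward the smaller, denser matrix:

```python
# Refined sketch: iterations ~ dim, per-iteration work ~ nnz + c*dim,
# where c*dim stands for the dense vector operations each LA iteration
# performs besides the SpMV. So cost ~ dim^2 * (density + c).
# c is a made-up illustrative constant, not a measurement.

def la_cost(dim, density, c):
    return dim * (dim * density + c * dim)

for c in (0, 50):
    dense = la_cost(20.4e6, 100, c)   # smaller, denser matrix
    sparse = la_cost(23.2e6, 70, c)   # larger, sparser matrix
    winner = "smaller/denser" if dense < sparse else "larger/sparser"
    print(f"c={c:3d}: {winner} matrix is cheaper")
```

With c=0 this reduces to the crude product model; with a large enough overhead the smaller matrix wins, matching the advice to shrink the dimension whenever possible.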
Unfortunately, this advice conflicts with the other view: oversieving is extremely expensive in machine time compared to the savings in LA time. If you are trying to conserve machine time you should stop the moment you get any matrix at all, since further work will be good for you (the one doing the postprocessing) but bad for the many who are sieving. Of course if the sieving is finished you may as well get a better matrix result out of it.
I don't think we should be worried about the CPU sieving time; it is all done by NFS@Home members who only care about the points per WU.
[QUOTE=jasonp;384207]Unfortunately, this advice conflicts with the other view: oversieving is extremely expensive in machine time compared to the savings in LA time. If you are trying to conserve machine time you should stop the moment you get any matrix at all, since further work will be good for you (the one doing the postprocessing) but bad for the many who are sieving. Of course if the sieving is finished you may as well get a better matrix result out of it.[/QUOTE]
Post-processing requires far more human time than sieving does (per unit of machine time), and requires more resources on the machine doing it. It makes sense to me that we should oversieve a bit in a scenario like NFS@Home where there are many sievers and few post-processors, regardless of credit.
GC_6_309
1 Attachment(s)
[B]GC_6_309[/B]
[code]
prp58 factor:  6085866818157942100008449141585506076762716548196246692287
prp184 factor: 4076658755303056396210129927603505052012355288259007443085963314474222075742657803504633513372070999898378938575436414320400308539019619995854594853834360336737173475116605520564456757
[/code]
53 hours for a 9.6M matrix with -t 4 on a 3770K.
GC_6_308
GC_6_308 factors as:
[CODE]
prp105 factor: 233763242926795770770783011051636966711332303869203585993451740912983816057928344919651121297303116373491
prp130 factor: 1364030747215321229898984874515083799511267449325477119420067478184320684203367470825269420748854441759992126855759167289283869909
[/CODE]
Took 136.5 hours with target_density=100 and 4 threads on the i7-2630QM.
I'll take GC_8_266.
Expected time is ~115 hours.
Expected time of GC_6_316 is ~285 hrs from now. Ouch.
Had a terrible time building the matrix: with target_density=128, Msieve hard froze during "dual merge" towards the end of filtering. Not sure if the big-data bug reared its head or (more likely) my hardware is wearing out. After powering down, letting things cool, and rerunning with target_density=112, it did manage to build a matrix. FWIW.
L1286 reserved or not?
L1286 is ready for post-processing.
Is Thomas already working on it? Or shall I take it?
GC_3_511 done
[code]
Thu Oct  9 23:38:55 2014  sqrtTime: 4360
Thu Oct  9 23:38:55 2014  prp71 factor: 35438234040371523523471947099569016418489794607241650154704767635389631
Thu Oct  9 23:38:55 2014  prp153 factor: 110586988423059009749043846298072667904110058858771661333548101536298665985514603096106993556742826019147791814742661633783755250646936974958002275386837
[/code]
198.3 hours for a 13.9M matrix on an i7-2600 with -t 3.
Log at [url]http://pastebin.com/DsFJFtPc[/url]