#1596
"Victor de Hollander"
Aug 2011
the Netherlands
2³×3×7² Posts
Not sure if this is the right place to ask....

Just wondering, is there a "rule of thumb" or "best practice" relating the number of unique relations/ideals to building a matrix with a given target_density? I understand that filtering with a higher target_density results in a smaller matrix, but at some point the LA runtime wouldn't decrease that much, since it also correlates with the weight/density, right?

Linear algebra runtime ~ (matrix dimension) x (# matrix non-zeros)

For now I'm trying to build matrices with the highest target_density that works, by trial and error, since filtering takes only 1-3 hours on a single core. But that is of course a very crude way of doing it. Below are some of my latest filtering/matrix-building attempts:
- I included raw relations, even though they're not a very useful benchmark
- Unique relations are counted after duplicate removal

L1282 (SNFS 268)
Code:
Raw relations:             224,919,395
Unique relations:          167,509,444
Ideals with weight <= 200:  87,229,600
target_density=120                failed to build matrix
target_density=100                20.4M matrix
no target_density (standard=70?)  23.2M matrix
Code:
Raw relations:             265,317,897
Unique relations:          192,368,008
Ideals with weight <= 200: 108,359,144
target_density=128                16.5M matrix
GC_3_503 (SNFS 245)
Code:
Raw relations:             235,480,826
Unique relations:          190,547,282
Ideals with weight <= 200:  97,276,483
target_density=128                failed to build matrix
target_density=120                9.4M matrix
Code:
Raw relations:             220,341,209
Unique relations:          180,049,741
Ideals with weight <= 200:  89,318,611
target_density=120                failed to build matrix
target_density=110                9.6M matrix
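As a sanity check on that crude model, a small sketch can compare the two L1282 matrices that both built. This assumes non-zeros ≈ dimension × target_density, which is only roughly true of real msieve matrices, so treat the numbers as illustrative:

```python
# Crude LA cost proxy: runtime ~ dimension * non-zeros,
# approximating non-zeros ~ dimension * target_density.
# Dimensions are from the filtering runs above; the model itself
# is only a rough heuristic, not msieve's actual cost function.

def la_cost_proxy(dimension, density):
    """Relative LA cost: dimension * (dimension * density)."""
    return dimension * dimension * density

# L1282 (SNFS 268): the two matrices that built successfully
m_td100 = la_cost_proxy(20.4e6, 100)  # target_density=100
m_td70  = la_cost_proxy(23.2e6, 70)   # default target_density

print(f"td=100: {m_td100:.3e}")
print(f"td=70 : {m_td70:.3e}")
print(f"ratio : {m_td100 / m_td70:.2f}")
```

Under this crude proxy the smaller-but-denser td=100 matrix actually scores about 10% worse than the larger default-density one, which is exactly the ambiguity the question raises.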
#1597
Tribal Bullet
Oct 2004
DD7₁₆ Posts
The expression for the runtime as the product of matrix size and matrix non-zeros is only a crude approximation. Reducing the matrix size a little has a much larger effect than reducing the non-zeros a little, because reducing the size removes iterations from the LA, and each iteration involves more work than just a sparse matrix multiply. My experience is that any time you can reduce the matrix size, you should.

Unfortunately, this advice conflicts with the opposite view: oversieving is extremely expensive in machine time compared to the savings in LA time. If you are trying to conserve machine time, you should stop the moment you get any matrix at all, since further sieving is good for you (the one doing the postprocessing) but bad for the many who are doing the sieving. Of course, if the sieving is already finished, you may as well get the best matrix you can out of it.
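The point about iterations can be made concrete with a toy model: block Lanczos does roughly n/64 iterations, and each iteration pays for dense vector operations proportional to n on top of the sparse multiply. The constants below (`vector_work`, and non-zeros ≈ n × density) are invented for illustration, not measured from msieve:

```python
# Toy block-Lanczos cost model: iterations ~ n / block_size, and each
# iteration does a sparse multiply (~ non-zeros) plus dense vector
# work (~ n). All constants here are illustrative, not measured.

def la_time(n, density, vector_work=200, block_size=64):
    iterations = n / block_size
    per_iteration = n * density + vector_work * n  # SpMV + vector ops
    return iterations * per_iteration

# The two L1282 matrices discussed earlier in the thread:
smaller_denser = la_time(20.4e6, 100)  # target_density=100
larger_sparser = la_time(23.2e6, 70)   # default target_density

# Once per-iteration overhead is counted, the smaller matrix wins
# even though it carries more non-zeros per row.
print(smaller_denser / larger_sparser)
```

With `vector_work` set to zero the comparison flips, which is why the bare dimension × non-zeros product understates the value of a smaller matrix.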
#1598
"Carlos Pinho"
Oct 2011
Milton Keynes, UK
4947₁₀ Posts
I don't think we should be worried about the CPU sieving time; it is all done by NFS@Home members, who only care about the points per WU.
Last fiddled with by pinhodecarlos on 2014-10-02 at 11:25
#1599
Account Deleted
"Tim Sorbera"
Aug 2006
San Antonio, TX USA
1000010101011₂ Posts
Quote:
#1600
"Victor de Hollander"
Aug 2011
the Netherlands
10010011000₂ Posts
GC_6_309
Code:
prp58 factor: 6085866818157942100008449141585506076762716548196246692287
prp184 factor: 4076658755303056396210129927603505052012355288259007443085963314474222075742657803504633513372070999898378938575436414320400308539019619995854594853834360336737173475116605520564456757
Last fiddled with by VictordeHolland on 2014-10-04 at 17:19  Reason: Log attached
#1601
I moo ablest echo power!
May 2013
13×137 Posts
GC_6_308 factors as:
Code:
prp105 factor: 233763242926795770770783011051636966711332303869203585993451740912983816057928344919651121297303116373491
prp130 factor: 1364030747215321229898984874515083799511267449325477119420067478184320684203367470825269420748854441759992126855759167289283869909
#1602
I moo ablest echo power!
May 2013
6F5₁₆ Posts
I'll take GC_8_266.
#1603
I moo ablest echo power!
May 2013
13×137 Posts
Expected time is ~115 hours.
#1604
Jun 2012
3091₁₀ Posts
Expected time of GC_6_316 is ~285 hrs from now. Ouch.
Had a terrible time building the matrix: with target_density=128, Msieve hard froze during the "dual merge" towards the end of filtering. Not sure if the big-data bug reared its head or (more likely) my hardware is wearing out. After powering down, letting things cool, and then rerunning with target_density=112, it did manage to build a matrix. FWIW.
#1605
"Victor de Hollander"
Aug 2011
the Netherlands
1176₁₀ Posts
L1286 is ready for post-processing.
Is Thomas already working on it? Or shall I take it?
#1606
(loop (#_fork))
Feb 2006
Cambridge, England
2³×11×73 Posts
Code:
Thu Oct  9 23:38:55 2014  sqrtTime: 4360
Thu Oct  9 23:38:55 2014  prp71 factor: 35438234040371523523471947099569016418489794607241650154704767635389631
Thu Oct  9 23:38:55 2014  prp153 factor: 110586988423059009749043846298072667904110058858771661333548101536298665985514603096106993556742826019147791814742661633783755250646936974958002275386837
Log at http://pastebin.com/DsFJFtPc