14e and 15e are at present both sieving well ahead of the available linear algebra resources; I should probably move some of my resources into tidying up, but with 128 threads sieving locally I generate quite a lot of linear algebra demand before I even look at NFS@Home.
C187_142_59 done (queued 26/Sep/2016)
[code]
Sat Nov 26 05:02:20 2016  p61 factor: 2150486893617994651425932611447525781809518764361672956127463
Sat Nov 26 05:02:20 2016  p127 factor: 1510439317278894815928048894035007820156795646375424459362259062827696380521191725561534220149929988922800004670897132556417697
[/code]
168.8 hours for a 17.17M, density-140 matrix on 7 threads of an E5-2650v3. Log attached and at [url]http://pastebin.com/jrzRN6e4[/url] (there were 2479 erroneous lines in the relation file, and I ran filtering three times, which is why the log is so long as to require gzipping).
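For anyone budgeting their own runs: block Lanczos time grows roughly as (matrix dimension)² × density, since the iteration count scales with the dimension and each matrix-vector product costs about dimension × density. A rough sketch using the 17.17M / density-140 / 168.8-hour data point above as the reference (the quadratic scaling is a rule of thumb, not a measured fit, and it assumes comparable hardware and thread count):

```python
def estimate_la_hours(n_rows, density,
                      ref_rows=17.17e6, ref_density=140, ref_hours=168.8):
    """Rough block-Lanczos runtime estimate.

    Iterations scale ~linearly with matrix dimension, and each
    iteration costs ~dimension * density, so total time scales
    roughly as dimension^2 * density.  The default reference point
    is the 17.17M density-140 matrix timed above.
    """
    return ref_hours * (n_rows / ref_rows) ** 2 * (density / ref_density)

# A hypothetical 25M-row, density-130 matrix on similar hardware:
print(round(estimate_la_hours(25e6, 130), 1))
```

On these assumptions a 25M-row matrix comes out at a bit over 330 hours, which matches the intuition that matrix size dominates the cost.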
Reserving C249_128_105. ETA morning 7 December
|
[QUOTE=swellman;447959]Reserving C195_134_124 (14e). Finally managed to get a 32-bit job into LA!
ETA is 268 hours, so ~9 Dec.[/QUOTE] Which TD did you use (and how many unique relations did you have), and how much memory is the LA using?
[QUOTE=pinhodecarlos;447997]Which td did you use (share nb of unique relations) and how much memory LA is using?[/QUOTE]
I used TD=130, with 395M unique relations (482M raw relations at the start). Using -t 6 on an older i7; 10.4 GB used by LA according to windoze.
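For reference, the knobs mentioned above map onto msieve's post-processing stages. A sketch, not the exact command lines used here (the data/log file names and thread count are placeholders; msieve reads relations from its .dat file):

```shell
# Filtering (-nc1) with an explicit target density; a higher TD
# gives a smaller but denser matrix.
msieve -v -nc1 "target_density=130"

# Linear algebra (-nc2) on 6 threads, then the square root (-nc3).
msieve -v -t 6 -nc2
msieve -v -nc3
```

Re-running only `-nc1` with a different target_density is the usual way to retry when the matrix fails to build or the square root stalls.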
[QUOTE=pinhodecarlos;447957]In my case managed to build the matrix with 320M and target density set to 130 but got that error after LA completion, square root stalled.[/QUOTE]
Okay, so it sounds to me like the problem was still that there were too [I]many[/I] relations, not too few. Serge should probably be made aware of this discussion, since it seems that always prescribing more sieving when this error occurs can be counterproductive.

And BTW, I apologize for so unceremoniously stealing this number from you. You had asked for more sieving, and before adding more work I wanted to test my theory that it didn't need it. Of course, the only way I could think of to test that was to try the post-processing myself, and getting an answer meant basically completing the job.
No worries Jon.
[QUOTE=swellman;447960]I don't know if there is any data on this issue. The main reason I proposed 32-bit jobs was to feed the hungry grid. Not a great reason but when test sieving showed a reasonable yield on 14e/32 for a given poly, nominating it for 14e seemed a better option than just parking it waiting for the 15e queue to decrease, especially when 14e was going dry. But you're right about the 32-bit jobs backing up in postprocessing, so I've abandoned the practice. Sorry if my good intentions led to a bad place.
On an up note, I've managed to start post-processing 32-bit jobs again on 14e, so I'm hoping to help clean up the backlog I inadvertently created.[/QUOTE] Yes, it's hard to know the right thing to do here. My gut feeling is that between all the different queues, we should not intentionally use parameters which end up requiring more processor time than could be achieved with better parameter selection. So I'd rather avoid taking a number which could be most efficiently handled on the 15e queue and putting it on the 14e queue just to keep that one from running out.

Now I don't want to sound too doctrinaire about this. The history of NFS has many examples of doing something less than optimal as a concession to practicality (e.g. using small factor bases and oversieving in order to make the linear algebra tractable). And it would certainly be nice if we could keep the 14e queue from running out. But it would be nicer if we could accomplish that without resorting to jobs that shrink the pool of people willing to do the post-processing.

I think there's still a sweet spot for 14e: jobs that are too hard for most people to do on personal hardware, but still optimal with the 14e siever. We just don't seem to be able to get them pre-tested with ECM fast enough. And along those lines, perhaps it's not that important to do the full ECM testing for these numbers before they get queued (where "full" means whatever guideline is most in vogue: 2/9, or something else). After all, too few ECM curves is just another form of suboptimal resource usage, akin to using the wrong siever, and if something has to give, it's not clear that's such a bad one. (FWIW, I haven't been post-processing lately because my cores are all tied up with ECM, getting some HCN numbers ready for the queue.)

In any case, I want to make sure you don't interpret my previous musings as any kind of rebuke. We're all sort of feeling our way to the best practices here, and many on this forum have a lot more experience with this than I do, you included.
Taking C257_128_111 (large matrix, ETA 12 December)
Will try C194_142_70 next.
[QUOTE=jyb;448018]Yes, it's hard to know the right thing to do here. My gut feeling is that between all the different queues, we should not intentionally use parameters which end up requiring more processor time than could be achieved with better parameter selection. So I'd rather avoid taking a number which could be most efficiently handled on the 15e queue and putting it on the 14e queue just to keep that one from running out.
Now I don't want to sound too doctrinaire about this. The history of NFS has many examples of doing something that's less than optimal as a concession to practicality (e.g. using small factor bases and oversieving in order to make the linear algebra tractable). And it would certainly be nice if we could keep the 14e queue from running out. But it would be nice if we could accomplish that without resorting to jobs that cut down the pool of people willing to do the post-processing.

I think there's still a sweet spot for 14e, jobs that are too hard for most people to do on personal hardware but still are optimal with the 14e siever. We just don't seem to be able to get them pre-tested with ECM fast enough. And along those lines, perhaps it's not that important to do the full ECM testing for these numbers before they get queued (where "full" means whatever guideline is most in vogue (2/9, or something else)). After all, too-few ECM curves is just another form of suboptimal resource usage, akin to using the wrong siever, and if something has to give, then it's not clear that's such a bad one. (FWIW, I haven't been post-processing lately because my cores are all tied up with ECM, getting some HCN numbers ready for the queue.)

In any case, I want to make sure you don't interpret my previous musings as any kind of rebuke. We're all sort of feeling our way to the best practices here, and many on this forum have a lot more experience with this than I do, you included.[/QUOTE] Your comments are spot on and well mannered; I've taken no offense, so no worries there. Yes, as to 14e/32, I'm now in the 'don't do it' camp. Feeding the grid is not sufficient reason to burden the post-processing folks, not to mention forcing the sievers to do non-optimal tasks.
And feeding the masses does not seem to guarantee they stick around: the throughput of NFS@Home has steadily dropped over the last few days, and most of the sieving resources have shifted to 16e (though maybe this is Greg shifting things around behind the scenes), despite a queue long enough to keep the grid fed. Maybe there's another challenge somewhere else. Regardless, I've spent the last month optimizing my poly/siever choices and I will stick with them. And ECM is still a big part of the process - I've got lots of partial t60 work ahead of me. Happy factoring!