[QUOTE=bsquared;130302]See post #8 in this thread:
[URL]http://www.mersenneforum.org/showthread.php?t=10003[/URL] [/QUOTE] That's just what I was looking for. Thanks for that.
So, I'm trying "-f 50000000", and it says "Special q lower bound 50000000 below FB bound 8e+007".
So, my guess is to set either alim or rlim less than 80000000. But which one, or both? And to what lower value should I set it? Should I change any other values? I'm sorry for the amateur questions, but I'd like to get this right.
[QUOTE=WraithX;130312]So, I'm trying "-f 50000000", and it says "Special q lower bound 50000000 below FB bound 8e+007".
So, my guess is to set either alim or rlim less than 80000000. But which one, or both? And to what lower value should I set it? Should I change any other values? I'm sorry for the amateur questions, but I'd like to get this right.[/QUOTE] Set alim to 50000000. You can keep the other values as they are.
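For anyone hitting the same error, the fix amounts to one line in the siever job file. Here is a sketch of the relevant parameters and invocation, assuming the usual GGNFS lasieve conventions; the job file name, output file name, and `-c` count are illustrative, and your file will contain other lines (n, skew, the polynomial coefficients, lpba/lpbr, etc.) that stay unchanged:

```text
# excerpt of a GGNFS-style job file (other parameters omitted)
alim: 50000000    # algebraic factor-base bound, lowered to match the special-q start
rlim: 80000000    # rational factor-base bound can stay as it was

# then sieve a block of 100k special-q starting at 50M on the algebraic side:
gnfs-lasieve4I14e -a job.poly -f 50000000 -c 100000 -o rels.out
```

The point is simply that the siever refuses special-q below the factor-base bound on the side being sieved, so the bound on that side has to come down to (at most) the `-f` value.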
[QUOTE=bdodson;130213] ... Wish me luck with getting relns here
within the next day or so. I seem to have lasieve running on the Opteron cluster under condor (5hrs of 12hrs or so); except that I won't know whether the output is "transferred" back from the local node to the condor master until the range finishes (or, otherwise, just vanishes). ... so as I was saying, wish me luck on today's 2+5 cpus. -Bruce[/QUOTE] Tom's doing a great job. Here's
[code]
-rw-r--r-- 1 bad0 faculty 14593677 Mar 30 08:19 3+512.123.00M-123.05M
-rw-r--r-- 1 bad0 faculty 15331669 Mar 30 09:13 3+512.123.05M-123.10M
-rw-r--r-- 1 bad0 faculty 15023040 Mar 30 08:45 3+512.123.10M-123.15M
-rw-r--r-- 1 bad0 faculty 14775508 Mar 30 08:23 3+512.123.15M-123.20M
-rw-r--r-- 1 bad0 faculty 14795865 Mar 30 08:43 3+512.123.20M-123.25M
...
wc -l 3+512.123*
 130277 3+512.123.00M-123.05M
 136873 3+512.123.05M-123.10M
 134111 3+512.123.10M-123.15M
 131864 3+512.123.15M-123.20M
 132083 3+512.123.20M-123.25M
 665208 total
[/code]
which appears to be 665,208 relns from the old quadcores (40*2*4 = 320 cores). The Opterons ran too (2*59 = 118 cores, when they're all up, which is rare); 258K from bsquared's range (my fault). I'll set the rest of 123M, then have a look at Greg's large nfsnet(?) number.

On the data, I have lots of space on the Opteron cluster (which also hosts the scheduler for the "old" quads). Good thing, since there's no ftp (only sftp); automated ftp seems unlikely. -Bruce
[quote=bdodson;130384]
258K from bsquared's range (my fault). [/quote] Where exactly did you overlap? It seems you're well outside my range (93 to 100M), unless I'm misunderstanding something. Not a problem if so, and if I'm not already done with the overlapped part then I'll just not do that particular part. I'm done up through 96M as of now.
[QUOTE=bsquared;130386]Where exactly did you overlap? It seems you're well outside my range (93 to 100M), unless I'm misunderstanding something.
[/QUOTE] I was fairly sure the jobs wouldn't run (this is a condor thing; ask Richard or Greg!). The range was 93.0-93.1. I reserved 123M after seeing your reservation of 93M-100M (nice!). My first jobs were set on the Opterons (without checking or reserving); when I did check, I reserved 123M and set the next quadcore jobs there. I wouldn't have mentioned running them at all, except that I didn't want to leave the impression that only the quadcores were running.

So the initial experiment was 2+5 jobs; I'm waiting for 5+10 (amd+xeon_quad) to finish 123M; then will try ... uhm, c.100+300 available, so maybe 20+60 jobs, 1M+3M (but on 3p536)? That sounds like the next sequence term would be 40+120, 2M+6M (these are 12-13 hour runs), along with 20 on the new quads. Hmm, 60+180+20 might be pushing things; maybe 50+150+20, which would double the 25% of Greg's max I was considering before the first jobs ran. -Bruce
[quote=bdodson;130419]I was fairly sure the jobs wouldn't run (this is a condor thing; ask Richard
or Greg!). The range was 93.0-93.1. I reserved 123M after seeing your reservation of 93M-100M (nice!). My first jobs were set on the Opterons (without checking or reserving); when I did check, I reserved 123M and set the next quadcore jobs there. I wouldn't have mentioned running them at all, except that I didn't want to leave the impression that only the quadcores were running. So the initial experiment was 2+5 jobs; I'm waiting for 5+10 (amd+xeon_quad) to finish 123M; then will try ... uhm, c.100+300 available, so maybe 20+60 jobs, 1M+3M (but on 3p536)? That sounds like the next sequence term would be 40+120, 2M+6M (these are 12-13 hour runs), along with 20 on the new quads. Hmm, 60+180+20 might be pushing things; maybe 50+150+20, which would double the 25% of Greg's max I was considering before the first jobs ran. -Bruce[/quote] Well, I guess we'll have a small amount of overlap, which I don't think is any big deal. I'll submit my files as normal and we'll let msieve sort 'em out :)

I know you are very productive with it, but I don't know anything about the condor cluster you mention. Is there a website I can go to to read more about it?

I'm running my ranges on a small cluster of dual dual-core Xeon 5160s and dual quad-core Xeon X5365s, available (nights and weekends, mostly) due to the generosity of my employers. Running part time I can do about 1M special-q a day on this number.

I'm working up the nerve to ask them about longer-term usage of a single node... I'm thinking that a 16Gb, dual quad-core X5365 would work nicely for post-processing. But I'm doubtful I'll be able to monopolize one box for so long :( - ben.
Explanation of very long reservation
I realize that I've made an enormous reservation, about 20 CPU-weeks of work; the reason is that I'm going off for a two-week vacation in a couple of weeks, and will be leaving ten CPUs running. I have only started 80-82, and that only yesterday (sieving 2^1188+1 is still quite a lot of work, not to mention the linear algebra).
We're getting nearly three relations/Q on average - I must have done trial sieving over a particularly sparse patch - so probably eighty million Q will suffice, and it would be sensible not to reserve beyond 125M. I hadn't expected to be able to get people with clusters involved with the sieving; if I had, I would have picked a significantly bigger number.
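The yield arithmetic behind that estimate, as a quick sketch (the 3 relations/Q and eighty million Q figures are the ones quoted above; the exact relation target for this number isn't stated, so this is only the implied ballpark):

```python
# Rough relation-count estimate implied by the observed yield
rels_per_q = 3.0           # "nearly three relations/Q on average"
q_needed = 80_000_000      # "probably eighty million Q will suffice"
expected_rels = rels_per_q * q_needed
print(f"{expected_rels / 1e6:.0f}M raw relations")  # about 240M
```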
[QUOTE=bsquared;130425]
I'm working up the nerve to ask them about longer term usage of a single node... I'm thinking that a 16Gb, dual quad core X5365 would work nicely for post-processing. But I'm doubtful I'll be able to monopolize one box for so long :([/QUOTE] A 16GB dual quad-core is overkill for post-processing on numbers this 'small'; my hope is to over-sieve sufficiently that I can get a matrix which fits on a 4GB quad-core (we nearly managed this for 2,841-, which was a noticeably bigger number). The matrix-production step needed an 8GB machine for 2,841- or the 165-digit GNFS job of earlier this year; I have one of those available.

The 16GB machine would be very interesting for the matrix-building step if we decide to try to beat Aoki's larger GNFS or smaller SNFS record, both of which are, I think, just about plausible targets for a lattice-sieving effort with substantial distributed resources (20 CPU-years or so), at least if msieve consistently works as much better than Aoki's code as I saw on 6,383+. The matrix-running step for those would be several months on a quad-core, so it really has to run on a personal machine, but the matrix-build takes a bit more memory for no more than a week.
[QUOTE=bsquared;130425]Well, I guess we'll have a small amount of overlap, which I don't think is any big deal. I'll submit my files as normal and we'll let msieve sort 'em out :)
I know you are very productive with it, but I don't know anything about the condor cluster you mention. Is there a website I can go to to read more about it? - ben.[/QUOTE] I didn't reserve that range; you did. I only ran those q's as an experiment, and don't plan on submitting the output (if it's still around). Also, duplicates make the filtering harder, and should usually be avoided.

The first google entry on condor is the "condor project homepage" [url]www.cs.wisc.edu/condor/[/url] (the other entries involve various vultures, "carrion-eaters"). The program's initial purpose was to scavenge for spare idle cycles; it has since also evolved into a scheduler on clusters. -Bruce
[QUOTE=bdodson;130419] ... I didn't want to leave the impression that only the quadcores
were running. So the initial experiment was 2+5 jobs; I'm waiting for 5+10 (amd+xeon_quad) to finish 123M; then will try ... uhm, c.100+300 available, so maybe 20+60 jobs, 1M+3M (but on 3p536)? That sounds like the next sequence term would be 40+120, 2M+6M (these are 12-13 hour runs), along with 20 on the new quads. Hmm, 60+180+20 might be pushing things; maybe 50+150+20, which would double the 25% of Greg's max I was considering before the first jobs ran. -Bruce[/QUOTE] Uhm, post #17 was a report of completion of 92M-93M. It was intended as a note of the file having been uploaded (though I didn't explicitly say so). This is a report of completion on 123M-124M, also uploaded (92M hasn't been counted yet, it seems?). These complete my 3+512 reservations, for the moment.

The relns in 123M are the first ones from condor (Opteron+old quadcore). Doesn't look to me like I'll need the mpi; at least, not for job submissions. Just a bit of shell scripts and some "sed -e" goes most of the way. I submitted 3M to the Opterons, 7M to the "old" quads, both over condor, with another 1M on the new quads. That's 60+140+20 cpus, which will take (well) under a day (presuming everything runs). The new number was supposed to need 130M, so 11M/day ... well, even 10M/day would be under two weeks for completely sieving 3p536. Guess we'll see (if you don't mind the off-topic posts).

The combination of Greg's binary(s) and Tom's instructions (here and in the eleven-smooth thread) seems to have done the trick to get me running --- guess the new un-condor-ed cluster helped for seeing what ought to happen. -Bruce

(9M so far for Greg's 3,536+ c252; so 130-9 = 121M left to go)