mersenneforum.org  

2008-03-31, 03:05   #23
WraithX (Mar 2006)

Quote (Originally posted by bsquared):
That's just what I was looking for. Thanks for that.

Last fiddled with by WraithX on 2008-03-31 at 03:06
2008-03-31, 05:12   #24
WraithX (Mar 2006)

So, I'm trying "-f 50000000", and it says "Special q lower bound 50000000 below FB bound 8e+007".

So, my guess is to set either alim or rlim less than 80000000. But which one, or both? And to what lower value should I set it? Should I change any other values? I'm sorry for the amateur questions, but I'd like to get this right.
2008-03-31, 06:07   #25
Andi47 (Oct 2004, Austria)

Quote (Originally posted by WraithX):
So, I'm trying "-f 50000000", and it says "Special q lower bound 50000000 below FB bound 8e+007".

So, my guess is to set either alim or rlim less than 80000000. But which one, or both? And to what lower value should I set it? Should I change any other values? I'm sorry for the amateur questions, but I'd like to get this right.
Set alim to 50000000. You can keep the other values as they are.
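The constraint behind the siever's warning can be sketched as a small check. This is an illustration only, not the siever's actual code; the assumption that algebraic-side sieving compares -f against alim (and rational-side against rlim) is inferred from the advice above.

```python
# Sketch of the lattice siever's sanity check: the special-q lower bound
# (the -f argument) must not fall below the factor-base bound on the side
# being sieved. Parameter names alim/rlim follow GGNFS job files.

def check_special_q(f_lower, alim, rlim, side="algebraic"):
    """Return a warning string in the spirit of the siever's message, or None."""
    fb_bound = alim if side == "algebraic" else rlim
    if f_lower < fb_bound:
        return f"Special q lower bound {f_lower} below FB bound {fb_bound}"
    return None

# WraithX's situation: -f 50000000 with the factor-base bound at 80000000
print(check_special_q(50_000_000, 80_000_000, 80_000_000))
# Andi47's fix: lower alim to 50000000 so -f is no longer below the bound
print(check_special_q(50_000_000, 50_000_000, 80_000_000))
```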
2008-03-31, 18:30   #26
bdodson (Jun 2005, lehigh.edu)

Quote (Originally posted by bdodson):
... Wish me luck with getting relns here
within the next day or so. I seem to have lasieve running
on the Opteron cluster under condor (5hrs of 12hrs or so);
except that I won't know whether the output is "transferred"
back from the local node to the condor master until the
range finishes (or, otherwise, just vanishes).

... so as I was saying, wish me
luck on today's 2+5 cpus. -Bruce
Tom's doing a great job. Here's

Code:
-rw-r--r--  1 bad0 faculty 14593677 Mar 30 08:19 3+512.123.00M-123.05M
-rw-r--r--  1 bad0 faculty 15331669 Mar 30 09:13 3+512.123.05M-123.10M
-rw-r--r--  1 bad0 faculty 15023040 Mar 30 08:45 3+512.123.10M-123.15M
-rw-r--r--  1 bad0 faculty 14775508 Mar 30 08:23 3+512.123.15M-123.20M
-rw-r--r--  1 bad0 faculty 14795865 Mar 30 08:43 3+512.123.20M-123.25M
...
 wc -l 3+512.123*

  130277 3+512.123.00M-123.05M
  136873 3+512.123.05M-123.10M
  134111 3+512.123.10M-123.15M
  131864 3+512.123.15M-123.20M
  132083 3+512.123.20M-123.25M
  665208 total
which appears to be 665,208 relns from the old quadcores (40*2*4 = 320
cores). The Opterons ran too (2*59 = 118 cores, when they're all up, which
is rare); 258K from bsquared's range (my fault). I'll set the rest of 123M,
then have a look at Greg's large nfsnet(?) number.
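The counts above are easy to double-check; a quick sketch tallying the `wc -l` output against the stated total and the core count:

```python
# Relation counts per range file, as listed in the wc -l output above.
counts = {
    "3+512.123.00M-123.05M": 130277,
    "3+512.123.05M-123.10M": 136873,
    "3+512.123.10M-123.15M": 134111,
    "3+512.123.15M-123.20M": 131864,
    "3+512.123.20M-123.25M": 132083,
}
total = sum(counts.values())
print(total)          # matches the wc -l total: 665208

# The quadcore arithmetic from the post: 40 nodes x 2 sockets x 4 cores.
cores = 40 * 2 * 4
print(cores)          # 320 cores
```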

On the data side, I have lots of space on the Opteron cluster (which also
hosts the scheduler for the "old" quads). Good thing, too: there's no ftp
(only sftp), so automated ftp transfers seem unlikely. -Bruce
2008-03-31, 18:36   #27
bsquared "Ben" (Feb 2007)

Quote (Originally posted by bdodson):
258K from bsquared's range (my fault).
Where exactly did you overlap? It seems you're well outside my range (93 to 100M), unless I'm misunderstanding something.

Not a problem if so; if I'm not already done with the overlapped part, I'll simply skip it. I'm done up through 96M as of now.
2008-04-01, 00:03   #28
bdodson (Jun 2005, lehigh.edu)

Quote (Originally posted by bsquared):
Where exactly did you overlap? It seems you're well outside my range (93 to 100M), unless I'm misunderstanding something.
I was fairly sure the jobs wouldn't run (this is a condor thing; ask Richard
or Greg!). The range was 93.0-93.1. I reserved 123M after seeing your
reservation of 93M-100M (nice!). My first jobs were set on the Opterons
(without checking or reserving); when I did check, I reserved 123M and set
the next quadcore jobs there. I wouldn't have mentioned running them at all,
except that I didn't want to leave the impression that only the quadcores
were running. So the initial experiment was 2+5 jobs; I'm waiting for
5+10 (amd+xeon_quad) to finish 123M; then will try ... uhm, c.100+300
available, so maybe 20+60 jobs, 1M+3M (but on 3p536)? That sounds like
the next sequence term would be 40+120, 2M+6M (these are 12-13 hour
runs), along with 20 on the new quads. Hmm, 60+180+20 might be pushing
things; maybe 50+150+20, which would double the 25% of Greg's max I
was considering before the first jobs ran. -Bruce
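The overlap in question is easy to pin down. A sketch of comparing two reservations as half-open ranges, in millions of special-q (range values taken from the posts above):

```python
# Check two special-q reservations for overlap (units: millions of q).

def overlap(a, b):
    """Return the overlapping sub-range of two half-open ranges, or None."""
    lo, hi = max(a[0], b[0]), min(a[1], b[1])
    return (lo, hi) if lo < hi else None

bsquared_range = (93.0, 100.0)   # bsquared's reservation, 93M-100M
bdodson_jobs = (93.0, 93.1)      # bdodson's experimental condor jobs
print(overlap(bsquared_range, bdodson_jobs))   # the 93.0-93.1 slice

bdodson_range = (123.0, 124.0)   # bdodson's actual reservation at 123M
print(overlap(bsquared_range, bdodson_range))  # no overlap: None
```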
2008-04-01, 01:24   #29
bsquared "Ben" (Feb 2007)

Quote (Originally posted by bdodson):
I was fairly sure the jobs wouldn't run (this is a condor thing; ask Richard
or Greg!). The range was 93.0-93.1. I reserved 123M after seeing your
reservation of 93M-100M (nice!). My first jobs were set on the Opterons
(without checking or reserving); when I did check, I reserved 123M and set
the next quadcore jobs there. I wouldn't have mentioned running them at all,
except that I didn't want to leave the impression that only the quadcores
were running. So the initial experiment was 2+5 jobs; I'm waiting for
5+10 (amd+xeon_quad) to finish 123M; then will try ... uhm, c.100+300
available, so maybe 20+60 jobs, 1M+3M (but on 3p536)? That sounds like
the next sequence term would be 40+120, 2M+6M (these are 12-13 hour
runs), along with 20 on the new quads. Hmm, 60+180+20 might be pushing
things; maybe 50+150+20, which would double the 25% of Greg's max I
was considering before the first jobs ran. -Bruce
Well, I guess we'll have a small amount of overlap, which I don't think is any big deal. I'll submit my files as normal and we'll let msieve sort 'em out :)

I know you are very productive with it, but I don't know anything about the condor cluster you mention. Is there a website I can go to to read more about it? I'm running my ranges on a small cluster of dual dual core xeon 5160's and dual quad core xeon X5365's, available (nights and weekends, mostly) due to the generosity of my employers. Running part time I can do about 1M special-q a day on this number.
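The stated throughput makes the reservation length easy to estimate; a back-of-envelope sketch (the arithmetic is just an illustration of the figures in the post):

```python
# At roughly 1M special-q per day (part-time cluster use, per the post),
# how long does the 93M-100M reservation take?
q_per_day = 1_000_000
reservation = 100_000_000 - 93_000_000   # 7M special-q reserved
print(reservation / q_per_day)           # about a week of sieving
```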

I'm working up the nerve to ask them about longer-term usage of a single node... I'm thinking that a 16GB, dual quad-core X5365 would work nicely for post-processing. But I'm doubtful I'll be able to monopolize one box for so long :(

- ben.
2008-04-01, 10:13   #30
fivemack (loop (#_fork)) (Feb 2006, Cambridge, England)

Explanation of very long reservation

I realize that I've made an enormous reservation, about 20 CPU-weeks of work; the reason is that I'm going off for a two-week vacation in a couple of weeks, and will be leaving ten CPUs running. I have only started 80-82, and that only yesterday (sieving 2^1188+1 is still quite a lot of work, not to mention the linear algebra).

We're getting nearly three relations/Q on average - I must have done trial sieving over a particularly sparse patch - so probably eighty million Q will suffice, and it would be sensible not to reserve beyond 125M. I hadn't expected to be able to get people with clusters involved in the sieving; if I had, I would have picked a significantly bigger number.
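The yield estimate above can be sketched numerically (the 2.9 figure stands in for "nearly three relations/Q"; the resulting relation count is an illustration, not a figure from the thread):

```python
# Back-of-envelope relation yield for the planned sieving region.
rels_per_q = 2.9          # "nearly three relations/Q on average"
q_sieved = 80_000_000     # "probably eighty million Q will suffice"
print(round(rels_per_q * q_sieved))   # roughly 232 million relations
```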
2008-04-01, 10:34   #31
fivemack (loop (#_fork)) (Feb 2006, Cambridge, England)

Quote (Originally posted by bsquared):
I'm working up the nerve to ask them about longer term usage of a single node... I'm thinking that a 16Gb, dual quad core X5365 would work nicely for post-processing. But I'm doubtful I'll be able to monopolize one box for so long :(
A 16GB dual quad-core is overkill for post-processing on numbers this 'small'; my hope is to over-sieve sufficiently that I can get a matrix which fits on a 4GB quad-core (we nearly managed this for 2,841-, which was a noticeably bigger number). The matrix-production step needed an 8GB machine for 2,841- or the 165-digit GNFS job of earlier this year; I have one of those available.

The 16GB machine would be very interesting for the matrix-building step if we decide to try to beat Aoki's larger GNFS or smaller SNFS record, both of which are, I think, just about plausible targets for a lattice-sieving effort with substantial distributed resources (20 CPU-years or so), at least if msieve consistently works as much better than Aoki's code as I saw on 6,383+. The matrix-running step for those would be several months on a quad-core, so it really has to run on a personal machine, but the matrix-build takes a bit more memory for no more than a week.
2008-04-01, 16:43   #32
bdodson (Jun 2005, lehigh.edu)

Quote (Originally posted by bsquared):
Well, I guess we'll have a small amount of overlap, which I don't think is any big deal. I'll submit my files as normal and we'll let msieve sort 'em out :)

I know you are very productive with it, but I don't know anything about the condor cluster you mention. Is there a website I can go to to read more about it?
- ben.
I didn't reserve that range; you did; I only ran those q's as an experiment,
and don't plan on submitting the output (if it's still around). Also, duplicates
make the filtering harder, and should usually be avoided.
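Duplicate relations between overlapping ranges do get removed during filtering, but they bloat the data first. A minimal pre-filter deduplication sketch, assuming the GGNFS relation format where each line begins with the "a,b" pair followed by a colon (the sample lines are toy data, not real relations):

```python
# Deduplicate relations by their (a,b) prefix, keeping first occurrences.
# Assumes GGNFS-style lines of the form "a,b:<rational>:<algebraic>".

def dedup_relations(lines):
    """Keep the first occurrence of each (a,b) pair; drop the rest."""
    seen = set()
    unique = []
    for line in lines:
        key = line.split(":", 1)[0]   # the "a,b" prefix identifies a relation
        if key not in seen:
            seen.add(key)
            unique.append(line)
    return unique

rels = ["1,2:abc:def", "3,4:ghi:jkl", "1,2:abc:def"]  # toy example
print(len(dedup_relations(rels)))   # 2 unique relations
```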

The first google entry on condor is the "condor project homepage"

www.cs.wisc.edu/condor/

The other entries involve various vultures ("carrion-eaters"). The initial
purpose of the program was to scavenge spare idle cycles; it has also
evolved into a scheduler on clusters. -Bruce
2008-04-02, 02:40   #33
bdodson (Jun 2005, lehigh.edu)

Quote (Originally posted by bdodson):
... I didn't want to leave the impression that only the quadcores
were running. So the initial experiment was 2+5 jobs; I'm waiting for
5+10 (amd+xeon_quad) to finish 123M; then will try ... uhm, c.100+300
available, so maybe 20+60 jobs, 1M+3M (but on 3p536)? That sounds like
the next sequence term would be 40+120, 2M+6M (these are 12-13 hour
runs), along with 20 on the new quads. Hmm, 60+180+20 might be pushing
things; maybe 50+150+20, which would double the 25% of Greg's max I
was considering before the first jobs ran. -Bruce
Uhm, post #17 was a report of completion of 92M-93M. It was intended
as a note of the file having been uploaded (though I didn't explicitly say so).
This is a report of completion on 123M-124M, also uploaded (92M hasn't been
counted yet, it seems?). These complete my 3+512 reservations, for the
moment. The relns in 123M are the first ones from condor (Opteron+old
quadcore).

Doesn't look to me like I'll need the mpi; at least, not for job submissions.
Just a few shell scripts and some "sed -e" go most of the way. I
submitted 3M to the Opterons, 7M to the "old" quads, both over condor,
with another 1M on the new quads. That's 60+140+20 cpus, which will
take (well) under a day (presuming everything runs). The new number
was supposed to need 130M, so 11M/day ... well, even 10M/day would
be under two weeks for completely sieving 3p536. Guess we'll see (if
you don't mind the off-topic posts). The combination of Greg's binary(s) and
Tom's instructions (here and in the eleven-smooth thread) seems to have
been enough to get me running --- I guess the new un-condor-ed cluster
helped for seeing what ought to happen. -Bruce

(9M so far for Greg's 3,536+ c252; so 130-9 = 121M left to go)
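The per-node job setup described above (shell scripts plus "sed -e" over a template) amounts to splitting a special-q range into one chunk per condor job; a sketch of that splitting step (the range and job count are illustrative, not the actual submission):

```python
# Split a special-q range into equal contiguous chunks, one per job,
# as a stand-in for the shell/sed job-file generation described above.

def split_range(q0, q1, n_jobs):
    """Split [q0, q1) into n_jobs contiguous (start, count) chunks."""
    step = (q1 - q0) // n_jobs
    chunks = [(q0 + i * step, step) for i in range(n_jobs)]
    # give any remainder from the integer division to the last chunk
    last_start, _ = chunks[-1]
    chunks[-1] = (last_start, q1 - last_start)
    return chunks

# e.g. 1M special-q across 20 jobs of 50000 q each
jobs = split_range(124_000_000, 125_000_000, 20)
print(len(jobs), jobs[0])   # 20 jobs, first starting at 124000000
```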