#1
(loop (#_fork))
Feb 2006
Cambridge, England
6424_10 Posts
This is quite a large SNFS number with a decidedly awkward polynomial (no real roots, no algebraic roots modulo primes ==7 or ==11 mod 12, two roots modulo primes ==5 mod 12, six roots modulo some primes ==1 mod 12), and so the number of usable special-Q in a particular range is really rather low.
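Those root counts are easy to sanity-check by brute force. A bash sketch (a hypothetical helper, not part of any NFS tool) counting the roots of the algebraic polynomial 9x^6 + 1 modulo a small prime; it only works while 9(p-1)^6 fits in 64-bit shell arithmetic, which is plenty for spot checks:

```shell
# Brute-force count of roots of f(x) = 9x^6 + 1 modulo a small prime p.
# f comes from the c6: 9, c0: 1 lines of the poly file below.
roots_mod_p() {
  local p=$1 x n=0
  for ((x = 0; x < p; x++)); do
    if (( (9 * x**6 + 1) % p == 0 )); then
      ((n++))
    fi
  done
  echo "$n"
}

# e.g.  roots_mod_p 7  -> 0   (7 == 7 mod 12: no roots)
#       roots_mod_p 5  -> 2   (5 == 5 mod 12: two roots)
#       roots_mod_p 73 -> 6   (73 == 1 mod 12: one of the six-root primes)
```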
So this one does require the 32768x16384 per-Q sieving range that gnfs-lasieve4I15e offers, and parameter optimisation indicates that it does want 31-bit large primes and very large small primes. Feel free to go smaller (reduce rlim and alim) if you can't face ~400MB per core: 40M gets about 65% the yield of 80M, but the yield drops off very fast below that. Code:
n: 2652879528384736294387787089866884113161756949676609780113021980279955578028580515829763316598420245173034168388765124717208315443806148182904105317960270313646866242717807445467423472021744641
skew: 1
c6: 9
c0: 1
Y1: -1
Y0: 35917545547686059365808220080151141317043
rlim: 80000000
alim: 80000000
lpbr: 31
lpba: 31
mfbr: 62
mfba: 62
rlambda: 2.6
alambda: 2.6

Upload in the same way as the 2^1188+1 thread http://www.mersenneforum.org/showthread.php?t=10003; please call your files something like 3+512.27M-27.3M.bz2 so I don't confuse them with the 2^1188+1 ones. Not sure how much the sieving periods will overlap.

Reservations (between 40M and 125M, please)

**andi47 40-41 (3593082)
**andi47 41-42 (3507498)
andi47 42-43
bsquared 43-50
**wraithX 50-60 (33605887)
fivemack 67-72
**fivemack 72-80 (24015345 = 6047723+6049878+5985699+5932045)
**fivemack 80-82 (3104511+3040339)
**fivemack 82-90 (23116337 = 5827643+5795180+5791581+5701933)
**bdodson 90-91 (with gaps: 1431138 relations collected)
**bdodson 91-92 (no visible gaps; 2929210 relations)
**bdodson 92-93 (2962600)
**bsquared 93-94 (2878996)
**bsquared 94-96 (5816329)
**bsquared 96-97 (2886020)
**bsquared 97-98 (2873406)
**bsquared 98-101 (8550484)
**bsquared 101-115 (38580970)
**bsquared 115-120 (13328671)
**bsquared 120-123 (7915518)
**bdodson 123-124 (2636395)
**bsquared 124-125 (2615423)

relation counts ('large ideals' are >2500000)
Code:
02/04/2008 00:06 16212022 relations, 16088425 unique relations and about 38580610 large ideals
11/04/2008 00:15 42926881 relations, 42092537 unique relations and about 63275041 large ideals
29/04/2008 22:15 189611855 relations, 172703615 unique relations and about 96169746 large ideals
weight of 11436921 cycles is about 743426050 (65.00/cycle)

Last fiddled with by fivemack on 2008-04-30 at 17:24 Reason: add some reported uploads
#2
Oct 2004
Austria
2×17×73 Posts
reserving 40M - 41M, using rlim = 80M and alim = 40M
Edit: Memory use approx. 251 MB

Last fiddled with by Andi47 on 2008-03-25 at 09:41
#3
Jun 2005
lehigh.edu
2^10 Posts
reserving 90-91. There's something like 3.5Gb/core, so I'm using the
defaults. If I understand correctly, 10 ranges Code:
./gnfs-lasieve4I15e -a 512.poly -f 90000000 -c 10000 -o 3+512.90M-90.01M
./gnfs-lasieve4I15e -a 512.poly -f 90010000 -c 10000 -o 3+512.90.01M-90.02M
./gnfs-lasieve4I15e -a 512.poly -f 90020000 -c 10000 -o 3+512.90.02M-90.03M
...

and so on up through -f 90090000. With 2 cpus left to fiddle with B1 = 850M.

-Bruce
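For anyone scripting the same split, a bash sketch generating all hundred 10K-wide jobs needed to cover the full 90M-91M reservation, assuming the binary and 512.poly names used above and the thread's output-file naming:

```shell
# Emit the hundred 10000-wide lasieve jobs covering special-q 90M..91M,
# with output names matching the 3+512.90M-90.01M convention above.
gen_jobs() {
  local i f lo hi
  for ((i = 0; i < 100; i++)); do
    f=$((90000000 + i * 10000))              # first special-q of this job
    lo=$(printf '90.%02dM' "$i");       [ "$i" -eq 0 ]  && lo=90M
    hi=$(printf '90.%02dM' $((i + 1))); [ "$i" -eq 99 ] && hi=91M
    echo "./gnfs-lasieve4I15e -a 512.poly -f $f -c 10000 -o 3+512.$lo-$hi"
  done
}
```

Pipe `gen_jobs` into a file and hand batches of ten lines to each quad, or feed it straight to a queueing system.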
#4
Jun 2005
lehigh.edu
400_16 Posts
Quote:
at 49% and 51%. Looks like one of the quads has 3 cores reserved for the "head node". The timings seem to pick up. After a bit more than an hour, nine of the 1st ten report Code:
total yield: 14634, q=90015257 (0.26097 sec/rel)
total yield: 14422, q=90024421 (0.26437 sec/rel)
total yield: 14855, q=90035773 (0.26035 sec/rel)
total yield: 14589, q=90046001 (0.26246 sec/rel)
total yield: 14244, q=90055529 (0.26608 sec/rel)
total yield: 14754, q=90064421 (0.25815 sec/rel)
total yield: 14621, q=90074389 (0.25899 sec/rel)
total yield: 14192, q=90085753 (0.27056 sec/rel)
total yield: 14655, q=90095561 (0.25817 sec/rel)
total yield: 4095, q=90101129 (0.31718 sec/rel)

All of the initial timings were noticeably above .3, but after ten minutes ... hmmm, looks like I'll do better dropping one of the 10K's, then picking it up later.

-Bruce
#5
(loop (#_fork))
Feb 2006
Cambridge, England
2^3×11×73 Posts
Quote:
Just for my logistic convenience, would you mind concatenating the relation files into slightly larger blocks before uploading? One file for 90-91 would be a good deal easier to handle than thirty.

Don't worry about the fluctuations in the timing: those timings look very close to what I'm seeing here. They do fluctuate a bit, for reasons ranging from the vicissitudes of the distribution of prime numbers, and of the skew of the lattices corresponding to the ideals, up to memory bandwidth.

Last fiddled with by fivemack on 2008-03-26 at 09:36
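Something along these lines does the concatenating (a hypothetical helper, not a tool from the thread; relation order doesn't matter, since duplicates are removed during filtering anyway):

```shell
# Concatenate every relation file whose name starts with a given prefix
# into a single bzip2'd file for upload.
# NB: pick an output name the prefix glob won't match, or a re-run will
# sweep the old archive into the new one.
bundle() {
  cat "$1"* | bzip2 > "$2"
}

# e.g.  bundle 3+512.90 upload-90M-91M.bz2
```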
#6
Jul 2003
So Cal
2^2·3^2·59 Posts
Quote:
Greg
#7
Jun 2005
lehigh.edu
1024_10 Posts
Quote:
there was an effect from another user. My problem with sieving has been that almost everything here is scheduled through condor, which I haven't yet figured out how to get to run sieving. The new machine is a replacement for an older sgi that had Itanium2 chips (and was often also over-busy, without me).

I'll cat things up (and gzip) as soon as I can track down where the job I killed left off, and where the restart started. I hadn't considered that the relations are in hex, which makes finding the last q that finished less transparent.

Another point of puzzlement: is there a factorbase stored somewhere? There doesn't appear to be one in my filespace (much less 30 separate copies, which would not have been good). I checked /tmp, but (it's a new machine and) there's hardly anything there. Condor would be much simpler (with a whole lot more cores) if it weren't for thinking about the factorbase.

-Bruce
#8
(loop (#_fork))
Feb 2006
Cambridge, England
6424_10 Posts
Quote:
Code:
for i in *; do j=`tail -n 2 $i | head -n 1 | awk -F, '{print $NF}'`; k=`echo "16i $j pq"|dc`; echo "File $i $k"; done
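Unpacking that one-liner: each relation line ends with a comma-separated list of hex primes, and the loop treats the last field of a file's second-to-last line as a stand-in for the last special-q the job finished (second-to-last, because the final line of a killed job may be truncated). A commented bash equivalent, as a hypothetical helper; `$((16#...))` does the same hex-to-decimal step as `16i ... pq | dc`:

```shell
# Report, in decimal, the last comma-separated field of a file's
# second-to-last line -- a hex value near the last special-q finished.
last_q() {
  local hex
  hex=$(tail -n 2 "$1" | head -n 1 | awk -F, '{print $NF}')
  echo $((16#$hex))   # bash base conversion, same as `echo "16i $hex pq" | dc`
}

# usage, mirroring the loop above:
#   for i in *; do echo "File $i $(last_q "$i")"; done
```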
Quote:
What are the properties that a job has to have to play nicely with condor?

Last fiddled with by fivemack on 2008-03-26 at 18:06
#9
(loop (#_fork))
Feb 2006
Cambridge, England
14430_8 Posts
Quote:
#10
Jul 2003
So Cal
2^2×3^2×59 Posts
Quote:
Greg
#11
Jun 2005
lehigh.edu
2^10 Posts
Quote:
Fortunately, they haven't yet gotten to the part of the installation where they disable ftp (which they do on the compute server this one is replacing; away from Itanium2, at last!). While I'm waiting to hear from Greg and/or Richard about current NFSNET plans, I'll take 91-93.

-Bruce

(fairly sure Lehigh won't be amused to have their new server grouped into Womack/mersenneforum; while we're listed explicitly in Sam's pages as ... well, some sort of share in nfsnet admin.)