mersenneforum.org  

Old 2008-03-24, 20:39   #1
fivemack
(loop (#_fork))
 
 
Feb 2006
Cambridge, England

3^512+1

This is quite a large SNFS number with a decidedly awkward polynomial (no real roots, no algebraic roots modulo primes ==7 or ==11 mod 12, two roots modulo primes ==5 mod 12, six roots modulo some primes ==1 mod 12), and so the number of usable special-Q in a particular range is really rather low.

So this one does require the 32768x16384 per-Q sieving range that gnfs-lasieve4I15e offers, and parameter optimisation indicates that it does want 31-bit large primes and very large small primes. Feel free to go smaller (reduce rlim and alim) if you can't face ~400MB per core: 40M gets about 65% of the yield of 80M, but the yield drops off very fast below that.

Code:
n: 2652879528384736294387787089866884113161756949676609780113021980279955578028580515829763316598420245173034168388765124717208315443806148182904105317960270313646866242717807445467423472021744641
skew: 1
c6: 9
c0: 1
Y1: -1
Y0: 35917545547686059365808220080151141317043
rlim: 80000000
alim: 80000000
lpbr: 31
lpba: 31
mfbr: 62
mfba: 62
rlambda: 2.6
alambda: 2.6
A range of one million Q will take about a week on a core2-2400 and produce about 2.3 million relations; we'll need somewhere between 180M and 200M, so around two core2 years.
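The construction behind this poly file is 3^512 + 1 = 9·(3^85)^6 + 1, so m = 3^85 is a common root of 9x^6 + 1 and -x + Y0 modulo n. As a sanity check (an editorial sketch, not part of the original post), the values can be verified with arbitrary-precision integers:

```python
# Sanity-check the SNFS polynomial above: n is the number from the poly
# file, and Y0 should be 3^85, a common root of 9x^6+1 and -x+Y0 mod n.
n = 2652879528384736294387787089866884113161756949676609780113021980279955578028580515829763316598420245173034168388765124717208315443806148182904105317960270313646866242717807445467423472021744641
Y0 = 35917545547686059365808220080151141317043

assert Y0 == 3**85                  # rational root m = 3^85
assert 9 * Y0**6 + 1 == 3**512 + 1  # c6*m^6 + c0 reconstructs 3^512+1
assert (3**512 + 1) % n == 0        # so n divides the algebraic value at m
print("poly file is consistent")
```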

Upload in the same way as the 2^1188+1 thread http://www.mersenneforum.org/showthread.php?t=10003; please call your files something like 3+512.27M-27.3M.bz2 so I don't confuse them with the 2^1188+1 ones. Not sure how much the sieving periods will overlap.

Reservations (between 40M and 125M, please)
**andi47 40-41 (3593082)
**andi47 41-42 (3507498)
andi47 42-43
bsquared 43-50
**wraithX 50-60 (33605887)
fivemack 67-72
**fivemack 72-80 (24015345 = 6047723+6049878+5985699+5932045)
**fivemack 80-82 (3104511+3040339)
**fivemack 82-90 (23116337 = 5827643+5795180+5791581+5701933)
**bdodson 90-91 (with gaps: 1431138 relations collected)
**bdodson 91-92 (no visible gaps; 2929210 relations)
**bdodson 92-93 (2962600)
**bsquared 93-94 (2878996)
**bsquared 94-96 (5816329)
**bsquared 96-97 (2886020)
**bsquared 97-98 (2873406)
**bsquared 98-101 (8550484)
**bsquared 101-115 (38580970)
**bsquared 115-120 (13328671)
**bsquared 120-123 (7915518)
**bdodson 123-124 (2636395)
**bsquared 124-125 (2615423)

relation counts ('large ideals' are >2500000)
Code:
02/04/2008 00:06  16212022 relations,  16088425 unique relations and about 38580610 large ideals
11/04/2008 00:15  42926881 relations,  42092537 unique relations and about 63275041 large ideals
29/04/2008 22:15 189611855 relations, 172703615 unique relations and about 96169746 large ideals
  weight of 11436921 cycles is about 743426050 (65.00/cycle)

Last fiddled with by fivemack on 2008-04-30 at 17:24 Reason: add some reported uploads
Old 2008-03-25, 09:36   #2
Andi47
 
 
Oct 2004
Austria


reserving 40M - 41M, using rlim = 80M and alim = 40M

Edit: Memory use approx. 251 MB

Last fiddled with by Andi47 on 2008-03-25 at 09:41
Old 2008-03-26, 01:32   #3
bdodson
 
 
Jun 2005
lehigh.edu


Quote:
Originally Posted by fivemack View Post
Reservations
andi47 40-41
fivemack 80-82
reserving 90-91. There's something like 3.5 GB/core, so I'm using the
defaults. If I understand correctly, 10 ranges

Code:
./gnfs-lasieve4I15e -a 512.poly -f 90000000 -c 10000 -o 3+512.90M-90.01M
./gnfs-lasieve4I15e -a 512.poly -f 90010000 -c 10000 -o 3+512.90.01M-90.02M
./gnfs-lasieve4I15e -a 512.poly -f 90020000 -c 10000 -o 3+512.90.02M-90.03M
will do the first 100000, then 20 ranges with -c 45000 will do the other
900000. With 2 cpus left to fiddle with B1 = 850M. -Bruce
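Splitting a 1M block of special-q into -f/-c ranges like this is mechanical, so the thirty command lines can be generated instead of typed. A sketch, reusing the file-naming scheme from the commands above:

```python
# Generate gnfs-lasieve4I15e command lines covering special-q 90M..91M:
# ten 10K ranges first, then twenty 45K ranges, as described above.
def ranges(start, chunks):
    """Yield (first_q, count) pairs; chunks is a list of (count, how_many)."""
    q = start
    for count, how_many in chunks:
        for _ in range(how_many):
            yield q, count
            q += count

cmds = []
for q, c in ranges(90_000_000, [(10_000, 10), (45_000, 20)]):
    out = f"3+512.{q / 1e6:g}M-{(q + c) / 1e6:g}M"
    cmds.append(f"./gnfs-lasieve4I15e -a 512.poly -f {q} -c {c} -o {out}")

print(cmds[0])   # first 10K range
print(cmds[10])  # first 45K range
```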
Old 2008-03-26, 02:57   #4
bdodson
 

Quote:
Originally Posted by bdodson View Post
... 10 ranges

Code:
./gnfs-lasieve4I15e -a 512.poly -f 90000000 -c 10000 -o 3+512.90M-90.01M
./gnfs-lasieve4I15e -a 512.poly -f 90010000 -c 10000 -o 3+512.90.01M-90.02M
./gnfs-lasieve4I15e -a 512.poly -f 90020000 -c 10000 -o 3+512.90.02M-90.03M
do the first 100000, then 20 ranges do the other
900000. With 2 cpus left to fiddle with B1 = 850M. -Bruce
Not quite. top reports 28 cpus at 99%-100%, with the last two jobs
at 49% and 51%. Looks like one of the quads has 3 cores reserved
for the "head node". The timings seem to pick up. After a bit more
than an hour, nine of the 1st ten report

Code:
total yield: 14634, q=90015257 (0.26097 sec/rel)
total yield: 14422, q=90024421 (0.26437 sec/rel)
total yield: 14855, q=90035773 (0.26035 sec/rel)
total yield: 14589, q=90046001 (0.26246 sec/rel)
total yield: 14244, q=90055529 (0.26608 sec/rel)
total yield: 14754, q=90064421 (0.25815 sec/rel)
total yield: 14621, q=90074389 (0.25899 sec/rel)
total yield: 14192, q=90085753 (0.27056 sec/rel)
total yield: 14655, q=90095561 (0.25817 sec/rel)
But they started out slower; here's one with a late start:

total yield: 4095, q=90101129 (0.31718 sec/rel)

All of the initial timings were noticeably above 0.3, but after ten
minutes ... hmmm, looks like I'll do better dropping one of the 10K's,
then pick it up later. -Bruce
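Aggregating those "total yield" reports is easy to script. A sketch that parses the nine lines quoted above and computes the combined rate:

```python
import re

# Aggregate the nine "total yield" reports quoted above.
log = """\
total yield: 14634, q=90015257 (0.26097 sec/rel)
total yield: 14422, q=90024421 (0.26437 sec/rel)
total yield: 14855, q=90035773 (0.26035 sec/rel)
total yield: 14589, q=90046001 (0.26246 sec/rel)
total yield: 14244, q=90055529 (0.26608 sec/rel)
total yield: 14754, q=90064421 (0.25815 sec/rel)
total yield: 14621, q=90074389 (0.25899 sec/rel)
total yield: 14192, q=90085753 (0.27056 sec/rel)
total yield: 14655, q=90095561 (0.25817 sec/rel)
"""

pat = re.compile(r"total yield: (\d+), q=(\d+) \(([\d.]+) sec/rel\)")
rows = [(int(y), int(q), float(s)) for y, q, s in pat.findall(log)]

rels = sum(y for y, _, _ in rows)
secs = sum(y * s for y, _, s in rows)  # reconstruct elapsed time per job
print(f"{rels} relations so far, {secs / rels:.3f} sec/rel overall")
```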
Old 2008-03-26, 09:35   #5
fivemack

Quote:
Originally Posted by bdodson
reserving 90-91
Excellent; I was wondering whether your enormous CPU resources were in a position to be used for lattice sieving, and clearly they are.

Just for my logistic convenience, would you mind concatenating the relation files into slightly larger blocks before uploading? One file for 90-91 would be a good deal easier to handle than thirty.

Don't worry about the fluctuations in the timing: those timings look very close to what I'm seeing here. They do fluctuate a bit, for reasons ranging from the vicissitudes of the distribution of prime numbers, and of the skew of the lattices corresponding to the special-q ideals, through to memory bandwidth.

Last fiddled with by fivemack on 2008-03-26 at 09:36
Old 2008-03-26, 17:35   #6
frmky
 
 
Jul 2003
So Cal


Quote:
Originally Posted by bdodson View Post
reserving 90-91. There's something like 3.5Gb/core, so I'm using the
defaults. If I understand correctly, 10 ranges

Code:
./gnfs-lasieve4I15e -a 512.poly -f 90000000 -c 10000 -o 3+512.90M-90.01M
./gnfs-lasieve4I15e -a 512.poly -f 90010000 -c 10000 -o 3+512.90.01M-90.02M
./gnfs-lasieve4I15e -a 512.poly -f 90020000 -c 10000 -o 3+512.90.02M-90.03M
will do the first 100000, then 20 ranges with -c 45000 will do the other
900000. With 2 cpus left to fiddle with B1 = 850M. -Bruce
I created an MPI version (making the fewest possible changes) for sieving 12,241- on the Lonestar cluster. I can package and send you the source if you like. It still records each range to a separate file, but it's easy to cat them together at the end. The only real advantage is that you have one command to run rather than one per processor. The caveat is that the lattice siever occasionally gets stuck on a special-q; this happened twice across 170M q's in the sieving of 12,241-, and it stalls all of the other processes while they wait for the stuck one to complete.

Greg
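A different way to dodge the stuck-special-q stall (not what the MPI patch does; purely an editorial sketch with hypothetical range sizes and file names) is a dynamic work queue: each worker pulls the next small -f/-c range when it finishes, so a siever stuck on one special-q delays only its own range rather than the whole job:

```python
import queue
import subprocess
import threading

# Hypothetical dynamic dispatcher: workers pull 10K-q ranges from a
# shared queue, so one siever stuck on a bad special-q never blocks
# the other workers.
def make_jobs(start=90_000_000, end=91_000_000, step=10_000):
    jobs = queue.Queue()
    for q in range(start, end, step):
        jobs.put((q, step))
    return jobs

def worker(jobs, run=subprocess.run):
    while True:
        try:
            q, c = jobs.get_nowait()
        except queue.Empty:
            return  # queue drained; this worker is done
        out = f"3+512.{q}-{q + c}"
        run(["./gnfs-lasieve4I15e", "-a", "512.poly",
             "-f", str(q), "-c", str(c), "-o", out])

def sieve(n_workers=4, run=subprocess.run):
    jobs = make_jobs()
    threads = [threading.Thread(target=worker, args=(jobs, run))
               for _ in range(n_workers)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()

# Dry run with a stub in place of the real siever binary:
calls = []
sieve(n_workers=4, run=lambda cmd: calls.append(cmd))
print(f"{len(calls)} ranges dispatched")
```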
Old 2008-03-26, 17:43   #7
bdodson
 

Quote:
Originally Posted by fivemack View Post
Excellent; I was wondering whether your enormous CPU resources were in a position to be used for lattice sieving, and clearly they are.

Just for my logistic convenience, would you mind concatenating the relation files into slightly larger blocks before uploading? One file for 90-91 would be a good deal easier to handle than thirty.
All but one range finished in 12 hrs; that one took an extra 30 min, perhaps
because of another user's job. My problem with sieving has been
that almost everything here is scheduled through Condor, which I haven't yet
figured out how to get to run sieving. The new machine is a replacement
for an older SGI that had Itanium2 chips (and was often also over-busy,
without me).

I'll cat things up (and gzip) as soon as I can track down where the job
I killed left off and where the restart began. I hadn't considered that the
relations are in hex, which makes finding the last q that finished less
transparent.

Another point of puzzlement: is there a factorbase stored somewhere?
Doesn't appear to be one in my filespace (much less 30 separate copies,
which would not have been good). I checked /tmp, but (it's a new
machine and) there's hardly anything there. Condor would be much
simpler (with a whole lot more cores) if it weren't for thinking about the
factorbase. -Bruce
Old 2008-03-26, 18:04   #8
fivemack

Quote:
Originally Posted by bdodson View Post
I'll cat things up (and gzip) as soon as I can track down where the job
I killed left off, and the reset started. I hadn't considered that the
relns are in hex, which makes finding the last q that finished less
transparent.
If you're sieving on the algebraic side, the special-q is always the last hex number on the line:
Code:
for i in *; do j=`tail -n 2 $i | head -n 1 | awk -F, '{print $NF}'`; k=`echo "16i $j pq"|dc`; echo "File $i $k"; done
will list them nicely. If you add '| grep -v "9[89]..$"' to the end, it'll do a fairly good job of listing ones that didn't run to the end of their 10000-section.
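The same check can be done without dc. A Python sketch, assuming (as the one-liner does) that the special-q is the final comma-separated hex field of a relation line; for simplicity it reads the last non-empty line rather than the second-to-last:

```python
# Report the last special-q written to a relation file, assuming the
# special-q is the final comma-separated hex field of the last line.
# (The shell version above reads the second-to-last line instead, in
# case the final line was truncated mid-write.)
def last_special_q(lines):
    complete = [line for line in lines if line.strip()]
    if not complete:
        return None
    last_field = complete[-1].rsplit(",", 1)[-1]
    return int(last_field, 16)  # int() tolerates the trailing newline

# Hypothetical relation lines (format for illustration only, not real
# siever output):
demo = [
    "-1234,567:a1b2,c3d4:e5f6,55e4f67\n",
    "-2345,678:b2c3,d4e5:f6a7,55e5003\n",
]
print(last_special_q(demo))
```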

Quote:
Another point of puzzlement: is there a factorbase stored somewhere?
No, the factorbase is not saved to disc; it's regenerated from scratch each time you start the siever. This takes less than a minute and generally my sieve jobs last between three days and a week.

What are the properties that a job has to have to play nicely with condor?

Last fiddled with by fivemack on 2008-03-26 at 18:06
Old 2008-03-26, 18:08   #9
fivemack

Quote:
Originally Posted by frmky View Post
I created an MPI version (making the least possible changes) for sieving 12,241- on the Lonestar cluster
The Lonestar cluster seems quite an exciting machine: http://www.tacc.utexas.edu/services/...ides/lonestar/ suggests 10^4 2.66GHz processors. How many were you able to use for the sieving, what small and large primes did you use, and how long did it take?
Old 2008-03-26, 21:41   #10
frmky
 

Quote:
Originally Posted by fivemack View Post
The Lonestar cluster seems quite an exciting machine: http://www.tacc.utexas.edu/services/...ides/lonestar/ suggests 10^4 2.66GHz processors. How many were you able to use for the sieving, what small and large primes did you use, and how long did it take?
The cluster is very busy, but I was able to use up to 400 processors at a time. I used fb limits of 70M on each side and 31-bit large primes. I sieved q from 40M to 200M using gnfs-lasieve4I15e on the rational side. The sieving took a total of a bit over 30,000 CPU hours. This took about 2 weeks real time mainly because I had to figure out how to use the cluster, MPI-ize the siever, and deal with the cluster going down a few times during those two weeks due to file system issues.

Greg
Old 2008-03-27, 09:33   #11
bdodson
 

Quote:
Originally Posted by fivemack View Post
...
Upload in the same way as the 2^1188+1 thread http://www.mersenneforum.org/showthread.php?t=10003; ...

Reservations
andi47 40-41
fivemack 80-82
bdodson 90-91
The anonymous server doesn't seem to accept sftp for anonymous@...
Fortunately, they haven't yet gotten to the part of the installation
where they disable ftp (which they do on the compute server this one
is replacing; away from Itanium2, at last!).

While I'm waiting to hear from Greg and/or Richard about current
NFSNET plans, I'll take 91-93. -Bruce

(fairly sure Lehigh won't be amused to have their new server grouped
into Womack/mersenneforum; while we're listed explicitly in Sam's
pages as ... well, some sort of share in nfsnet admin.)