mersenneforum.org (https://www.mersenneforum.org/index.php)
-   Cunningham Tables (https://www.mersenneforum.org/forumdisplay.php?f=51)
-   -   Distributed finishing for 2,1870L (https://www.mersenneforum.org/showthread.php?t=15249)

R.D. Silverman 2011-02-11 23:13

Distributed finishing for 2,1870L
 
1 Attachment(s)
It appears that I will need help to finish sieving 2,1870L. I just
don't have the resources.

I sent the relations that I had to Serge, but a lot more is needed.

I have already sieved special-q all the way to 452 million, and the yield rate
is dropping. I doubt whether my siever can gather enough relations.

Any volunteers?
__________________________

[COLOR=green]EDIT (S.B): Here are the instructions:[/COLOR]

* Save file <<t.poly>>
[CODE]# sieve with 16e -r from 90 to 120M, in ranges
# Command line: gnfs-lasieve4I16e -v -r t.poly -f $start -c 1000000
n: 16995692987522455651754339410455320150093771210144273643775083936188200124843949967119977515852759358871709763714726542633958784170913772900370407491298241066753915069723640845561
Y0: -196159429230833773869868419475239575503198607639501078529
Y1: 9903520314283042199192993792
skew: 2.0
c4: 1
c3: -2
c2: -6
c1: 12
c0: -4
type: snfs
lpbr: 31
lpba: 30
mfbr: 62
mfba: 60
rlambda: 2.55
alambda: 2.55
rlim: 120000000
alim: 16777215[/CODE]* Get a gnfs-lasieve4I16e (preferably a 64-bit linux binary). E.g. this one:
[ATTACH]6223[/ATTACH] (but if it won't work on your system, search for other binaries on the forum or build from source)
* Reserve a range here, in chunks of 1M (this will serve as $start in the command-line). Each 1M range will take ~1.5M CPU-seconds on a 3GHz 64-bit CPU, and will produce ~200MB of data after compression (~400MB plain)
* Run gnfs-lasieve4I16e -v -r t.poly -f $start -c 1000000
(or split in smaller ranges, -f controls start, -c controls length of the range; both plain numbers, no 'M's or 'e's)
* The memory requirement will be modest - 300-400MB per process
* Concatenate result files (they will have names t.poly.lasieve-1.<number>-<number>), gzip (or bzip, 7zip, tar cvz, etc) and post at sendspace, dropbox or for very large files PM Batalov for direct sftp login.
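The reserve/sieve/compress cycle above can be scripted. A sketch (the 90M start value is only illustrative; it prints the two commands rather than running them, so the range can be checked against the reservation table first - drop the echos to run for real):

```shell
#!/bin/sh
# One reserved 1M chunk: sieve it, then gzip the relations for upload.
# START must match the range reserved in this thread (90M here as an example).
START=90000000
LEN=1000000
END=$((START + LEN))
echo "./gnfs-lasieve4I16e -v -r t.poly -f $START -c $LEN"
echo "cat t.poly.lasieve-1.* | gzip > t.poly.rels.$START-$END.gz"
```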

Postprocessing will be done by Batalov.

Reservations:
[CODE]up to 450M R.D. Silverman (own siever) DONE 75M unique relations (119M raw)
------ free relations 3.657M
[COLOR=green]90-91M Batalov DONE 3.8M relns[/COLOR]
[COLOR=green]91-92M jrk DONE 3.98M relns[/COLOR]
[COLOR=green]92-94M jyb DONE 7.47M relns[/COLOR]
[COLOR=green]94-100M bsquared DONE 22.7M relns[/COLOR]
[COLOR=green]100-101M xilman DONE 3.78M relns[/COLOR]
[COLOR=green]101-102M fivemack DONE 3.84M relns[/COLOR]
[COLOR=green]102-103M xilman DONE 3.85M relns[/COLOR]
[COLOR=green]103-104M xilman DONE 3.85M relns[/COLOR]
[COLOR=green]104-110M bsquared DONE 22229942 unique, 175378 dup.[/COLOR]
[COLOR=green]110-112.4M fivemack DONE 9.38M relns[/COLOR]
[COLOR=green]112.4-113M fivemack DONE 2.35M relns[/COLOR]
[COLOR=green]113-114M bsquared DONE 3.87M relns[/COLOR]
#this lot should suffice[/CODE]

Batalov 2011-02-11 23:32

Here is a very brief digest of the existing relation set, that we've discussed and I will repost here:

* the [URL="http://www.mersenneforum.org/showthread.php?t=14069"]FB lims[/URL] are 14.5M / 86M and approximately 30/31-bit LP lims;
* usually that parameter set would yield a matrix with around 150M unique relations (possibly less, but a larger matrix);
* currently, there are 78,573,143 unique rels (with free rels included)
[FONT=Arial Narrow]=====remdups_out.txt=====
Found 78132102 unique, 44165546 duplicate, and 0 bad relations. (~122M raw relations)
[/FONT]* filtering is at this point now:
[FONT=Arial Narrow]Fri Feb 11 05:20:05 2011 reading all ideals from disk
Fri Feb 11 05:20:34 2011 memory use: 3042.4 MB
Fri Feb 11 05:21:02 2011 keeping 103773081 ideals with weight <= 200, target excess is 418036
Fri Feb 11 05:21:30 2011 commencing in-memory singleton removal
Fri Feb 11 05:21:53 2011 begin with 78573143 relations and 103773081 unique ideals
Fri Feb 11 05:22:57 2011 reduce to 33899 relations and 2 ideals in 11 passes
Fri Feb 11 05:22:57 2011 max relations containing the same ideal: 2
[/FONT][FONT=Verdana]* sieving on the other side will not help (this is a quartic), probably a 15e re-sieving (or even 16e?) will be needed; I can simulate, remdups with the existing set to estimate quasi-unique additional yields and will post later.[/FONT]
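The "singleton removal" passes in the log can be illustrated with a toy example: repeatedly drop any relation containing an ideal that occurs in no other relation, until nothing changes. Ideals here are just letters and each line is one made-up "relation" (real filtering, as in the log above, does the same over tens of millions of relations in memory):

```shell
# Toy singleton removal: relations {a b},{b c},{c a},{d e}.
# d and e each occur once, so {d e} is dropped; the cycle survives.
cat > r.txt <<'EOF'
a b
b c
c a
d e
EOF
prev=-1
while :; do
  n=$(wc -l < r.txt)
  [ "$n" -eq "$prev" ] && break
  prev=$n
  # ideals occurring exactly once across all relations
  tr ' ' '\n' < r.txt | sort | uniq -u > singles.txt
  grep -v -w -F -f singles.txt r.txt > r2.txt || true
  mv r2.txt r.txt
done
echo "reduce to $n relations"
```

The three relations that survive form a cycle, which is exactly the kind of structure that later becomes a matrix column.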

--Serge

R.D. Silverman 2011-02-11 23:49

[QUOTE=Batalov;252205]Here is a very brief digest of the existing relation set, that we've discussed and I will repost here:

* the [URL="http://www.mersenneforum.org/showthread.php?t=14069"]FB lims[/URL] are 14.5M / 86M and approximately 30/31-bit LP lims;
* usually that parameter set would yield a matrix with around 150M unique relations (possibly less, but a larger matrix);
* currently, there are 78,573,143 unique rels (with free rels included)
[FONT=Arial Narrow]=====remdups_out.txt=====
Found 78132102 unique, 44165546 duplicate, and 0 bad relations. (~122M raw relations)
[/FONT]* filtering is at this point now:
[FONT=Arial Narrow]Fri Feb 11 05:20:05 2011 reading all ideals from disk
Fri Feb 11 05:20:34 2011 memory use: 3042.4 MB
Fri Feb 11 05:21:02 2011 keeping 103773081 ideals with weight <= 200, target excess is 418036
Fri Feb 11 05:21:30 2011 commencing in-memory singleton removal
Fri Feb 11 05:21:53 2011 begin with 78573143 relations and 103773081 unique ideals
Fri Feb 11 05:22:57 2011 reduce to 33899 relations and 2 ideals in 11 passes
Fri Feb 11 05:22:57 2011 max relations containing the same ideal: 2
[/FONT][FONT=Verdana]* sieving on the other side will not help (this is a quartic), probably a 15e re-sieving (or even 16e?) will be needed; I can simulate, remdups with the existing set to estimate quasi-unique additional yields and will post later.[/FONT]

--Serge[/QUOTE]

Some additional data.

I was using a sieve area of 10K x 20K per special-q. Results
show that this was too small; the yield per q was too low.

Currently, for q near 450M, I am getting just under 4 relations/q.

Rather than proceed with sieving q > 450 million, it will probably be
better to resieve some of the smaller q.

I will finish sieving all q up to 450M this weekend.

Batalov 2011-02-12 00:56

For comparison, I found the sibling 2,1870M's logs (courtesy of B.Dodson's significant oversieving; long story short, it was easier to fire and forget than to stop at an intermediate point):

Pre-simmed recipe (with experimental use of 16e [strike]and 3LP[/strike]):
[QUOTE]Please sieve with 16e -r from 60 to 110-120M, expect 165M+ unique rels. After 110M, remdups and if already more than 170-180M unique rels, then stop, else add 110-120M.
[/QUOTE]

Result:
[QUOTE="bdodson"]On 2M1870, as usual, I didn't get a chance to pause and count, just ran the entire range 60M-120M, towards "expect 165M+ unique". That got me "Found 219,351,522 unique with 19,798,477 duplicates".
TD=100 really crushed this filtering job,
[FONT=Arial Narrow]Sun Nov 14 12:37:59 2010 matrix is 6023342 x 6023575 (2242.2 MB) with weight 565777368 (93.93/col)[/FONT]
[FONT=Arial Narrow]Sun Nov 14 12:37:59 2010 sparse part has weight 527556294 (87.58/col)[/FONT]
[FONT=Arial Narrow]...[/FONT]
[FONT=Arial Narrow]Sun Nov 14 12:38:13 2010 memory use: 2790.2 MB[/FONT]
[FONT=Arial Narrow]Sun Nov 14 12:38:57 2010 linear algebra at 0.0%, ETA 46h 6m [!!][/FONT]
[FONT=Arial Narrow]/and then it was done around ETA/[/FONT]
[/QUOTE]
16e was an overshoot, but the redundancy was very low as a result. It was a fun experiment.
Possibly 16e could be used again here, for finishing (these are virtually identical projects). I will sim over the weekend (I cannot sieve significantly; 4 Intel + 6 Phenom cores is all I've got, but I can sim).

[COLOR=darkgreen]EDIT: not 3LP. Here's what it was:[/COLOR]
[COLOR=darkgreen][code]# sieve with 16e -r from 60 to 110-120M, expect 165M+ unique rels
n: 1387312376442199554837407296900851895433665230080527991970122352522509034451214731923682531140863318446032709537489490131868927679840546823810213417373743367475664367890147487119660449174892741
Y0: -196159429230833773869868419475239575503198607639501078529
Y1: 9903520314283042199192993792
skew: 2.0
c4: 1
c3: 2
c2: -6
c1: -12
c0: -4
type: snfs
lpbr: 31
lpba: 30
mfbr: 62
mfba: 60
rlambda: 2.55
alambda: 2.55
rlim: 134000000
alim: 33554431[/code][/COLOR]

bsquared 2011-02-12 00:56

I can help. Batalov, will you be coordinating things?

Batalov 2011-02-12 01:09

Can do. If you would be willing to do it all, then you won't need sendspace - I'll open an sftp account for you to upload the results directly to the compute node.
Let me prepare one large workunit and post it here. You would need to be prepared for a few hundred core-days.

xilman 2011-02-12 01:28

I should be able to help. Please give me fairly clear instructions on what I need to do.


Paul

Batalov 2011-02-12 02:32

I will run tests, prepare a desired target range (and tentatively time it), and then post here. The setup will be very similar to distributed-project templates from the past. E.g. [URL="http://www.mersenneforum.org/showthread.php?t=14191"]like this[/URL]. In short, one command line, run many times on as many nodes as you have access to (or qsub'bed), then the results gzipped-or-bzipped-or-7zipped (your choice) and sendspace'd or (let's insert a plug for trolls here) [URL="http://www.mersenneforum.org/showthread.php?t=15234"]dropbox[/URL]'d.

[strike]I will sim now. Give me a few hours.[/strike]

[COLOR=green]Instructions posted in Post #1. [/COLOR]
[COLOR=green]Please reserve. Each 1M chunk will take 1.5M CPU seconds (~420 hours) on a 64-bit linux system with a 3GHz CPU, or ~630 hours on a 2GHz CPU, or twice as much on a 32-bit system.[/COLOR]
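The round figures above check out; a quick arithmetic sketch (the 2GHz number assumes runtime scales inversely with clock speed):

```shell
# 1.5M CPU-seconds per 1M-q chunk, converted to hours per core.
secs=1500000
h3=$((secs / 3600))           # at 3GHz, as benchmarked
h2=$((secs * 3 / 2 / 3600))   # at 2GHz, assuming time ~ 1/clock
echo "$h3 hours at 3GHz, $h2 hours at 2GHz"
```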

Batalov 2011-02-12 04:46

Tested (they work well with the existing set; the FB lims are slightly increased, so that we would get new relations even in the worst case). Posted.

Will delete reservation messages and record in post #1.
Estimate is ~600 core-days (+/- 50% depending on which CPUs come into play). With an estimated 50 cores participating, let's try to wrap it up in two weeks (so, please don't reserve a month's worth of work).

R.D. Silverman 2011-02-12 14:43

[QUOTE=Batalov;252228]Tested (they work well with the existing set; the FB lims are slightly increased, so that we would get new relations even in the worst case). Posted.

Will delete reservation messages and record in post #1.
Estimate is ~600 core-days (+/- 50% depending on which CPUs come into play). With an estimated 50 cores participating, let's try to wrap it up in two weeks (so, please don't reserve a month's worth of work).[/QUOTE]

I have a total of ~25 cores, most of them at night only. Would
you like me to keep sieving? (this is why it was taking so long!)

Batalov 2011-02-13 05:54

[QUOTE=R.D. Silverman;252253]I have a total of ~25 cores, most of them at night only. Would
you like me to keep sieving? (this is why it was taking so long!)[/QUOTE]
Yes, just allow for the time in the mail.
Thanks.

xilman 2011-02-13 10:52

Slight modification for a multi-core machine
 
I've a multi-core machine and will be running several instances in parallel in the same directory. The command given in the first post doesn't work too well in that environment, so I changed it to:
[code]#!/bin/sh
../gnfs-lasieve4I16e -v -r t.poly -f 100000000 -c 125000 -o t.poly.lasieve-1.100000000-100125000 &
../gnfs-lasieve4I16e -v -r t.poly -f 100125000 -c 125000 -o t.poly.lasieve-1.100125000-100250000 &
../gnfs-lasieve4I16e -v -r t.poly -f 100250000 -c 125000 -o t.poly.lasieve-1.100250000-100375000 &
../gnfs-lasieve4I16e -v -r t.poly -f 100375000 -c 125000 -o t.poly.lasieve-1.100375000-100500000 &
../gnfs-lasieve4I16e -v -r t.poly -f 100500000 -c 125000 -o t.poly.lasieve-1.100500000-100625000 &
../gnfs-lasieve4I16e -v -r t.poly -f 100625000 -c 125000 -o t.poly.lasieve-1.100625000-100750000 &
[/code]The other 250K special-q from my initial 1M block will be run in a similar fashion on a dual-core laptop.
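For what it's worth, the same splitting can be generated with a loop instead of hand-editing the lines. This sketch only prints the per-core command lines (pipe the output to sh, or remove the echo, to actually launch them); START/CHUNK/NPROC are the values from the script above:

```shell
#!/bin/sh
# Split a reserved block into NPROC chunks of CHUNK special-q each and
# print one siever command line per chunk.
START=100000000; CHUNK=125000; NPROC=6
i=0
while [ $i -lt $NPROC ]; do
  f=$((START + i * CHUNK))
  echo "../gnfs-lasieve4I16e -v -r t.poly -f $f -c $CHUNK -o t.poly.lasieve-1.$f-$((f + CHUNK)) &"
  i=$((i + 1))
done
```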


Paul

xilman 2011-02-13 11:00

Up and running on a[code]vendor_id : AuthenticAMD
cpu family : 16
model : 10
model name : AMD Phenom(tm) II X6 1090T Processor
stepping : 0
cpu MHz : 3780.456
cache size : 512 KB[/code]Seem to be getting about 0.78 sec/rel on each processor.

Curiously enough, I'm getting almost the same performance from a 2.13GHz Core2 Duo P7540 laptop running Win7-64. Perhaps these are just small-number statistics or perhaps I need to see what needs tweaking on the AMD Linux box.

Paul

xilman 2011-02-13 18:53

[QUOTE=xilman;252332]Up and running on a[code]vendor_id : AuthenticAMD
cpu family : 16
model : 10
model name : AMD Phenom(tm) II X6 1090T Processor
stepping : 0
cpu MHz : 3780.456
cache size : 512 KB[/code]Seem to be getting about 0.78 sec/rel on each processor.

Curiously enough, I'm getting almost the same performance from a 2.13GHz Core2 Duo P7540 laptop running Win7-64. Perhaps these are just small-number statistics or perhaps I need to see what needs tweaking on the AMD Linux box.

Paul[/QUOTE]After seven hours, which ought to be long enough to get credible numbers, the AMD is averaging 0.771 \pm 0.005 sec/rel and the Intel 0.813 \pm 0.02 sec/rel.

The ratio of these rates is 1.05 but the ratio of the clock frequencies is 1.77 so the AMD is significantly less efficient here. Perhaps I should check compilation options on the Linux siever.
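The two quoted ratios, computed directly from the figures above (clocks taken as 3780.456 MHz and 2130 MHz):

```shell
# Rate ratio (Intel sec/rel over AMD sec/rel) vs clock ratio (AMD over Intel).
ratios=$(awk 'BEGIN { printf "rate ratio %.2f, clock ratio %.2f", 0.813/0.771, 3780.456/2130 }')
echo "$ratios"
```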

Paul

bsquared 2011-02-13 19:51

[QUOTE=xilman;252368]After seven hours, which ought to be long enough to get credible numbers, the AMD is averaging 0.771 \pm 0.005 sec/rel and the Intel 0.813 \pm 0.02 sec/rel.

The ratio of these rates is 1.05 but the ratio of the clock frequencies is 1.77 so the AMD is significantly less efficient here. Perhaps I should check compilation options on the Linux siever.

Paul[/QUOTE]

L1_BITS (settable at compile time) may be set to 15, which is optimal for the core2 but not the AMD. I don't know if that would be enough to explain the entire difference though.
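For reference, the connection to the hardware (assuming, as the name suggests, that L1_BITS sizes the sieve block to 2^L1_BITS bytes): the two settings under discussion match the per-core L1 data caches of the two CPUs, 32 KB on the Core2 and 64 KB on the Phenom II (K10).

```shell
# Sieve block sizes implied by the two L1_BITS settings.
b15=$((1 << 15))   # 32768 bytes = 32 KB, Core2 L1d per core
b16=$((1 << 16))   # 65536 bytes = 64 KB, Phenom II (K10) L1d per core
echo "L1_BITS=15 -> $b15 bytes; L1_BITS=16 -> $b16 bytes"
```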

Batalov 2011-02-13 20:00

I have posted my own L1_bits=16 binary in the top message - it may be better for AMD. Paul, your binary seems to be a bit slow (maybe non-asm?). Give this one a try. I get 0.30-0.31 s/rel on a similar 1090T.

When building from source, use the src/experimental/lasieve4_64/ (well you know that)

xilman 2011-02-13 21:36

[QUOTE=Batalov;252377]I have posted my own L1_bits=16 binary in the top message - it may be better for AMD. Paul, your binary seems to be a bit slow (maybe non-asm?). Give this one a try. I have 0.30-0.31s/rel on a similar 1090T.

When building from source, use the src/experimental/lasieve4_64/ (well you know that)[/QUOTE]Yes, that is markedly better, thank you. Even after a few seconds the rate is around 0.36 s/rel, and that is still influenced by the set-up time, including the creation of the factor bases.

I'll kill off the currently running sievers and continue from where they finished.

(Any chance of you providing comparable builds of gnfs-lasieve4I1[1-5]e please? I'm currently fighting my way through the oft-times depressing difficulties of building anything from the Franke/Kleinjung sources. If it helps, I can provide sftp to my machine and/or ssh access to you for building on this system.)

Many thanks!

Paul

Batalov 2011-02-13 21:48

Will do (as long as the first one runs on your system; the usual showstopper is glibc compatibility). If you can find Tom's binary in the forum, that one is L1_bits=15.

R.D. Silverman 2011-02-14 14:24

[QUOTE=Batalov;252318]Yes, just allow for time in mail.
Thanks.[/QUOTE]

I have another 5 million relations to send. Let me know when
you want me to send you my data. I am gathering about 5M relations/week.

Batalov 2011-02-14 18:38

It is hard to predict yet, but there's most probably two weeks to go here (or more); so let's get back to this question after one week?

xilman 2011-02-14 18:54

[QUOTE=Batalov;252482]It is hard to predict yet, but there's most probably two weeks to go here (or more); so let's get back to this question after one week?[/QUOTE]If it aids the ETA calculation, something over 1.35M relations have already turned up here in around 1 day effective computation (effective because I changed to a much more efficient siever on the faster machine 21 hours ago despite having started 32 hours ago).

Paul

bsquared 2011-02-14 19:04

I put my first chunk of data on the sftp site: 11386995 relations collected since Saturday evening.

I'll be proceeding at a somewhat slower rate of ~ 3Mrels per day.

Batalov 2011-02-14 19:58

Yep, thanks!
Internal redundancy is very low (this is a moot sanity test, but anyway: in this small range, there were only 50k self-redundant rels); and the incremental redundancy, i.e. 78M old + these, was "88402184 unique, 1507519 duplicate", which looks very good. 86.7% of the relations are 'new'.
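The 86.7% figure follows from the counts above plus the batch size from the previous post, taking the ~50k internal duplicates at face value:

```shell
# New unique relations contributed by the 11386995-relation batch.
old=78573143; merged=88402184; batch=11386995; selfdup=50000
new=$((merged - old))
pct=$(awk -v n=$new -v b=$((batch - selfdup)) 'BEGIN { printf "%.1f", 100 * n / b }')
echo "$new new unique of $((batch - selfdup)) batch-unique relations: $pct% new"
```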

R.D. Silverman 2011-02-14 22:10

[QUOTE=Batalov;252491]Yep, thanks!
Internal redundancy is very low (this is a moot sanity test, but anyway: in this small range, there were only 50k self-redundant rels); and the incremental redundancy, i.e. 78M old + these, was "88402184 unique, 1507519 duplicate", which looks very good. 86.7% relations are 'new'.[/QUOTE]

I will send a CD early next week with ~10M more relations.

____________________
[COLOR=green]SB: Very good. If you'd like, you could try a small range with KF - and you will have both types of relations (and burn them, too)![/COLOR]

xilman 2011-02-15 18:37

A proposal
 
This post is really addressed to Bob but it's of general interest (IMAO, anyway) so it is a post and not a PM. The title was going to be "A modest proposal" but I'm serious.

Bob has recently announced that the Cunningham NFS factorizations have finally exceeded the limit of his resources. (They exceeded mine some time back, which is why I've not had much impact recently.)

Bob has also accepted assistance from others when his resources were inadequate at the time. I've run several Lanczos for him, for instance.

Proposal: anyone who is interested helps out Bob with the sieving and/or the LA for his choice of Cunningham factorization. No guarantees and all work is done on a best-efforts basis.

As with the current effort on 2,1870L factorization, I'll throw in some computrons. If, in return, Bob (or anyone else with wimpish machines not up to state of the art sieving) would like to point an ECMNET client at my GCW server I would be (a) grateful and (b) give appropriate acknowledgement. Acceptance of that offer is independent of my main proposal. By "wimpish", most anything built in the current millennium is included.

Paul

R.D. Silverman 2011-02-15 18:45

[QUOTE=xilman;252593]This post is really addressed to Bob but it's of general interest (IMAO, anyway) so it is a post and not a PM. The title was going to be "A modest proposal" but I'm serious.

Bob has recently announced that the Cunningham NFS factorizations have finally exceeded the limit of his resources. (They exceeded mine some time back, which is why I've not had much impact recently.)

Bob has also accepted assistance from others when his resources were inadequate at the time. I've run several Lanczos for him, for instance.

Proposal: anyone who is interested helps out Bob with the sieving and/or the LA for his choice of Cunningham factorization. No guarantees and all work is done on a best-efforts basis.

As with the current effort on 2,1870L factorization, I'll throw in some computrons. If, in return, Bob (or anyone else with wimpish machines not up to state of the art sieving) would like to point an ECMNET client at my GCW server I would be (a) grateful and (b) give appropriate acknowledgement. Acceptance of that offer is independent of my main proposal. By "wimpish", most anything built in the current millennium is included.

Paul[/QUOTE]

I can run independent ECM jobs, but they can not connect to an
external server. I plan on running ECM on the 2LM numbers when
2,1870L finishes.

I may also finish up the homogeneous Cunninghams to exponent 200
via NFS. They are not high priority.

xilman 2011-02-15 19:36

[QUOTE=R.D. Silverman;252595]I can run independent ECM jobs, but they can not connect to an
external server. I plan on running ECM on the 2LM numbers when
2,1870L finishes.

I may also finish up the homogeneous Cunninghams to exponent 200
via NFS. They are not high priority.[/QUOTE]That's fine by me. The suggestion that some ECM work for me as a quid pro quo was made to anyone wishing to help you out, not just to you.

Paul

(Added in edit: a factor of 986*9^986+1 appeared in the last few seconds!)

R.D. Silverman 2011-02-15 19:42

[QUOTE=xilman;252602]That's fine by me. The suggestion that some ECM work for me as a quid pro quo was made to anyone wishing to help you out, not just to you.

Paul

(Added in edit: a factor of 986*9^986+1 appeared in the last few seconds!)[/QUOTE]

Note that Bruce just found a factor of 2,932+ that is 3 digits shorter than
his previous factor (of the same number, natch)!

Andi47 2011-02-15 21:35

[QUOTE=xilman;252602]That's fine by me. The suggestion that some ECM work for me as a quid pro quo was made to anyone wishing to help you out, not just to you.

Paul

(Added in edit: a factor of 986*9^986+1 appeared in the last few seconds!)[/QUOTE]

This factor has not made it to the [URL="http://www.factordb.com/index.php?id=1100000000042008355&scan=3"]factor database[/URL] yet. (I assume that you have [I]not[/I] meant the p13, which is the biggest factor of this number in the DB)

Batalov 2011-02-15 22:10

[QUOTE=xilman;252602]That's fine by me. The suggestion that some ECM work for me as a quid pro quo was made to anyone wishing to help you out, not just to you.
[/QUOTE]
Paul, has anyone run 43000 curves at B1=260e6 on Woodall 951?

xilman 2011-02-16 09:52

[QUOTE=Batalov;252615]Paul, has anyone run 43000 curves at B1=260e6 on Woodall 951?[/QUOTE]Not as far as I know. The ecmserver.ini record for this number is:
[code]W_951 N 18101159423518357666828255177479109538365804182759476871194099689081082431482170358290624858698812024459501904953261022348573575276969028199796222497643229300730947446598839553529533129630939491374047682419407574822021491459083058737118733981076145112889022659672203022021130838051387342847
W_951 P 1210279207,217567,0,active,nolocalcontrol,recurse
W_951 B 2000 522:0 3:0 1:0
W_951 B 50000 300:0 3:0 1:0
W_951 B 250000 610:0 3:0 1:0
W_951 B 1000000 900:0 3:0 1:0
W_951 B 3000000 2437:0 3:0 1:0
W_951 B 11000000 238:0 3:0 1:0
W_951 B 43000000 0:0 3:0 1:0
W_951 B 110000000 0:0 14:0 1:0[/code] which may underestimate the amount of ECM work done but not by a large amount. It does seriously underestimate the amount of P +/- 1 work --- both have been run to B1=1G with whatever gmp-ecm chooses by default for B2.

At a guess, there are very likely no factors to be found smaller than p40.

Paul

[COLOR=green]Good, I ran about 4000 of these so far. --SB[/COLOR]

xilman 2011-02-16 09:54

[QUOTE=Andi47;252614]This factor has not made it to the [URL="http://www.factordb.com/index.php?id=1100000000042008355&scan=3"]factor database[/URL] yet. (I assume that you have [I]not[/I] meant the p13, which is the biggest factor of this number in the DB)[/QUOTE]I make no effort to add factors to that database so if anything gets in there it's by the activity of others.

Paul

em99010pepe 2011-02-16 10:13

Paul,

What's the ecmnet server address for the GCW?

Best regards,

Carlos

xilman 2011-02-16 11:08

[QUOTE=em99010pepe;252664]Paul,

What's the ecmnet server address for the GCW?

Best regards,

Carlos[/QUOTE]83.217.167.177:8194

If you connect, please do so at most a few times a day per client. Too many clients connecting too often can really screw my ADSL line. I speak from bitter experience when someone screwed up their ecmclient.cfg and configured dozens of clients to connect every five minutes. :sad:

Thanks for the implied offer of assistance with these numbers.

Paul

em99010pepe 2011-02-16 11:14

Last time I tested, the client crashed several times depending on the size of the number. Tomorrow I'll point a few cores to it.

Carlos

xilman 2011-02-16 11:28

Simple c/s for NFS
 
1 Attachment(s)
I've just grabbed another 2M range of special-q and updated the sticky 1st post to match.

This time I'll be using the cabald/cabalc harness to control the sievers on my home LAN. The complete source code is in the attached tarball, as is the directory structure and config files suitable for a 6-core machine running Linux. Nothing restricts it to Linux systems, however, and it has been used on sundry versions of Windows and various Unix-alikes such as DEC OSF/1 and its successors, Solaris, FreeBSD and MacOS.

The cabald/cabalc structure has served me well for around ten years now, most recently for the RSA-768 project. As it says in the README:[quote]This software may be used for any purpose and the source code may be freely redistributed and re-used in other code, in part or as a whole. There is no warranty whatsoever. If it breaks anything, you get to keep the pieces.[/quote]Paul

R.D. Silverman 2011-02-16 13:00

[QUOTE=R.D. Silverman;252467]I have another 5 million relations to send. Let me know when
you want me to send you my data. I am gathering about 5M relations/week.[/QUOTE]

I am having shoulder surgery on 2/24 to remove some bone spurs
and repair my rotator cuff.

I will stop sieving next Tuesday 2/22 and send all the data collected.
It will have about another 10 million relations.

I will not yet be ready to switch to ECM, so I will set up to do
one of the homogeneous first holes.

bdodson 2011-02-16 15:46

[QUOTE=R.D. Silverman;252603]Note that Bruce just found a factor of 2,932+ that is 3 digits shorter than
his previous factor (of the same number, natch)![/QUOTE]

With Serge's gnfs polynomial the c144 factors as p60*p85; so that was
p56*p59*p60*p85 on this Most Wanted first hole.

On ECM applied to 2LM's, I count fewer than 20 numbers unreserved
below C200. I'm targeting these with t55 to start (that would be the
second t55, since they're below c233), maybe a 3rd to 3t55. About
half of these are 2LM's, so you might want to start above C199, or else
target p60-p65. Hope the surgery goes well. -Bruce

(Two primes this morning, a Proth and a SophieGermain, the latter just over
2M digits; both top5000.)

axn 2011-02-16 16:30

[QUOTE=bdodson;252691]
(SophieGermain, the latter just over
2[B]M[/B] digits; both top5000.)[/QUOTE]

Considering that the current SG record is just under 80[B]K[/B] digits, I'm assuming there's something amiss in that statement :shock:

EDIT:- You're talking about this: [url]http://primes.utm.edu/primes/page.php?id=98494[/url]?

bdodson 2011-02-16 18:06

[QUOTE=axn;252694]Considering that the current SG record is just under 80[B]K[/B] digits, I'm assuming there's something amiss in that statement :shock:

EDIT:- You're talking about this: [url]http://primes.utm.edu/primes/page.php?id=98494[/url]?[/QUOTE]

Yes, I'm having trouble counting digits. That's 200K? Still too large;
just under 80K is correct. Too large for a twin prime too; looks like
this was a "twin prime candidate" --- checking k*2^n -1 for which
the four numbers k*2^n-1, k*2^n+1, k*2^(n-1)-1 and k*2^(n+1)-1
have no small factors. Hmm. A top5000 prime that failed to give a
huge twin _and_ two chances at a SG. Software appears to be
David Underbakke, with page [url]http://www.underbakke.com/primes/[/url].

MMmmph. PrimeGrid ran the quadsieve, with 34M candidates left
to check, with probability of one or more SG 66.7% and prob of one
or more twin 42.3%. Just to be clear, the largest Twin is just over
100K (loc. cit.). So a top5000 prime with 2.5* the number of digits
of the largest SG and twice as many digits as the largest Twin. Called
a SG prime search since the primes found are somewhat more likely to
be SG than they are to be Twins.

Thanks for the clarification (much needed, clearly). I just switched a few
32-bit machines from Proth searching to "SG" searching, without reading
the fine print. -Bruce

bsquared 2011-02-17 05:05

I'll take 104-110. Should be able to get a good chunk of that done this coming weekend.


_______
[COLOR=green]94-99 lookin' good! Thanks <S>[/COLOR]

Batalov 2011-02-18 10:18

The last two test filterings are getting close to the 'cusp of convergence':

[FONT=Arial Narrow][SIZE=1]Thu Feb 17 00:00:08 2011 begin with 97388326 relations and 112987209 unique ideals[/SIZE][/FONT]
[SIZE=1][FONT=Arial Narrow]Thu Feb 17 00:06:53 2011 reduce to 34234805 relations and 38590875 ideals in 30 passes[/FONT][/SIZE]
[SIZE=1][FONT=Arial Narrow]Thu Feb 17 00:06:53 2011 max relations containing the same ideal: 101[/FONT][/SIZE]
[SIZE=1][FONT=Arial Narrow]...[/FONT][/SIZE]
[FONT=Arial Narrow][SIZE=1]Thu Feb 17 15:19:13 2011 start with 105704669 relations and 116950376 ideals[/SIZE][/FONT]
[SIZE=1][FONT=Arial Narrow]Thu Feb 17 15:21:06 2011 pass 1: found 41677055 singletons[/FONT][/SIZE]
[SIZE=1][FONT=Arial Narrow]Thu Feb 17 15:22:25 2011 pruned dataset has 64027614 relations and 66597695 large ideals[/FONT][/SIZE]
[SIZE=1][FONT=Arial Narrow]Thu Feb 17 15:22:25 2011 reading all ideals from disk[/FONT][/SIZE]
[SIZE=1][FONT=Arial Narrow]Thu Feb 17 15:22:58 2011 memory use: 2492.2 MB[/FONT][/SIZE]
[SIZE=1][FONT=Arial Narrow]Thu Feb 17 15:23:24 2011 keeping 66365826 ideals with weight <= 200, target excess is 347864[/FONT][/SIZE]
[SIZE=1][FONT=Arial Narrow]Thu Feb 17 15:23:49 2011 commencing in-memory singleton removal[/FONT][/SIZE]
[SIZE=1][FONT=Arial Narrow]Thu Feb 17 15:24:13 2011 begin with 64027614 relations and 66365826 unique ideals[/FONT][/SIZE]
[SIZE=1][FONT=Arial Narrow]Thu Feb 17 15:29:47 2011 reduce to 46064462 relations and 47326729 ideals in 19 passes[/FONT][/SIZE]
[SIZE=1][FONT=Arial Narrow]Thu Feb 17 15:29:47 2011 max relations containing the same ideal: 168[/FONT][/SIZE]
[SIZE=1][FONT=Arial Narrow]...[/FONT][/SIZE]
so maybe with Ben[SUP]2[/SUP]'s after-weekend ~20-26M relations we'll converge; let's target next mid-week for gathering the stones?

__________
[SIZE=1][COLOR=navy]Ecc 3:5: A time to cast away stones, and a time to gather stones together; a time to embrace, and a time to refrain from embracing.[/COLOR][/SIZE]

R.D. Silverman 2011-02-18 11:10

[QUOTE=Batalov;252904]The last two test filterings are getting close to the 'cusp of convergence':

[FONT=Arial Narrow][SIZE=1]Thu Feb 17 00:00:08 2011 begin with 97388326 relations and 112987209 unique ideals
Thu Feb 17 00:06:53 2011 reduce to 34234805 relations and 38590875 ideals in 30 passes
Thu Feb 17 00:06:53 2011 max relations containing the same ideal: 101
...[/SIZE][/FONT]
[FONT=Arial Narrow][SIZE=1]Thu Feb 17 15:19:13 2011 start with 105704669 relations and 116950376 ideals
Thu Feb 17 15:21:06 2011 pass 1: found 41677055 singletons
Thu Feb 17 15:22:25 2011 pruned dataset has 64027614 relations and 66597695 large ideals
Thu Feb 17 15:22:25 2011 reading all ideals from disk
Thu Feb 17 15:22:58 2011 memory use: 2492.2 MB
Thu Feb 17 15:23:24 2011 keeping 66365826 ideals with weight <= 200, target excess is 347864
Thu Feb 17 15:23:49 2011 commencing in-memory singleton removal
Thu Feb 17 15:24:13 2011 begin with 64027614 relations and 66365826 unique ideals
Thu Feb 17 15:29:47 2011 reduce to 46064462 relations and 47326729 ideals in 19 passes
Thu Feb 17 15:29:47 2011 max relations containing the same ideal: 168
...[/SIZE][/FONT]
so perhaps with Ben[SUP]2[/SUP]'s post-weekend ~20-26M relations we'll converge; let's target next mid-week for gathering the stones?

__________
[SIZE=1][COLOR=navy]Ecc 3:5: A time to cast away stones, and a time to gather stones together; a time to embrace, and a time to refrain from embracing.[/COLOR][/SIZE][/QUOTE]

I'll send another ~10M early next week (via snail mail)

xilman 2011-02-18 11:42

Iacta alea est
 
[QUOTE=Batalov;252904]so maybe with Ben[SUP]2[/SUP]'s after-weekend ~20-26M relations we'll converge, so let's target next mid-week for gathering the stones?[/QUOTE]I've just uploaded another batch, which concludes 100M-101M. Eight cores are still sieving away, though the two slowest will be stopped some time tomorrow and their results uploaded. (I'm off on a week-long business trip again on Sunday, and remote access to that machine is not possible.)

A quick count of the relations uploaded so far reveals a total of 4948958, or a tad under 5M. That's a raw count, of course, and doesn't account for duplicates within that set or overlaps with the relations found by other workers.

Paul

fivemack 2011-02-18 17:40

Running 110M-112.4M - new 24-thread workstation at work needs a burn-in test

jrk 2011-02-18 17:50

My meager 91-92M range is 82% complete, and I'll upload it on Sunday when it finishes. It will have about 4M relations.

Batalov 2011-02-18 19:36

Sounds good, everyone.
_________

[COLOR=green]Update: Yesterday, the filtering of 111M uniq rels almost converged (went into cliques, but then didn't find enough cycles; that's the pre-cusp); today, with Tom's and Paul's morning additions, at 116M, we have a horrible cusp 12M matrix (that's with target density 100). By Wednesday, we'll do much better. Probably not a 6M (cf. 1870M), but an 8-9M matrix will make sense.[/COLOR]

Batalov 2011-02-21 02:38

2,1870L [URL="http://mersenneforum.org/showthread.php?p=253218#post253218"]is now[/URL] the fifth hole! :w00t:

jrk 2011-02-21 02:46

[QUOTE=jrk;252935]My meager 91-92M range is 82% complete, and I'll upload it on Sunday when it finishes. It will have about 4M relations.[/QUOTE]
Done.
[url=http://jaysonking.com/files/2,1870L-91M-92M.bz2]2,1870L-91M-92M.bz2[/url] (3979667 relations)

fivemack 2011-02-21 12:09

110-112.4 is done, but upload bandwidth from work is very narrow (32kb/sec) so it won't be fully uploaded for about four hours.

R.D. Silverman 2011-02-21 12:42

[QUOTE=Batalov;252952]Sounds good, everyone.
_________

[COLOR=green]Update: Yesterday, the filtering of 111M uniq rels almost converged (went into cliques, but then didn't find enough cycles; that's the pre-cusp); today, with Tom's and Paul's morning additions, at 116M, we have a horrible cusp 12M matrix (that's with target density 100). By Wednesday, we'll do much better. Probably not a 6M (cf. 1870M), but an 8-9M matrix will make sense.[/COLOR][/QUOTE]

I will drop another ~10M relations in snail mail to you later today.
You should get them in ~2days.

Our firewall prevents a direct transfer of the data.

fivemack 2011-02-21 12:44

Running 112.4 - 113.0 on the 24-thread machine; should be done tomorrow morning.

bsquared 2011-02-21 14:19

48 nice'd jobs ran over the weekend. What I have now is a hodgepodge of progress: 16 jobs are finished, many others are close, and several are lagging quite a bit behind. Since Bob's relations will take a couple of days to get there, I'll just let things continue. So far, I have just north of 21M relations.

I'll take 113-114 for tonight.

xilman 2011-02-21 20:10

Another hour to go on the range 102-104M then I'm finished. Uploading the final batch of results shortly ...

Be nice to get back to the GCW numbers full time.


Paul

smh 2011-02-21 20:16

[QUOTE=R.D. Silverman;253272]I will drop another ~10M relations in snail mail to you later today.
You should get them in ~2days.

Our firewall prevents a direct transfer of the data.[/QUOTE]Just wondering, can't you send them from home? Even a slow ADSL would probably take only a couple of hours and would save the trouble of copying the files to CD.

Maybe it's time to store relations in a more efficient format, or to write a separate utility that compresses them into something more suitable than *zip?

Batalov 2011-02-21 20:36

Yes,
You can take the CD home, and from home the [URL="http://www.chiark.greenend.org.uk/~sgtatham/putty/download.html"]PuTTY[/URL] psftp.exe (or, on linux, sftp) will get the files across; I'll send you the instructions by PM.
(Especially since the US post has a day off today, I believe)
--Serge

jrk 2011-02-21 21:07

[QUOTE=smh;253299]Maybe it's time to store relations in a more efficient format or a separate utility to compress them in a more suitable format then *zip?[/QUOTE]
One could just store the a,b coordinates for each relation, which would compress down to about 8.7 bytes per relation with bzip2. Or, you could squeeze it down further by recording the lattice basis vectors for each (special-q, root) pair and then storing the i,j lattice coordinates for each relation instead, which I guess would require about half the storage. "Decompressing" would be slow in either case.

This might make it viable to send through email, if that was the only available method of transmission, though 10M relations will still be quite large.
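As a rough illustration of the first idea (a toy sketch only, not any existing tool; the coordinates below are made up), keeping just the a,b pairs and bzip2-compressing them looks like this; the factor lists would have to be recomputed on "decompression":

```python
import bz2
import random

def compress_ab_pairs(pairs):
    # serialize each relation as its a,b coordinates only, one per line,
    # then bzip2 the whole text
    text = "\n".join(f"{a},{b}" for a, b in pairs)
    return bz2.compress(text.encode("ascii"))

def decompress_ab_pairs(blob):
    # exact inverse of compress_ab_pairs; factoring each relation again
    # is the slow part that would follow in a real pipeline
    lines = bz2.decompress(blob).decode("ascii").splitlines()
    return [tuple(map(int, line.split(","))) for line in lines]

# made-up coordinates in a plausible sieving range (illustration only)
random.seed(1)
pairs = [(random.randint(-2**34, 2**34), random.randint(1, 2**30))
         for _ in range(100000)]
blob = compress_ab_pairs(pairs)
print(f"{len(blob) / len(pairs):.1f} bytes per relation")
```

The achievable bytes-per-relation depends heavily on how the coordinates are distributed, so the figure printed here is only indicative.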

fivemack 2011-02-22 01:42

[QUOTE=jrk;253303]One could just store the a,b coordinates for each relation, which would compress down to about 8.7 bytes per relation with bzip2. Or, you could squeeze it down further by recording the lattice basis vectors for each (special-q, root) pair and then storing the i,j lattice coordinates for each relation instead, which I guess would require about half the storage.[/QUOTE]

You don't need to record the vectors while you're sieving; you can recover them by rolling lattice-basis reduction ... keep the LLL of the last three x,y pairs; most of the time you'll be able to write the next one in terms of that basis, and otherwise you output it in the clear.
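For the lattice-coordinates variant, here is a toy sketch (assumed helper names, not the siever's actual code or format): Gauss/Lagrange-reduce a 2D basis, then express a point a,b in it by Cramer's rule:

```python
def gauss_reduce(v1, v2):
    # Lagrange/Gauss reduction of a 2D lattice basis: returns a basis of
    # the same lattice consisting of two short vectors
    while True:
        if v1[0]**2 + v1[1]**2 > v2[0]**2 + v2[1]**2:
            v1, v2 = v2, v1
        m = round((v1[0]*v2[0] + v1[1]*v2[1]) / (v1[0]**2 + v1[1]**2))
        if m == 0:
            return v1, v2
        v2 = (v2[0] - m*v1[0], v2[1] - m*v1[1])

def to_lattice_coords(a, b, v1, v2):
    # solve (a,b) = i*v1 + j*v2 over the integers via Cramer's rule;
    # returns None if (a,b) is not a point of the lattice
    det = v1[0]*v2[1] - v1[1]*v2[0]
    i_num, j_num = a*v2[1] - b*v2[0], v1[0]*b - v1[1]*a
    if det == 0 or i_num % det or j_num % det:
        return None
    return i_num // det, j_num // det

# toy basis standing in for a special-q lattice; in a real sieve region
# the recovered i,j stay small, which is what saves the storage
v1, v2 = gauss_reduce((7, 2), (3, 5))
print(v1, v2, to_lattice_coords(19, -7, (7, 2), (3, 5)))
```

Reduction keeps the lattice (and hence |det|) unchanged, so coordinates recovered in either basis describe the same relation.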

bsquared 2011-02-22 18:35

113-114 is done and is uploading.

Batalov 2011-02-24 08:00

[QUOTE=R.D. Silverman;252673]I am having shoulder surgery on 2/24 to remove some bone spurs
and repair my rotator cuff.[/QUOTE]
We wish you the best of luck with the procedure, Bob, and a fast recovery!

The ETA for this project is late night Friday, don't worry about it.

Batalov 2011-02-26 01:28

The cofactor is a product of a p75 and a p105 and will appear on page 120 as Silverman+mersenneforum snfs. Thanks, everyone!

Batalov 2011-02-26 10:24

1 Attachment(s)
P.S. Here's the log (with a few filtering attempts at different points, and with a finely articulated cusp at 12.0M[SUP]2[/SUP]; the good matrix is 7.6M[SUP]2[/SUP], with a roughly 4 times shorter ETA).

jasonp 2011-02-27 17:04

I must be getting spoiled; this is the first big run I've seen in a long time that had to perform singleton removal from disk files. Apparently the estimate of the RAM needed was just past the switchover point (half of total RAM).

Batalov 2011-02-27 21:39

Yeah, it works great (not everyone has 32GB of RAM :rolleyes:).
Here's a fragment from 3,610+'s log
...
[FONT=Arial Narrow]Tue Jan 4 08:12:25 2011 commencing duplicate removal, pass 1
Tue Jan 4 08:42:56 2011 found 10205937 hash collisions in 216381785 relations
Tue Jan 4 08:43:59 2011 commencing duplicate removal, pass 2
Tue Jan 4 08:47:58 2011 found 21 duplicates and 216381765 unique relations
Tue Jan 4 08:47:58 2011 memory use: 756.8 MB
Tue Jan 4 08:47:58 2011 reading ideals above 720000
Tue Jan 4 08:47:58 2011 commencing singleton removal, initial pass
Tue Jan 4 09:31:56 2011 memory use: 5512.0 MB
Tue Jan 4 09:31:56 2011 removing singletons from LP file
Tue Jan 4 09:31:56 2011 start with 216381765 relations and 182327021 ideals
Tue Jan 4 09:35:35 2011 pass 1: found 44887504 singletons
Tue Jan 4 09:37:07 2011 pass 2: found 8615107 singletons
Tue Jan 4 09:38:34 2011 pass 3: found 1627313 singletons
Tue Jan 4 09:40:02 2011 pass 4: found 290576 singletons
Tue Jan 4 09:42:51 2011 pruned dataset has 160961265 relations and 122666568 large ideals
Tue Jan 4 09:42:51 2011 reading all ideals from disk
Tue Jan 4 09:43:57 2011 memory use: 6481.4 MB
Tue Jan 4 09:44:54 2011 keeping 121915663 ideals with weight <= 200, target excess is 867008
Tue Jan 4 09:45:59 2011 commencing in-memory singleton removal
Tue Jan 4 09:46:46 2011 begin with 160961265 relations and 121915663 unique ideals
Tue Jan 4 09:53:03 2011 reduce to 160900260 relations and 121854650 ideals in 7 passes
...[/FONT]
Great stuff (and the results are identical to those from a large-memory machine), compared to filtering from ideals with, say, weight <= 40 as was done in the past. Many thanks!
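The repeated "pass N: found X singletons" lines in logs like the one above come from iterating one simple rule; a hedged Python sketch of in-memory singleton removal (a toy model, not msieve's actual code):

```python
from collections import Counter

def remove_singletons(relations):
    # repeatedly drop relations containing an ideal that occurs in only
    # one relation; each pass can create new singletons, hence the loop
    relations = list(relations)
    npasses = 0
    while True:
        counts = Counter(ideal for rel in relations for ideal in rel)
        keep = [rel for rel in relations if all(counts[i] > 1 for i in rel)]
        npasses += 1
        if len(keep) == len(relations):
            return relations, npasses
        relations = keep

# toy dataset: dropping {4,5} (ideal 5 occurs once) leaves ideal 4
# as a new singleton, so {3,4} falls in the next pass
rels, npasses = remove_singletons([{1, 2}, {2, 3}, {3, 1}, {3, 4}, {4, 5}])
print(len(rels), npasses)  # → 3 3
```

The loop stabilizes exactly as in the logs: the final pass finds nothing to remove and the "reduce to ... in N passes" line reports the pass count.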

