mersenneforum.org

mersenneforum.org (https://www.mersenneforum.org/index.php)
-   Factoring (https://www.mersenneforum.org/forumdisplay.php?f=19)
-   -   5^421-1 sieving (reservations closed) (https://www.mersenneforum.org/showthread.php?t=10495)

fivemack 2008-07-21 17:30

5^421-1 sieving (reservations closed)
 
The .poly file is

[code]
n: 856747567165509732120757534754047466186338557004916269612235956443230634437193149247989845404816349224130750819510131856088353227722964229252203600736528648205287332264407802189199
skew: 1287906.31
c5: 5767280580
c4: -12525357190853198
c3: -23832471548163085528052
c2: 16917954271985164657592650551
c1: 14831413561315809659995278076458170
c0: -5682335232801110703624206006414844672400
Y1: 46531574816134967669
Y0: -10823715308709192191932199683059669
lpbr: 31
lpba: 32
mfbr: 62
mfba: 64
rlambda: 2.6
alambda: 2.6
rlim: 80000000
alim: 150000000
[/code]

The rational-side yield is much lower than the algebraic-side yield, so I think it makes sense to sieve on the algebraic side only. 31-bit rational primes give something like 40% more relations per Q and need about 20% more relations than 30-bit rational primes would; the R and A factor-base limits are about right.

A million-Q range should take about ten CPU-days to sieve (using the 64-bit ggnfs executable on a Core2/2400) and produce a bit over a million relations; we want about 400 million, so there's lots of reservation space. Use gnfs-lasieve4I15e; the jobs take about ten minutes to get started, and use 900M of virtual and 400M of physical memory (450M under Windows).
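For anyone sizing a reservation, the figures above reduce to a couple of lines of arithmetic. A minimal sketch; the 1.1M relations per million Q is my reading of "a bit over a million", so treat it as an assumption:

```python
# Rough reservation arithmetic from the figures above: ~10 CPU-days and
# ~1.1M raw relations per million-Q range on one Core2/2400 core,
# against a 400M-relation target.
CPU_DAYS_PER_MQ = 10.0
RELS_PER_MQ = 1.1e6      # assumed reading of "a bit over a million"
TARGET_RELS = 400e6

def wall_days(range_mq, cores):
    """Wall-clock days for `cores` cores to sieve `range_mq` million Q."""
    return range_mq * CPU_DAYS_PER_MQ / cores

def mq_needed():
    """Million Q needed for the 400M-relation target, ignoring duplicates."""
    return TARGET_RELS / RELS_PER_MQ

print(wall_days(4, 4))     # a 4M-Q reservation on a quad-core: ~10 days
print(round(mq_needed()))  # ~364 million Q before duplicate losses
```

So a million per available core, refreshed as ranges finish, keeps each reservation to about ten days of wall-clock time.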

[i](12 Sep 2008)[/i] Please make reservations in the 40M-80M region before carrying on from 328M
[i](24 Sep 2008)[/i] [b][COLOR="Red"]Not taking any more reservations; this lot should suffice![/COLOR][/b]
[b]Reservations[/b]

[code]
bsquared 40M-53M (done 30/09)
batalov 53M-55M (done 27/09)
bsquared 55M-75M (done 25/09)
fivemack 75M-90M (done 26/09)
smh 90M-100M
fivemack 100M-120M (done 14/09)
smh 120M-124M (done 07/09)
fivemack 124M-130M (done 02/09)
smh 130M-136M (done 31/08)
andi47 136M-140M
smh 140M-145M (done 07/08)
frmky 145M-150M (done 02/08)
fivemack 150M-162M (done 02/08)
FactorEyes 162M-170M (done 12/09)
andi47 170.03M-172.27M (done 01/09)
fivemack 172.27M-174M (done 13/09)
Batalov 174M-180M (done 19/09)
bsquared 180M-200M (done 31/07)
bsquared 200M-220M (done 08/08)
fivemack 220M-236M (done 18/08)
bsquared 236M-256M (done 18/08)
fivemack 256M-268M (done 30/08)
bsquared 268M-288M (done 27/08)
bsquared 288M-308M (done 10/09)
bsquared 308M-328M (done 19/09)
bsquared 328M-338M (done 24/09)
fivemack 338M-340M (done 29/09)
[/code]

[b]Ranges that I have[/b]
[code]
40M-92M 100M-136M 140M-340M
[/code]

jasonp 2008-07-21 19:20

[QUOTE=fivemack;138112]
A million-Q range should take about ten CPU-days to sieve and produce a bit over a million relations; we want about 400 million, so there's lots of reservation space[/QUOTE]
With these parameters, your job will be the first test of msieve with 32-bit large primes. The entire library is supposedly '32-bit clean', but I expect postprocessing trouble commensurate with the epic size of this job. Did your 8G machine ever run the lanczos solver reliably?

fivemack 2008-07-21 21:36

I managed to run a small job (29-bit large primes) to completion with four threads on that 8G machine at one point, but I've had no luck getting the machine to misbehave on anything other than msieve, or reliably enough to be diagnosed when running msieve. So I'm contemplating getting another machine; I've got the money in the bank, but I'll wait until the sieving has finished in the hope that Nehalem exists by then.

[there's a 16GB 8-core Penryn machine at work, but it's the fileserver, and running jobs that use most of it for months will not make me popular]

Andi47 2008-07-22 04:33

Nehalem
 
[QUOTE=fivemack;138129]So I'm contemplating getting another machine; I've got the money in the bank, but I'll wait until the sieving has finished in the hope that Nehalem exists by then.

[there's a 16GB 8-core Penryn machine at work, but it's the fileserver, and running jobs that use most of it for months will not make me popular][/QUOTE]

Hmmm... The Nehalem (Lynnfield) processor has been postponed from Q1 2009 to Q3 2009 since I [URL="http://en.wikipedia.org/wiki/Nehalem_%28microarchitecture%29"]looked[/URL] the last time (a few months ago). The Bloomfield processor (more powerful, but it consumes more energy: 130 W) is due in Oct. 2008.

Edit: For a longer-term test (a few days up to a few weeks) on your big machine you can possibly double-check M100,000,007 using prime95 version 25.6 (multithreaded). There are several interim residues posted in [URL="http://www.mersenneforum.org/showthread.php?t=2224&highlight=100000007&page=3"]this thread[/URL], beginning at ~iteration #12M, and further every few million iterations.

Andi47 2008-07-22 09:38

[QUOTE=fivemack;138112]Use gnfs-lasieve4I15e; the jobs take about ten minutes to get started, and use 900M of virtual and 400M of physical memory.
[/QUOTE]

In Windows it takes ~450MB of physical memory.

FactorEyes 2008-07-23 00:00

How long for the sieving on this one?
 
If we assume that 11 volunteer sievers step up with an average of 36 million Q values apiece, that's 1 core-year (=360 core-days) per volunteer to get the sieving done. I'm assuming all have the 64-bit optimized 4I15e siever running, which is optimistic. At 4 cores per volunteer, that's 3 months' running time.

Obviously there are the odd free cycles from clusters, which will likely do 1/3 of the work, so we're down to, say, 2 solid months from each volunteer with 4 cores available.

Over the next 45 days I can easily finish my commitment of 8 million Q values, and I'll toss in more if they're needed, but I doubt I could sieve more than 20 million Q in six months. Give me a year, and I could toss in at least 30 million.

It seems as if 4-5 months of sieving is the bare minimum here, unless there is substantial enthusiasm for this project. I think it's worth doing, especially since the linear algebra will push things a bit.

I'm trying to get an idea of how to prioritize my current stack of fun integers. Do my ideas about the timeline sound about right?

fivemack 2008-07-23 07:38

[QUOTE=FactorEyes;138192]If we assume that 11 volunteer sievers step up with an average of 36 million Q values apiece, that's 1 core-year (=360 core-days) per volunteer to get the sieving done. I'm assuming all have the 64-bit optimized 4I15e siever running, which is optimistic. At 4 cores per volunteer, that's 3 months' running time.

Obviously there are the odd free cycles from clusters, which will likely do 1/3 of the work, so we're down to, say, 2 solid months from each volunteer with 4 cores available.[/QUOTE]

That's much the same timescale that I was anticipating, though I suspect the average cores per volunteer is probably a bit over four given that there are people who sieve using idle cycles at work. I wouldn't recommend anyone making a reservation now that takes more than about six weeks to finish; I've spoken for a million per available core and will get another million per core as and when those finish, I think that's about the right strategy though obviously it's worth reserving more if you know you want the machine to run unattended over a summer vacation.

[quote]It seems as if 4-5 months of sieving is the bare minimum here, unless there is substantial enthusiasm for this project. I think it's worth doing, especially since the linear algebra will push things a bit.[/quote]

There seems to be reasonable enthusiasm - bsquared has a lot of compute resources, and it's only been 36 hours since I put the message up - though the people with the enormous clusters are committed to a long list of large SNFS jobs.

Andi47 2008-07-23 10:10

[QUOTE=fivemack;138211]...obviously it's worth reserving more if you know you want the machine to run unattended over a summer vacation.
[/QUOTE]

That's exactly what I do: I will be on summer vacation for the whole of August, so I reserved a range which should be finished near the end of August / beginning of September. (I hope it's OK if I don't finish before the first week of September.)

Shortly before I leave for vacation I will reserve a second range (for the second thread of the Core2 Duo). I can't start this right now, because it would take ~90-92% of the available memory and thus make it nearly impossible to work at this computer.

bsquared 2008-07-23 19:41

To be fair, I personally have very small compute resources; one lowly Athlon 4200+ X2, to be precise. The compute resources I have *access* to are quite a bit more substantial. Which reminds me that I'd like to make sure Sam Wagstaff is informed of that fact so the various universities can be acknowledged in the Cunningham book. Hopefully that's possible.

I'm also grateful to Tom for organizing efforts like these, otherwise said resources would be woefully underutilized. I can manage to launch a few scripts to aid projects which have already been skillfully researched and organized, but a 3-year-old and a 5-month-old tend to otherwise occupy any time not spent working or sleeping :)

Looking forward to seeing this one through!

- ben.

Batalov 2008-07-26 04:43

[quote=fivemack;138112]A million-Q range should take about ten CPU-days to sieve (using the 64-bit ggnfs executable on a Core2/2400) and produce a bit over a million relations[/quote]
I get about 1.36M rels from my interval (I guess you are counting the future non-redundant rels).
The file will be about 150MB in size, and after "bzip2 -9", [B]72MB[/B].
Please remind me where we should upload them and what the secret city is this time. :whistle:

Serge

fivemack 2008-07-26 20:45

[code]
ftp chiark.greenend.org.uk
user: anonymous pass: your email address
cd special/twomack-relations/malmo
put 150M-151M.bz2
[/code]

I have moved across the Oresund from Denmark to Sweden for my secret city this time.

smh 2008-07-26 22:51

Does someone have a 64 bit windows executable of gnfs-lasieve4I15e available?

Batalov 2008-07-30 02:14

[SIZE=2]"You Always Burn the First Pancake"
Catch it or duck! :smile:

(which is to say that 175M-176M.bz2 is planted in Malmo.)
[/SIZE]

bsquared 2008-07-30 15:00

[quote=FactorEyes;138578]Heh. The bottleneck for you may be getting them uploaded, which may take more time than computing them.[/quote]

:smile:

A good problem to have, IMO.

Right now the cluster is kinda busy, so I'm only using about 20% of the available CPUs (it will never be 100%, but hopefully 60-80% for some period of time).

p.s. Even this appears to be small potatoes compared to what certain other cluster wielders can bring.

R.D. Silverman 2008-07-30 15:32

[QUOTE=FactorEyes;138578]Heh. The bottleneck for you may be getting them uploaded, which may take more time than computing them.[/QUOTE]

There is an old quote:

Don't underestimate the bandwidth of a Volkswagen bus filled with mag tapes...

fivemack 2008-07-30 15:41

The postal service is about 1Mbit/second if you stick six DVDs in the package ...

bsquared 2008-07-30 15:58

A VW bus packed full of dual-layer DVD's would be about 2.4GB/s, from where I live to Cambridge, England, neglecting time to load/retrieve data to/from the disks, and assuming I haven't botched something in the estimate.

Although I'm not sure how I'd negotiate the Atlantic Ocean in said VW bus...

Also not sure if I'd be able to generate the 691000 GB of data to fill the disks required to achieve that number. At least not with meaningful information :smile:
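The 2.4 GB/s figure can be reproduced with a back-of-envelope calculation. The per-disc capacity below is a standard dual-layer DVD; the disc count and transit time are my guesses, picked only to show how a ~691,000 GB load and a multi-day trip land in that ballpark — they are not numbers from the post:

```python
# Back-of-envelope sneakernet bandwidth for the DVD-filled VW bus.
# GB_PER_DISC is a dual-layer DVD; `discs` and `transit_seconds` are
# hypothetical values chosen to match the ~691,000 GB / ~2.4 GB/s
# figures quoted above.
GB_PER_DISC = 8.5                 # dual-layer DVD capacity
discs = 81_300                    # hypothetical busload
transit_seconds = 3.3 * 86_400    # hypothetical ~3.3-day trip to Cambridge

capacity_gb = discs * GB_PER_DISC
bandwidth_gb_s = capacity_gb / transit_seconds
print(round(capacity_gb), round(bandwidth_gb_s, 1))  # 691050 2.4
```

As the thread notes, the load/unload time at each end is the part this conveniently ignores.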

xilman 2008-07-30 16:19

[QUOTE=R.D. Silverman;138581]There is an old quote:

Don't underestimate the bandwidth of a Volkswagen bus filled with mag tapes...[/QUOTE]The version I heard had "panel truck". The fact that we both heard it with "mag tapes" shows how old we are.

When transferring relations, matrices, etc., between home and work I use a 4G memory stick. It takes perhaps 2x30 minutes to load/unload the stick and another 30 minutes in transit on my motorbike. The peak bandwidth is thus 4e9/90/60 bytes/sec, or 740kB/s. My ADSL link is 60kB/s in one direction and 30kB/s in the other. Sneakernet is thus at least 12 times as fast as the interweb thingy, peaking at 25 times. Latency is, admittedly, rather poor.
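Those ratios check out exactly as stated:

```python
# xilman's sneakernet arithmetic: a 4 GB stick, ~90 minutes door to door,
# versus a 60/30 kB/s ADSL link.
stick_bytes = 4e9
trip_seconds = 90 * 60

sneakernet_kb_s = stick_bytes / trip_seconds / 1000
print(round(sneakernet_kb_s))            # ~741 kB/s peak
print(round(sneakernet_kb_s / 60))       # ~12x the faster ADSL direction
print(round(sneakernet_kb_s / 30))       # ~25x the slower direction
```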

Unfortunately, the lab where I've been using idle cycles on 20 dual-core machines has been shut down for 6-9 weeks while the entire room is rebuilt. My processing resource has taken a substantial hit until September.

Paul

smh 2008-07-30 21:00

[QUOTE=xilman;138586]The peak bandwidth is thus 4e9/90/60 bytes/sec, or 740kB/s. [/QUOTE] Over a short distance. Bandwidth would be much lower over longer distances.[QUOTE=xilman;138586]My ADSL link is 60kB/s in one direction and 30kB/s in the other. Sneakernet is thus at least 12 times as fast as the interweb thingy, peaking at 25 times.[/QUOTE]That is something like 512 Kbps / 256 Kbps ADSL? I just did a quick check and I can't even find anything slower than 1500/256 Kbps over here. 20/1 Mbps is quite common and I can't imagine it's much different in the UK.

xilman 2008-07-31 04:25

[QUOTE=smh;138602]Over a short distance. Bandwidth would be much lower over longer distances. That is something like 512 Kbps / 256 Kbps ADSL? I just did a quick check and I can't even find anything slower than 1500/256 Kbps over here. 20/1 Mbps is quite common and I can't imagine it's much different in the UK.[/QUOTE]Fair enough, but remember that I can ship 1TB (256 sticks) for the same transport cost as 4GB.

Yes 512/256 ADSL. I live sufficiently far from the exchange that that's all British Telecom will provide to my ISP. Cable doesn't go past my house. :sad:

However, ADSL2 is coming "soon" (according to BT) and there's a good chance that I'll be able to get several times the present bandwidth. I'm not sure it will catch up with single-stick sneakernet, though, at least as long as I want to ship several gigabytes badly enough.

fivemack 2008-07-31 09:10

[QUOTE=Batalov;138545][SIZE=2]"You Always Burn the First Pancake"
Catch it or duck! :smile:

(which is to say that 175M-176M.bz2 is planted in Malmo.)
[/SIZE][/QUOTE]

Got it, unburnt.

I'm fleeing the country on Saturday, wandering round Scandinavia for a couple of weeks and returning on August 16th, so will let stuff pile up on the server until then.

fivemack 2008-08-16 21:27

I'm back from Scandinavia. The electricians seem to have replaced the outside light in my house without turning the main power off, so the 12M range that I left running on two Q6600s has finished; there's 8M on machines at work which may or may not have finished.

I've updated the list of what I've transferred over; I'm running msieve tonight to see what the figures look like. I ran it a fortnight ago after collecting the first batch of bsquared's relations and got

[code]
Sat Aug 2 00:23:02 2008 restarting with 43137861 relations
Sat Aug 2 00:29:25 2008 found 3827072 hash collisions in 42684435 relations
Sat Aug 2 00:29:52 2008 found 1057916 duplicates and 41626519 unique relations
Sat Aug 2 00:30:02 2008 filtering rational ideals above 20119552
Sat Aug 2 00:30:02 2008 filtering algebraic ideals above 20119552
Sat Aug 2 00:36:40 2008 41626519 relations and about 68058770 large ideals
[/code]

which is a very promisingly low duplication rate.

[code]
Sat Aug 16 22:40:00 2008 restarting with 100793240 relations
Sat Aug 16 22:56:58 2008 found 16716495 hash collisions in 97700659 relations
Sat Aug 16 23:00:28 2008 found 5228460 duplicates and 92472199 unique relations
Sat Aug 16 23:01:39 2008 filtering rational ideals above 156041216
Sat Aug 16 23:01:39 2008 filtering algebraic ideals above 156041216
Sat Aug 16 23:01:39 2008 need 26290614 more relations than ideals
Sat Aug 16 23:16:13 2008 92472199 relations and about 86851254 large ideals
[/code]

I think we're about a third of the way there, and that's with the data from 76 million-Q ranges collated. Given duplication, I still think we probably need to sieve 300 million Q-ranges in total - call it three more months with the present rate of application of compute power.

This computation represents approximately one millionth of one percent of the UK's carbon footprint - eight years at 100 watts per core is 7 megawatt-hours, a megawatt-hour of power from coal is a ton of CO2, and the UK's carbon footprint is 550 megatons per year. If you want a more impressive figure, seven megawatt-hours takes a fourteen-thousand-ton passenger ferry halfway from Puttgarden in Germany to Rodbyhavn in Denmark (it's nineteen kilometres; the MOU for the building of the bridge is signed and the bridge should open in 2019).
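Spelling out that carbon arithmetic, using only the figures stated above:

```python
# Eight core-years at 100 W, ~1 tonne of CO2 per coal-fired MWh,
# against a 550-megatonne-per-year UK footprint.
core_years = 8
watts = 100

mwh = core_years * 365 * 24 * watts / 1e6   # ~7 MWh of electricity
tonnes_co2 = mwh * 1.0                      # coal: ~1 t CO2 per MWh
fraction_of_uk = tonnes_co2 / 550e6         # UK footprint: 550 Mt/year
print(round(mwh), fraction_of_uk)           # 7 MWh, ~1.3e-8 of the total
```

1.3e-8 is indeed about a millionth of one percent.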

jasonp 2008-08-17 03:26

You guys are really cruising. Tom, you can make msieve's duplicate removal a lot more efficient in the presence of huge numbers of relations by changing LOG2_DUP_HASHTABLE1_SIZE in gnfs/filter/duplicate.c to something like 30 or 31; that should reduce the memory use at least. You can also try incrementing LOG2_DUP_HASHTABLE2_SIZE, though this will make the hashtables significantly larger.
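For reference, the change described is a one-line edit to a compile-time constant. The constant name and file are as given in the post; the surrounding code and the original default value are not reproduced here, so treat this as a sketch:

```c
/* gnfs/filter/duplicate.c (msieve) -- sketch of the suggested tweak.
   A larger first-stage hashtable means fewer hash collisions during
   duplicate removal, reducing the memory spent resolving them.
   30-31 is the range suggested above for jobs with huge relation counts. */
#define LOG2_DUP_HASHTABLE1_SIZE 30
```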

I should be releasing v1.37 in the next week or two, and this will have a few filtering improvements

fivemack 2008-08-18 09:20

We've just reached the hundred-million-Q point in relations uploaded, in under a month since sieving started; so we've been averaging about forty cores contributing to the project.

I think we have two months to go. Thanks to everyone who's contributing cycles!

Batalov 2008-08-26 21:01

I've put 176M and 177M. 178-179M, 179M-180M are in progress.

[COLOR=gray]These are [I]a bit[/I] smaller than 174M and 175M. I have now realized that I had inadvertently run the 14e siever on [I]a few[/I] small sub-chunks. But I will consistently use 15e for my other (and future) chunks. Understandably, there's nothing wrong with the 14e results, there are just fewer of them.[/COLOR]

P.S. I have to confess that I've slowed down because I tried my muscle on two other numbers, 7,384+ and 2-1586L/gnfs. Will catch up. Promise.

-Serge

fivemack 2008-08-28 08:03

[code]
Thu Aug 28 01:42:30 2008 restarting with 174092090 relations
Thu Aug 28 02:38:00 2008 found 15396903 duplicates and 158695148 unique relations
Thu Aug 28 02:39:57 2008 filtering rational ideals above 202571776
Thu Aug 28 02:39:57 2008 filtering algebraic ideals above 202571776
Thu Aug 28 03:14:52 2008 158695148 relations and about 99916465 large ideals
Thu Aug 28 07:10:46 2008 reduce to 28551117 relations and 20303495 ideals in 25 passes
[/code]

Given duplicates, we're just under half-way there.

bsquared 2008-08-28 13:40

The cluster is down for a bit, so it might be longer before I can finish the range I just reserved. Although it's down so that they can add 64+ more cores, and more memory, so it's a good down time :-)

jasonp 2008-08-28 17:21

I can do line sieving if nobody is working on it already. What line size do you think is appropriate for such a large job?

fivemack 2008-08-28 22:54

I haven't done any line sieving myself on this one, so I've no idea what size is appropriate; 50% of the relations in one file I chose at random have |x| < 10^11, but that's probably far too long a line to be of use. Just to pull numbers from the top of my head, (2*10^10) x 10^4 would be a nice region to look at (about 6% of the relations have |x|<10^10, and the skewness is around 10^6), but I would sieve b=3456 for line lengths 2^31 through 2^37 and see how the yield/time curve looks. I suppose I'd target a CPU-month for line sieving, but I don't know how you are for CPUs.

jasonp 2008-08-29 15:07

Between line sizes of 2G, 4G and 8G, the 4G had the fastest time per relation found (by around 20%). With that line size a 2GHz opteron needs approximately 30 minutes per line, so 10k lines is a fairly big chunk of runtime for me. I'll put 3 CPUs on it and see how far I get in a week. Expect something under a million relations per thousand lines.
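The scale of that commitment works out as follows, from the 30-minutes-per-line figure above:

```python
# Line-sieving runtime at ~30 minutes per line on a 2 GHz Opteron.
minutes_per_line = 30
lines = 10_000
cpus = 3

cpu_days = lines * minutes_per_line / 60 / 24            # full 10k-line chunk
lines_per_week = cpus * 7 * 24 * 60 // minutes_per_line  # one week on 3 CPUs
print(round(cpu_days), lines_per_week)  # 208 CPU-days; 1008 lines/week
```

So 10k lines is over 200 CPU-days, which is why the follow-up suggests 2k lines instead.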

fivemack 2008-08-29 19:55

Given the skewness I think it makes sense to do 2k lines rather than 10k, though that's still an unreasonable amount of runtime.

I don't know if there's a sense in which these small-X,Y relations are better than the lattice sieving ones; unless there is, I don't think it's worth more than a few CPU-weeks.

Thanks for the data!

jasonp 2008-08-29 21:04

Well, if 2500 lines of 1/8 the size took 3 CPU-days for 6,383+, then the current effort should be just a scale-up of that. A 2.6GHz core2duo system averages about 20 minutes per line (actually 26 minutes for odd lines and 14 minutes for even lines)

Wacky 2008-08-29 21:56

[QUOTE=fivemack;140333]I don't know if there's a sense in which these small-X,Y relations are better than the lattice sieving ones; unless there is, I don't think it's worth more than a few CPU-weeks.[/QUOTE]

My experience is that not only is the line siever significantly slower than the lattice siever, but given a large number of lattice relations, a disproportionately large number of them duplicate relations already found.

I attribute this to the fact that there are very few relations which are truly smooth with respect to a limit below the special-q.

In particular, for many of the line-sieving relations, one of their "large primes" matches a q value that was used as a special-q in the lattice sieving.

Andi47 2008-09-01 06:32

I just came back to office from holiday and found that my ranges are running much slower than expected.

My range 136-140M is currently done up to q=137377993 - I will keep this one running.

170-174M is done up to q=171450113. Since running both ranges in parallel uses up approx. 95% of the memory and severely slows down other applications (like Word and Excel), I stopped this run and will unreserve the rest of this range. I will upload the relations for 170M to 171.145M today or tomorrow in the evening.

bsquared 2008-09-02 16:16

This past weekend on vacation I found myself driving through Malmo, MN, and couldn't help thinking about this project...

[URL]http://maps.google.com/maps?f=q&hl=en&geocode=&q=malmo,+mn&ie=UTF8&ll=46.331284,-93.518372&spn=0.500195,0.515671&z=11&iwloc=addr[/URL]

Batalov 2008-09-03 06:52

[quote=bsquared;140650]This past weekend on vacation I found myself driving through Malmo, MN, and couldn't help thinking about this project...
[/quote]
...that got me thinking, too. And I realized that in my life I've seen Malmo, apparently... from Copenhagen, even if barely. And probably better than barely from the plane. And the tunnel-bridge that fooled people in some other thread - by [I]not[/I] being the answer to the riddle.

jasonp 2008-09-05 15:32

540k relations from line sieving are uploading now; I'll be able to submit another ~800k tonight or tomorrow.

The line sieving covered |a| <= 4G and b in [1,1200], and required about 20 CPU-days.

Tom, if you still have the scripts that analyzed line sieving relations from 6,383+, could you give a hint how many from this dataset have three large primes, and how many are likely not to be duplicated by the lattice sieving?

I can spare some more CPU time to go farther, but not much farther.

fivemack 2008-09-05 19:51

[code]
open A, "< msieve_line_0501_0999.dat" or die "can't open relations file: $!";
$novel = 0;
while (<A>)
{
($ab,$l,$r) = split ":",$_;
@left = map hex,(split ",",$l);
@right = map hex,(split ",",$r);
$lbig = 0; $rbig = 0; $interesting = 1;
for $u (@left)
{
if ($u > 80000000) {$lbig++;}
}
for $u (@right)
{
if ($u > 150000000) {$rbig++;}
if ($u > 50000000 && $u < 350000000) {$interesting = 0;}
}
$ct{$lbig.",".$rbig}++;   # key like "1,2": rational-big, algebraic-big counts
$novel += $interesting;
}

print $novel," would not appear in latsieve\n";

for $j (keys %ct)
{
print $j," ",$ct{$j},"\n";
}
[/code]

and the results are: 126013 of the 541038 relations would not have been picked up in lattice sieving (i.e. they have no algebraic-side prime between 50M and 350M)

and as for the mix

[code]
{rational large primes, algebraic large primes, count}
0,0 7520
0,1 31938
0,2 38522
0,3 18262

1,0 20250
1,1 85531
1,2 101885
1,3 48298

2,0 13527
2,1 56769
2,2 67520
2,3 32277

3,0 1369
3,1 6007
3,2 7528
3,3 3835
[/code]

I'll update this post when I've got 1-1200. I think we'll be getting about 400k known-non-duplicate relations in a time in which lattice sieving would have got 2.8M raw relations of which perhaps 60% would end up being duplicates.

I'll do another filtering pass at the end of next week, by which time 100-120 and 170-174 ought to be finished: bsquared, should I also wait until your 288-308 range is finished or will that not be until well after the 13th? Expecting ~250M raw relations by then, probably an unusable matrix at the end of September and a not-irredeemable one in mid-October.

The yield at Q=100M (about 1.4Mr/Mq) is very close to the yield at Q=150M, and significantly better than the 1.22Mr/Mq yield at Q=230M. I'm not sure when the diminishing returns from small algebraic factor bases will start kicking in. Should probably run 10k at {40,50,60,70,80}M on a spare core, I'll do that on Monday.

bsquared 2008-09-05 20:39

[quote=fivemack;141002]
bsquared, should I also wait until your 288-308 range is finished or will that not be until well after the 13th? [/quote]

It should be done 110 hrs from now. Uploaded say by next Wednesday evening (the 10th), your time.

fivemack 2008-09-05 22:32

Wow, that's an impressive pace; I guess the cluster upgrade is complete.

bsquared 2008-09-05 23:02

[quote=fivemack;141032]Wow, that's an impressive pace; I guess the cluster upgrade is complete.[/quote]

It is, but I'm not actually making use of the extra HP. The other 20M ranges I've reserved were tackled with 36 1.4GHz Opteron cores. I'm now using 40 (out of 250-some, most of the rest of which are faster 2 GHz Opterons). The rest of the cores are queued to oblivion by other people, so I'm thankful to get access to something, at least. And the range would have been ready tomorrow, but for my weekend vacation when I had to leave things idle (but got to see Malmo!).

- ben.

fivemack 2008-09-05 23:44

[QUOTE=Batalov;140735]...that got me thinking, too. And I realized that in my life I've seen Malmo, apparently... from Kopenhagen, even if barely. And probably better than barely from the plane. And the tunnel-bridge that fooled people in some other thread - by [I]not[/I] being the answer to the riddle.[/QUOTE]

I managed to miss both Malmo and the tunnel-bridge; I got into the train in Copenhagen, after a long day visiting museums most of which seemed to be shut, closed my eyes briefly and woke up half an hour beyond Malmo.

I don't recommend long train trips in northern Europe for the scenery; in the Balkans, yes, there are nice mountains all over the place, with bridges, rivers, tunnels and all other manner of scenery, but in the aggregate of about twenty hours from Brussels to Stockholm, the part that happened during daylight could have been an endless repeat of the trip from Cambridge to Peterborough. Flat. Fields. The occasional prosperous-looking market town. More flat. More fields. Sometimes, a cow.

jasonp 2008-09-06 04:46

[QUOTE=fivemack;141002]
I'll update this post when I've got 1-1200. I think we'll be getting about 400k known-non-duplicate relations in a time in which lattice sieving would have got 2.8M raw relations of which perhaps 60% would end up being duplicates.
[/QUOTE]
Upload of 940k additional relations just completed. Also, note that relations with three rational large primes would also be missed by the lattice siever (that's a small effect, though).

By your next filtering run I should be able to get you another 400k relations or so.

fivemack 2008-09-06 10:45

Stats from linesieving 1-1200:

1485351 relations. 353914 wouldn't appear in latsieve. The distribution's pretty similar:

[code]
0,0 21949
0,1 90519
0,2 106559
0,3 49642

1,0 57967
1,1 238894
1,2 279102
1,3 130241

2,0 37857
2,1 155946
2,2 182481
2,3 84804

3,0 3633
3,1 15716
3,2 19935
3,3 10106
[/code]

jasonp 2008-09-06 15:02

Thanks. The portion of the dataset with three large primes is smaller than I expected, possibly because the SQUFOF code fails more often when inputs are 61 or 62 bits in size, and also because the large polynomial skew means the average polynomial size is very small unless b is larger, so that there is less opportunity to switch to 3LP. I would think the 3LP yield would go way up if we were only line sieving, but I'm happy to contribute 0.5% of the total relations :)

bsquared 2008-09-10 18:24

288-308 is done, except for a few 100kQ near the beginning of the range. Upload will commence soon... I'll get the stragglers in later - probably a couple more days.

Judging strictly by filesize, this range has about 4.5% fewer relations than the last 20M range (and the missing few 100kQ account for ~2% of this reduction, so ~2.5% less). Doesn't seem like we're running too strongly into diminishing returns yet, but this is based only on filesize, for now.

Reserving 308-328. Unless you think I should start to concentrate on < 90M?

fivemack 2008-09-10 23:26

I've not yet run the yield experiment that I wanted to do on the smaller Q-values, I'll do it over Thursday night - I'm rushing to get 100-120 done by the weekend. So probably sensible to reserve 308-328 at the moment.

fivemack 2008-09-12 10:26

OK, I've done the yield experiment

(algebraic side, alim=Q0, sieve from Q0 to Q0+10k on K8/2200)

40M 13476 0.57880s/r
50M 12888 0.56763s/r
60M 13466 0.57731s/r
70M 13473 0.56660s/r
80M 12310 0.55891s/r

so it looks as if the yield is not dropping off catastrophically even at 40M. My model for duplications isn't working very well, but it looks as if the rate isn't going to be cataclysmic.
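Converted to the per-million-Q units used earlier in the thread (about 1.4 Mr/Mq at Q=100M and 1.22 Mr/Mq at Q=230M), those 10k-Q test counts look like this:

```python
# The 10k-Q yield-test counts above, as Mrels per MQ.  With 10,000 Q
# per test point, relations/10000 equals Mrels/MQ directly.
yields = {40: 13476, 50: 12888, 60: 13466, 70: 13473, 80: 12310}
mrels_per_mq = {q0: round(rels / 10_000, 2) for q0, rels in yields.items()}
print(mrels_per_mq)  # roughly 1.23 to 1.35 across 40M-80M
```

That is comparable to, and mostly better than, the yield observed at Q=230M, which supports the conclusion that the small-Q region is still worth sieving.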

fivemack 2008-09-14 09:54

1 Attachment(s)
We have just hit the 200MQ mark, so I did a filtering run today. Log attached.

Unexpectedly good news: we have a matrix.

The less-good news: it is a matrix of unspeakable vastness, another hundred million Q would be very useful to make it into something slightly more practical.

[code]
weight of 22989546 cycles is about 1609444441 (70.01/cycle)
[/code]

I am leaving the machine running overnight to get the actual matrix size. I hope overnight is long enough; it is swapping quite significantly, msieve.dat.mat is only growing at about 20kB/second, and top tells me

[code]
Mem: 8194260k total, 8148544k used, 45716k free, 3932k buffers
Swap: 15623172k total, 8418716k used, 7204456k free, 903288k cached
...
24844 fivemack 20 0 10.3g 4.9g 568 D 1 62.3 585:19.41 msieve
[/code]

jasonp 2008-09-15 15:34

[QUOTE=fivemack;142368]
Unexpectedly good news: we have a matrix.

The less-good news: it is a matrix of unspeakable vastness, another hundred million Q would be very useful to make it into something slightly more practical.
[/QUOTE]
Agreed, but the matrix you have is not incredibly bad. Greg successfully solved a matrix only a little smaller than the current one. Still, if the machine is swapping just trying to build the matrix, the odds for an in-memory matrix version under 8GB don't look good.

bdodson 2008-09-15 17:22

[QUOTE=fivemack;142368] ...
The less-good news: it is a matrix of unspeakable vastness, another hundred million Q would be very useful to make it into something slightly more practical.

[code]
weight of 22989546 cycles is about 1609444441 (70.01/cycle)
[/code] ...[/QUOTE]

This sounds familiar. From my 200M nonduplicate relns for 5,389+ I got

[code]
weight of 21054112 cycles is about 1474032635 (70.01/cycle) [/code]

The matrix was showing 7.2GB in "top", from an msieve report of
"memory use: 6466.7 MB" with

[code]
saving the first 48 matrix rows for later
matrix is 21013127 x 21013375 (6088.4 MB) with weight 1512170490
(71.96/col)
sparse part has weight 1385905099 (65.95/col) [/code]

I forget if this is quite as bad as the matrix Richard did for Greg's
number (12,241- iirc), but it is surely closer to that than to
my objective of staying below the difficulty of the 7,311+.

Speaking of which, the 7,311+ matrix finished right on schedule
yesterday. I've had three sqrt misses so far, which shouldn't be
anything to worry about.

I am worried, though. This is the first number past the Franke/ggnfs
bound at 255 digits, a bound that is also used in the release version
of msieve. I don't feel nervous about the dependencies (presuming
we know our linear algebra), but the sqrt is more definitely out in
algebraic numbers. Has anyone checked whether there's anything
tricky in adding some chars to get to 258 digits?

The start of my three dependencies looks OK:

[code]
reading relations for dependency 1
read 8592781 cycles
cycles contain 28502626 unique relations
read 28502626 relations
multiplying 40463902 relations
multiply complete, coefficients have about 1069.65 million bits
--

reading relations for dependency 2
read 8588925 cycles
cycles contain 28488491 unique relations
read 28488491 relations
multiplying 40440670 relations
multiply complete, coefficients have about 1069.03 million bits
--

reading relations for dependency 3
read 8589852 cycles
cycles contain 28497102 unique relations
read 28497102 relations
multiplying 40458880 relations
multiply complete, coefficients have about 1069.52 million bits [/code]

These look plausible; more cycles, more relns, larger coef. What
looks unusual is the modulus:

[code]
initial square root is modulo 62851
--
initial square root is modulo 62473 [and]
--
initial square root is modulo 62761 [/code]

Those are way smaller than the modulus for any of
my other numbers. For 3,536 c252 there was

[code]
modulo 174873547
modulo 174444847
modulo 175552543 [/code]

For 3,547 c242, "modulo 629128447"; and for 10,257-

[code]
modulo 20904019
modulo 20629429
modulo 20847061
modulo 20797981 [and]
-----------------
modulo 14003203
modulo 14019199
modulo 14026189 [/code]

for 10,257+. Has anyone seen such a small modulus (five digits)
for other hard numbers? Perhaps there is a place in the sqrt code
where there might be an overflow when adding extra digits to the
sizes of the intended numbers?

Of course, if one of the next two or three dependencies runs
OK with a five digit modulus (by which I mean "runs OK _and_
finds a factor!") I won't be nervous any more. Just checking.

-Bruce (doing some more sieving on 5,389+, 600M-800M to start;
looking for maybe another 40M unique?)

PS - no; the matrix I'm thinking of was

[code]
matrix is 18720804 x 18721052 (5170.3 MB) with weight 1285924286 (68.69/col)
Sat Mar 22 03:40:43 2008 sparse part has weight 1168145840 (62.40/col) [/code]

My recollection (surely flawed, but anyway ...) is that this 12,241- matrix is
still the hardest among NFSNET and Childers/Dodson numbers. So 21.0M^2
is way out of bounds.

R.D. Silverman 2008-09-15 17:45

[QUOTE=bdodson;142569]

Speaking of which, the 7,311+ matrix finished right on schedule
yesterday. I've had three sqrt misses so far, which shouldn't be
anything to be worried about.

I am worried, though. [/QUOTE]

Perhaps you might consider switching to the CWI sqrt code?
I suspect that if your sqrt really does fail, converting the relations
(and dependency info) to CWI format would be less work than
re-running the matrix. I understand that msieve-->cwi format code
already exists.

????

R.D. Silverman 2008-09-15 17:48

[QUOTE=bdodson;142569]


<snip>
Of course, if one of the next two or three dependencies runs
OK with a five digit modulus (by which I mean "runs OK _and_
finds a factor!") I won't be nervous any more. Just checking.

[/QUOTE]

Are you getting just trivial dependencies, or is it the case that
x^2 != y^2 mod N? If the latter, I suspect that you are in
trouble.

Do you have any code that adds up the exponents in the dependencies
and confirms that they sum to 0 mod 2?

jasonp 2008-09-15 18:30

Bruce: thanks for the figures on the big matrices. The small prime used in the square root is calculated to produce a modulus that, after being squared a bunch of times, is a few thousand bits larger than the expected size of the largest coefficient of the square root. My understanding of the math is that any starting prime, no matter how small, will allow the Newton iteration to converge as long as the square root is not a multiple root. In fact, for number field polynomials that are reducible modulo all primes, the code picks a starting prime a little larger than ~50, finds the square root mod that prime by enumeration, and performs the Newton iteration as if nothing was different. That produces convergence about 2/3 of the time.
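
The lifting described above can be sketched over plain integers (the real code works with number-field elements; the function names and structure here are illustrative, not msieve's): find a root mod a small odd prime by enumeration, then Newton-iterate, squaring the modulus at each step.

```python
# Toy sketch of the square-root lifting described above, over plain
# integers rather than number-field elements. Requires Python 3.8+
# for pow(x, -1, m). Names are illustrative, not msieve's.

def sqrt_mod_small_prime(n, q):
    """Find a square root of n mod a tiny odd prime q by enumeration."""
    n %= q
    for x in range(q):
        if x * x % q == n:
            return x
    return None  # n is not a quadratic residue mod q

def lift_sqrt(n, q, target_bits):
    """Newton-lift a root of x^2 = n from mod q to mod q^(2^k),
    squaring the modulus until it exceeds target_bits bits."""
    x = sqrt_mod_small_prime(n, q)
    if x is None:
        raise ValueError("no square root mod %d" % q)
    m = q
    while m.bit_length() <= target_bits:
        m *= m                                              # q^k -> q^(2k)
        x = (x + n * pow(x, -1, m)) * pow(2, -1, m) % m     # x <- (x + n/x)/2
    return x, m

x, m = lift_sqrt(2, 7, 200)        # start from the tiny prime 7
assert (x * x - 2) % m == 0        # still a square root mod the big modulus
```

This is why the starting prime's size doesn't matter for correctness: each squaring of the modulus doubles the precision regardless of where it started, so Bruce's five-digit moduli just mean a few extra iterations.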

Also, the size of multiple-precision numbers is limited to 27*32 bits, which adds a few guard digits above 255 digits.
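
As a quick sanity check on that figure (my arithmetic, not msieve's):

```python
import math

# 27 words of 32 bits each
bits = 27 * 32                   # 864 bits
digits = bits * math.log10(2)    # ~260.1 decimal digits, a few above 255
print(bits, round(digits, 1))
```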

Bob: there is a [i]lot[/i] of checking for dependencies, separate from checking the dependency vectors (which does happen):
- powers of primes and all algebraic ideals are verified to be even
- the number of total relations and free relations is verified to be even
- when the relation product is computed, the code verifies that the product coefficients mod (a 31-bit prime for which alg. poly is irreducible) equal the result of all relations reduced mod the same prime then multiplied together
- the computed square roots are distinct, and equal the same value mod N when squared

If any of these checks fails, the dependency is abandoned; so if there's no error message then I'm prepared to believe the dependency worked fine (or failed in a way that would still pass all those tests).
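
The first of those checks (every prime and ideal appearing to an even power, so the dependency product is a perfect square) amounts to this toy version, with a made-up data layout standing in for msieve's internal one:

```python
# Hypothetical evenness check: given each relation's factorization as a
# dict {prime: exponent}, the product over a dependency is a perfect
# square iff every prime's total exponent is even.

from collections import Counter

def dependency_is_square(relations):
    total = Counter()
    for factorization in relations:
        total.update(factorization)
    return all(e % 2 == 0 for e in total.values())

# product = 2^2 * 3^2 * 5^2, a perfect square:
assert dependency_is_square([{2: 1, 3: 2, 5: 1}, {2: 1, 5: 1}])
# product = 2^1, not a square:
assert not dependency_is_square([{2: 1}])
```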

This could also be a case where the number has three factors, and unfortunately in this case msieve will not display anything until all three are found.

PS: Using the CWI square root code requires that all 1-word integers not exceed 2^30, and the current dataset might not allow that.

Batalov 2008-09-15 19:02

For what it's worth, I've seen "small" moduli before (for 7,384+); had the same feeling; everything finished fine. Of course it was a much smaller matrix ~5M.

fivemack 2008-09-15 23:58

[QUOTE=jasonp;142548]Agreed, but the matrix you have is not incredibly bad. Greg successfully solved a matrix only a little smaller than the current one. Still, if the machine is swapping just trying to build the matrix, the odds for an in-memory matrix version under 8GB don't look good.[/QUOTE]

Building the matrix has been in my experience a little more memory-intensive than running it; certainly I have had a matrix which could run happily on a 4G machine without swap but could only be built on an 8G one.

Matrix time is very roughly proportional to the square of the size, and the 18.7M matrix took 50 days on a computer which I think is comparable to what I have access to, so a 21.5M matrix will take about 65 days. There's no hope of getting it done in time for CADO, so it seems sensible to see how much another four weeks of sieving gains us, in the hope of nonetheless getting the computation finished by Christmas. I'll need to get a better motherboard for the Phenom box.
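
The quadratic estimate checks out as a back-of-envelope calculation (the 18.7M / 50-day data point is from the posts above):

```python
# Matrix time ~ size^2: scale the 18.7M-row, 50-day data point
# up to the expected 21.5M-row matrix.
base_size, base_days = 18.7e6, 50
new_size = 21.5e6
est_days = base_days * (new_size / base_size) ** 2
print(f"estimated {est_days:.0f} days")   # roughly 66 days
```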

Batalov 2008-09-16 00:20

GGNFS 1.0-beta news
 
I have just read the most amazing announcement from Chris Monico (in ggnfs yahoo tech group). Among other things in his upcoming massive rewrite of GGNFS are plans for ...
[quote](4) A multi-threaded block Wiedemann implementation for multicore chips (I've had a multi-threaded Lanczos implementation lying around for a while, but I think I lost interest before hammering out all the kinks. Anyway, I think this should be a little better).[/quote]

I've searched this forum for a similar announcement and, having found none, I wanted to share the news. For the complete message see [URL]http://tech.groups.yahoo.com/group/ggnfs/message/2260[/URL]

As a guest at M. Kamada's near-repunit project, I couldn't help but notice that the announcement is much more than some vague plans. Chris did a [URL="http://homepage2.nifty.com/m_kamada/math/53333.htm#169"]test factorization[/URL] with GGNFS [I]version 0.91.4[/I], no less. (It was puzzling, and then I saw the announcement.)

Isn't it wonderful? [SIZE=1](amidst all crappy news about train and plane crashes, hurricanes, bankruptcies etc... what a time to look for a PR for the Mersennes, huh?)[/SIZE]

bdodson 2008-09-16 12:19

[QUOTE=Batalov;142584]For what it's worth, I've seen "small" moduli before (for 7,384+); had the same feeling; everything finished fine. Of course it was a much smaller matrix ~5M.[/QUOTE]

Batalov wins. Both the 6th and the 7th dependency were fine. :smile:
More to follow. -Bruce (with a coyness rep to maintain, and cores
that need their morning feeding.)

Batalov 2008-09-16 17:39

Bruce, what did I win? what did I win? :spot:
I did say "for what it's worth" :smile:... but I did forget to mention that it was 7 or 8 dependencies, as well. I put "small" in quotes not to make fun of their size or any statement; I meant "small" in your sense. I can check but in [I]really small[/I] factorizations I've seen [I]really small[/I] moduli like 73, IIRC.

--Serge

bdodson 2008-09-16 23:23

[QUOTE=Batalov;142754]Bruce, what did I win? what did I win? :spot:
I did say "for what it's worth" :smile:... but I did forget to mention that it was 7 or 8 dependencies, as well. I put "small" in quotes not to make fun of their size or any statement; I meant "small" in your sense. I can check but in [I]really small[/I] factorizations I've seen [I]really small[/I] moduli like 73, IIRC.

--Serge[/QUOTE]

Best I can offer is a reply with the factorization report. This
was 7,311+ c258, which factors as p75*p184, with p75 =

476030545640946917170204607283392215292272721653549541840445676894357361141

This is our second largest completed matrix, 17184393 x 17184641,
after the 18.7M^2 that Richard did from Greg's 12,241-. We have
just one larger matrix 17516286 x 17516534 pending for 7,313+.
Two subsequent harder numbers than 7,311+ were over-sieved
so as not to have such large matrices, 16180785 x 16181033 for
7,313- and 14392377 x 14392625 for 6,392+ (this was the one with
237M nonduplicate relations).

I hope Jason's observation

[QUOTE]
the size of multiple-precision numbers is limited to 27*32 bits,
which adds a few guard digits above 255 digits [/QUOTE]

doesn't suggest additional difficulties; those are

[code]
7,313- c248 Childers/Dodson
6,392+ c262 Childers/Dodson
7,313+ c264 Childers/Dodson
5,389+ c265 Childers/Dodson [/code]

with matrices for 313+, 313- and 6,392+ (in that order)
running at Greg's, and 5,389+ sent back for more sieving
to avoid the 21M^2 that resulted from 200M (hard to get!)
nonduplicate from 5,389+.

Thanks to all for comments on my sqrt worries; most especially
the "try a few more dependencies" that resulted in the p75*p184.
-Bruce

jasonp 2008-09-17 02:08

Msieve v1.36 had a limit of 27*32 bits; v1.37 has a limit of 29*32 bits. If an input was too large the library would refuse to run.

Batalov 2008-09-17 02:23

1 Attachment(s)
[quote=bdodson;142829]Best I can offer is a reply with the factorization report...[/quote]I was hoping that I deserved some cracking prize :-(

[SIZE=1](Seriously, the 7+ thread is probably waiting for a few of these messages moved...)[/SIZE]

:popcorn: The result is actually cool. :bow:

jasonp 2008-09-17 02:28

[QUOTE=Batalov;142848]I was hoping that I deserved some cracking prize :-(
[/QUOTE]
Well, there's [i]technically[/i] nothing stopping anyone from using a Cunningham number as an RSA key...

frmky 2008-09-17 06:00

[QUOTE=bdodson;142829]
doesn't suggest additional difficulties; those are
[/QUOTE]

No worries. We're still using 1.36 on these, but "modified" to support 30*32 bits.

Greg

Batalov 2008-09-17 22:40

Gosh darn it. I did it again: the 178-178.8M.bz2 range with [I]14e[/I]. It is half of the potential data (only 0.5 mil relations), but I will ftp it anyway (it's better than 0 relations!), and I will rerun with [I]15e[/I] and ftp over that same file later.

Sorry for lagging so slowly and for mixups (I was sieving some 2LMs and used the same siever here).


EDIT: (below, that is very true). I will finish 178.8-180M properly and then will take a few more mil Q as a penance, instead.

Wacky 2008-09-17 23:09

[QUOTE=Batalov;142974]Gosh darn it. I did it again. 178-178.8M.bz2 range with [I]14e[/I]. It is half of the potential data (only 0.5 mil relations)[/QUOTE]

Actually, it is probably better to simply make a note of the situation and move on.

Re-sieving the range would lower your effective sieving rate (there is no value in generating a relation that has already been found) by more than the reduction caused by sieving in a less attractive region instead.

bsquared 2008-09-18 18:02

308-328 is almost done, but I won't be able to upload it until sunday or monday.

reserving 55M-75M and 328M-338M

- ben.

bsquared 2008-09-18 18:20

I've mentioned this in a different thread, but I'll bring it up again here... if anyone has the source code for the 64bit AMD optimized lattice sievers, can you please let me know? Google hasn't turned anything up, but I thought they were public domain. I'd like to try and build them. Thanks.

p.s. If they aren't public domain, I apologize. But if anyone knows for sure one way or the other, let me know so I can stop looking :)

jasonp 2008-09-19 01:16

[QUOTE=bsquared;143088]I've mentioned this in a different thread, but I'll bring it up again here... if anyone has the source code for the 64bit AMD optimized lattice sievers, can you please let me know? Google hasn't turned anything up, but I thought they were public domain. I'd like to try and build them. Thanks.
[/QUOTE]
I think several people here have the source, but I don't know of anywhere that it's been posted.

xilman 2008-09-19 08:13

[QUOTE=bsquared;143088]I've mentioned this in a different thread, but I'll bring it up again here... if anyone has the source code for the 64bit AMD optimized lattice sievers, can you please let me know? Google hasn't turned anything up, but I thought they were public domain. I'd like to try and build them. Thanks.

p.s. If they aren't public domain, I apologize. But if anyone knows for sure one way or the other, let me know so I can stop looking :)[/QUOTE]I'll check what I have. If it's GPL I'll mail it to you. Whether you'll be able to use it is another matter entirely.

Paul

Batalov 2008-09-19 20:42

Done with my interval and will take a small extra chunk.
53-55M.

bsquared 2008-09-22 13:07

[quote=xilman;143151]I'll check what I have. If it's GPL I'll mail it to you. Whether you'll be able to use it is another matter entirely.

Paul[/quote]

Thanks for checking!

p.s. 308-328 is uploaded now.

xilman 2008-09-22 17:18

[QUOTE=bsquared;143411]Thanks for checking![/QUOTE]I'll mail you what I have because it's GPLed.

You may be too young to remember what Bell Labs sent out with a Unix license (a software distribution on a magtape and a set of manuals in 3-ring binders, with the following as their entire tech support), so I'll repeat it here:[quote]Good luck.[/quote]


Paul

bsquared 2008-09-23 00:21

[quote=xilman;143423]I'll mail you what I have because it's GPLed.

You may be too young to remember what Bell Labs sent out with a Unix license, software distribution on a magtape and a set of manuals in 3-ring binders as their entire tech support, so I'll repeat it here:


Paul[/quote]

Magtape? Yes, that's a bit before my time. But be it magtape or tarball via email, the sentiment is appropriate :)

I've managed to get everything to build, however, so I'm on the right track. But input_poly.c and gnfs-lasieve4e.c have not been ggnfs-ified, so it chokes on my input poly files. I'm hacking that code now...

- ben.

Shaopu Lin 2008-09-23 05:26

[QUOTE=bsquared;143458]Magtape? Yes, that's a bit before my time. But be it magtape or tarball via email the sentiment is appropriate :)

I've managed to get everything to build, however, so I'm on the right track. But input_poly.c and gnfs-lasieve4e.c have not been ggnfs-ified, so it chokes on my input poly files. I'm hacking that code now...

- ben.[/QUOTE]

If you hack this code successfully, can you contribute this code to ggnfs?

bsquared 2008-09-23 13:51

[quote=Shaopu Lin;143480]If you hack this code successfully, can you contribute this code to ggnfs?[/quote]

I'm not sure what's all involved with that, but if a lot of stars happen to line up I would be fine contributing it. I've modified input_poly.c and gnfs-lasieve4e.c to play nice with ggnfs input poly files, and the gnfs-lasieve4I*e family built fine after a bit of tinkering, but when I run it I get a segfault. It'll take some work to track down the problem, assuming it's not something in the assembly code (if so, I'm doomed).

- ben.

WraithX 2008-09-23 23:08

[QUOTE=bsquared;143517]I'm not sure what's all involved with that, but if a lot of stars happen to line up I would be fine contributing it. I've modified input_poly.c and gnfs-lasieve4e.c to play nice with ggnfs input poly files, and the gnfs-lasieve4I*e family built fine after a bit of tinkering, but when I run it I get a segfault. It'll take some work to track down the problem, assuming it's not something in the assembly code (if so, I'm doomed).

- ben.[/QUOTE]

Are you compiling gnfs-lasieve4e.c in a windows environment [specifically MinGW, or Cygwin with -mno-cygwin]? If so, I'm seeing that executable crash on me too. However, it's not because of a segfault, it is stopping when it hits a "td error: 152641 does not divide" error. Could you let me know if you make any progress on this?

Or, if you're not compiling gnfs-lasieve4e.c in a Windows environment [or in a regular Cygwin environment], then please ignore this post.

bsquared 2008-09-24 02:13

[quote=WraithX;143578]Are you compiling gnfs-lasieve4e.c in a windows environment [specifically MinGW, or Cygwin with -mno-cygwin]? If so, I'm seeing that executable crash on me too. However, it's not because of a segfault, it is stopping when it hits a "td error: 152641 does not divide" error. Could you let me know if you make any progress on this?

Or, if you're not compiling gnfs-lasieve4e.c in a Windows environment [or in a regular Cygwin environment], then please ignore this post.[/quote]

No, it's not a windows environment, although if time allows someday I'd like to try that with the amd optimized code. I've built the ggnfs suite on mingw, but only using the generic lasieve code.

bsquared 2008-09-24 14:48

328-338 is done and uploaded. 55-75 will be done in ~40 hours. Those, and 308-328, and the rest of the outstanding ranges, are about 90MQ or so above the last filtering run. We should be getting close!

I'll reserve 338M-350M. Hopefully that will be it.

Is your new machine ready for the LA?

FactorEyes 2008-09-24 16:22

[QUOTE=bsquared;143651]
Is your new machine ready for the LA?[/QUOTE]
Enquiring minds want to know -- this is the whole reason I check this thread.

I am so glad this delightful matrix is not my problem.

bsquared 2008-09-24 16:45

[quote=bsquared;143651]

I'll reserve 338M-350M. Hopefully that will be it.

[/quote]

Change that from 338-350 to 40-53. It will start after 55-75 is done, so if you think this range isn't needed, let me know in the next couple days. 40M-53M should take ~5 days.

fivemack 2008-09-24 17:04

The new machine seems pretty happy now that its motherboard and processor are compatible. It's done a couple of teeny 29-bit matrices.

I've reserved 338-340 and put up the 'WE'RE DONE' sign ... hope I can start the matrix crunching before I go off to CADO.

Batalov 2008-09-24 17:07

"Use the 1.38, Luke! Use the 1.38!"

Andi47 2008-09-24 17:18

[QUOTE=fivemack;143667]The new machine seems pretty happy now that its motherboard and processor are compatible. It's done a couple of teeny 29-bit matrices.

I've reserved 338-340 and put up the 'WE'RE DONE' sign ... hope I can start the matrix crunching before I go off to CADO.[/QUOTE]

When is your deadline? Oct. 6th?

I think my range will take me one more week; I hope I can finish it before the deadline.

smh 2008-09-24 20:06

I'm a bit behind on schedule on my last reservation. When do you need the relations?

fivemack 2008-09-24 20:52

I'd like to do the filtering on the weekend of 4th October (i.e. ten days from now); is that an unreasonable deadline?

jasonp 2008-09-25 01:23

Uploaded 427k more relations from line sieving (lines 1201-1700)

bsquared 2008-09-26 04:04

Range 55M-75M is done. One chunk of data got lost somehow on the filesystem; however, 25521886 relations did not, and are in transit as I type. My final range should be done well before the 4th.

enjoy.

- b.

Batalov 2008-09-26 09:55

53-55M will be there in a few hours, too.

bsquared 2008-09-30 20:02

40M-53M is done and uploaded.
It'll be interesting to see what the matrix size is after the next filtering...

fivemack 2008-09-30 21:37

Thanks, got 40-53.

I too am interested in seeing what the matrix size is, but the filtering takes a couple of days and I'd rather do it only once, so I'd like to wait until smh and andi47's relations are in before I start it - also, that'll give me time to finish 10^259+1.

bsquared 2008-09-30 21:52

[quote=fivemack;144156]Thanks, got 40-53.

I too am interested in seeing what the matrix size is, but the filtering takes a couple of days and I'd rather do it only once, so I'd like to wait until smh and andi47's relations are in before I start it - also, that'll give me time to finish 10^259+1.[/quote]

Of course, no hurry. I suspect no one involved in a project like this is in a terrible rush :wink:.

Andi47 2008-10-01 05:07

[QUOTE=fivemack;144156]Thanks, got 40-53.

I too am interested in seeing what the matrix size is, but the filtering takes a couple of days and I'd rather do it only once, so I'd like to wait until smh and andi47's relations are in before I start it - also, that'll give me time to finish 10^259+1.[/QUOTE]

I think I can upload my relations tomorrow or on friday.

smh 2008-10-01 07:12

[QUOTE=fivemack;144156]..... so I'd like to wait until smh and andi47's relations are in before I start it[/QUOTE]I don't think I can finish the full range. I can upload the relations I have so far any time you want. I planned to do this Friday the 3rd.

fivemack 2008-10-01 10:55

[QUOTE=smh;144198]I don't think I can finish the full range. I can upload the relations I have so far any time you want. I planned to do this Friday the 3rd.[/QUOTE]

Friday 3rd will be fine.

bsquared 2008-10-01 13:13

[quote=smh;144198]I don't think I can finish the full range. I can upload the relations I have so far any time you want. I planned to do this Friday the 3rd.[/quote]

How much will be left over? I can help out if you want.

Andi47 2008-10-01 14:51

[QUOTE=bsquared;144207]How much will be left over? I can help out if you want.[/QUOTE]

For me, approx. 139.64M to 140M will be left over.



Powered by vBulletin® Version 3.8.11
Copyright ©2000 - 2021, Jelsoft Enterprises Ltd.