
mersenneforum.org (https://www.mersenneforum.org/index.php)
-   Aliquot Sequences (https://www.mersenneforum.org/forumdisplay.php?f=90)
-   -   Reserved for MF - Sequence 4788 (https://www.mersenneforum.org/showthread.php?t=11615)

Andi47 2010-02-10 19:22

[QUOTE=jrk;205221]I checked. 26 appeared to have a slight edge.

[/QUOTE]

:shock:

I am quite surprised about this.

Have you checked smaller numbers than c128 (i.e. ~c112 to c125) for lpb 26 / 27? Is it always that close?

FactorEyes 2010-02-10 20:48

[CODE]
lpbr: 26
lpba: 26
mfbr: 52
mfba: 52[/CODE]

I have seen few people set mfbX to twice lpbX for jobs of this size, but that doesn't mean you're doing it wrong.

The belief in using 27-bit large primes for gnfs jobs starting at around 110 digits may stem from the original def-par.txt file, which doesn't always have mfbr=2*lpbr and mfba=2*lpba.

These estimates may have been influenced by the nature of hardware several years back. Or not.

Likewise, it may be smarter now to use twice the number of bits for mfbX as for lpbX, even for these smaller gnfs jobs. Or not.

As should be obvious, I have more questions than answers here. I played around with differing mfbr:lpbr and mfba:lpba ratios a while back, and the ones in def-par looked pretty good for these sizes of number, but I didn't have the smarts to try lowering lpbr & lpba.
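To make the ratio concrete (back-of-envelope only, nothing from def-par.txt): with two large primes per side, the worst-case cofactor is the product of two primes just under the lpb bound, so an mfbX below 2*lpbX quietly rejects pairs of near-bound large primes:

[CODE]# Back-of-envelope sketch: what mfbX < 2*lpbX excludes.
lpb, mfb = 27, 52
worst = (2**lpb - 1) ** 2   # product of two near-bound large primes
print(worst.bit_length())   # 54 bits: above mfb=52, so such a pair is rejected
[/CODE]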

You may be on to something.

jrk 2010-02-10 21:27

[QUOTE=FactorEyes;205237][CODE]
lpbr: 26
lpba: 26
mfbr: 52
mfba: 52[/CODE]

I have seen few people set mfbX to twice lpbX for jobs of this size, but that doesn't mean you're doing it wrong.

The belief in using 27-bit large primes for gnfs jobs starting at around 110 digits may stem from the original def-par.txt file, which doesn't always have mfbr=2*lpbr and mfba=2*lpba.[/QUOTE]


I should mention that in the trials of 27 vs 26bit I posted above, the mfbr/a was 2x of lpbr/a in each.

But here is 27 lpbr/a with 52 mfbr/a, for comparison, again using 5M small prime limit and siever 13e:

[code]Warning: lowering FB_bound to 1999999.
total yield: 2516, q=2001037 (0.01147 sec/rel)
Warning: lowering FB_bound to 2499999.
total yield: 2442, q=2501003 (0.01221 sec/rel)
Warning: lowering FB_bound to 2999999.
total yield: 2723, q=3001001 (0.01172 sec/rel)
Warning: lowering FB_bound to 3499999.
total yield: 2390, q=3501041 (0.01221 sec/rel)
Warning: lowering FB_bound to 3999999.
total yield: 2792, q=4001003 (0.01256 sec/rel)
Warning: lowering FB_bound to 4499999.
total yield: 3201, q=4501001 (0.01276 sec/rel)
total yield: 2411, q=5001001 (0.01286 sec/rel)
total yield: 2319, q=5501003 (0.01334 sec/rel)
total yield: 2355, q=6001013 (0.01381 sec/rel)
total yield: 2500, q=6501007 (0.01409 sec/rel)
[/code]

27,52 is slower than either 27,54 or 26,52.

So it appears that using mfbr/a of 2x lpbr/a is good even for a number as small as a c128.

Although it probably has more to do with the properties of the polynomial than with the size of the number.

For that matter, I also had a polynomial with E 1.25e-10 and alpha -6.15, but it was slower than the one I posted.

[quote="Andi47"]Have you checked smaller numbers than c128 (i.e. ~c112 to c125) for lpb 26 / 27? Is it always that close? [/quote]
No I haven't checked.

axn 2010-02-10 22:22

[QUOTE=jrk;205243]27,52 is slower than either 27,54[/QUOTE]

Yes, but 27,52 requires fewer relations than 27,54 (something like 10%(?) less). So...

Batalov 2010-02-10 22:38

[quote=axn;205246]Yes, but 27,52 requires fewer relations than 27,54 (something like 10%(?) less). So...[/quote]
No, it doesn't. "52" only controls the internal mechanics of the siever. It should be chosen to optimize the speed.

It's like the person quoted about high-throughput screening of chemical compounds: when confronted with the question "but doesn't your technique miss a lot of hits?", he answered "but it does that so much faster than the competitors!" :smile:

FactorEyes 2010-02-10 23:01

[QUOTE=Batalov;205250]No, it doesn't. "52" only controls the internal mechanics of the siever. It should be chosen to optimize the speed.[/QUOTE]

I believe that mfbr/a (thanks to jrk for the proper way to specify these two as one) is the largest remaining unfactored number (or norm) which the siever will bother to bust into large primes: above this size, it chucks that (a,b) combination.

Is this correct?
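In pseudocode, my mental model is something like this (only a sketch of the cutoff as I understand it, not the siever's actual code; factor() is a hypothetical stand-in for however the cofactor gets split):

[CODE]def keep_relation(cofactor, lpb, mfb):
    """Sketch: mfbX as a cutoff on the unfactored part of the norm."""
    if cofactor == 1:
        return True                   # norm fully factored over the factor base
    if cofactor.bit_length() > mfb:
        return False                  # too big: chuck this (a,b) combination
    # otherwise try to split it into large primes of at most lpb bits each
    return all(p.bit_length() <= lpb for p in factor(cofactor))  # factor() is hypothetical
[/CODE]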

jrk 2010-02-10 23:57

[QUOTE=FactorEyes;205253]I believe that mfbr/a (thanks to jrk for the proper way to specify these two as one) is the largest remaining unfactored number (or norm) which the siever will bother to bust into large primes: above this size, it chucks that (a,b) combination.

Is this correct?[/QUOTE]

I see. I think you mean that the smaller the unfactored part is after removing small primes, the smaller the large primes that come out of it will be, so not as many of them would be needed to build a matrix. If that's the case, though, I don't know the proper formula for how many fewer relations are needed because of this. But the requirement must drop by at least about 7% when changing mfbr/a from 54 to 52 for it to be any faster.

[quote="axn"]Yes, but 27,52 requires fewer relations than 27,54 (something like 10%(?) less). So... [/quote]
Where do you get that 10%?

If anyone wants to sieve the number, please go ahead and use some other parameters. I did not mean to start confusion.
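For what it's worth, the rule of thumb I use for the raw relation requirement (a heuristic only, and it ignores exactly the mfb effect in question) just counts the primes below the large-prime bounds on both sides:

[code]from math import log

def rels_needed(lpbr, lpba):
    """Heuristic: ~pi(2^lpbr) + pi(2^lpba), with pi(x) ~ x/ln(x)."""
    pi = lambda bits: 2**bits / (bits * log(2))
    return pi(lpbr) + pi(lpba)

print(f"{rels_needed(26, 26):.2e}")  # 7.45e+06 for 26-bit large primes
print(f"{rels_needed(27, 27):.2e}")  # 1.43e+07 for 27-bit
[/code]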

axn 2010-02-11 00:32

[QUOTE=Batalov;205250]No, it doesn't. "52" only controls the internal mechanics of the siever.[/QUOTE]

Yeah, it does. With 27/52 you'll get a usable matrix with fewer relations.

[QUOTE=jrk;205262]Where do you get that 10%?[/QUOTE]

Quoting from memory based on past experience (+/-)

FactorEyes 2010-02-11 04:13

[QUOTE=jrk;205262]I see. I think you mean that the smaller the unfactored part is after removing small primes, the smaller the large primes that come out of it will be, so not as many of them would be needed to build a matrix.[/QUOTE]

Actually, I have no idea.

If one drops the mfbr/a values relative to lpbr/a, then you'll pull fewer large primes, but you'll spend less time fruitlessly factoring large cofactors that would yield nothing anyway, and you'll get smaller cliques, with more of them found as a percentage of total relations. The relationships among all these effects are so zany that I really don't know the answer.

I'm just trying to pin down the precise definition of mfbr/a. Intuitively, it seems that a smaller mfbr/a would not make a difference, now that I think about it, but I have been wrong enough times about these things.

It seems to be time for a series of tests on GNFS 100 through GNFS 125 - one would need to actually sieve them to the matrix stage and compare the return.

We could use data for larger GNFS, and SNFS jobs. I might as well do some of this - it could take quite a while. Maybe I'll start with an aliquot sequence in the low 100s, and sieve each composite 3-4 different ways. Ugh.

[QUOTE=axn;205265]Yeah, it does. With 27/52 you'll get a usable matrix with fewer relations.

Quoting from memory based on past experience (+/-)[/QUOTE]

There seems to be little consensus on this stuff: this will be the first experiment I'll try.

Batalov 2010-02-11 05:59

It's true that this is not clearly black-or-white.

With lower mfbX, one will get an [I]initial[/I] matrix with [I]fewer[/I] relations, but [I]later[/I] in CPU time; such sieving happens to be both slower and more redundant. Run tests to see.

Also, a secondary consideration is that for larger projects it is of less importance that an [I]initial[/I] matrix may be built with fewer relations; in distributed projects, people systematically and [URL="http://mersenneforum.org/showthread.php?p=204756"]deliberately oversieve[/URL].

For this relatively low range, I will not be surprised if the optimal mfbX turns out to be lpbX*2-1. Any volunteers to comprehensively check what the best mfbX is for gnfs-180? Such results would undoubtedly be most welcome.

henryzz 2010-02-11 07:33

I did a test a while back with small gnfs numbers, early 90s digit-wise I think. Having a lower mfbX was very helpful for making the factorization faster time-wise. The reason, I think, was that some of the larger large primes were not singletons, which helped produce the matrix, and they were gained at very little extra cost in time. As long as relation production is slow enough that the disk doesn't become the bottleneck, I think that whenever the full factorization of a norm is available the relation should be saved, no matter how many large primes it already has or how big the largest one is. It might not help much, but as long as the cost is lower it would be worth it. These changes don't need extra effort in factoring (a,b) pairs, just more outputs.
How easy would it be to allow more large primes to be output from gnfs-lasieve?
Would switching to only two large primes for very small factorizations be helpful? I don't think so, but it is worth trying.

bsquared 2010-02-11 14:58

... Meanwhile, our C128 is getting lonely. I'll do it, using 26/52. Since I can do it pretty fast, maybe I'll play around with complete factorizations using different settings as well, but let's move past this low hurdle first.

bsquared 2010-02-12 02:15

The C128 is done:

[CODE]prp42 factor: 831368657060398871067580635483867168226159
prp86 factor: 14930977716074183218202849037086987095402611116492933462800195088349992532423708467017
[/CODE]

now on a C160

jrk 2010-02-12 02:40

[QUOTE=bsquared;205423]now on a C160[/QUOTE]

[code]
Using B1=3000000, B2=5706890290, polynomial Dickson(6), sigma=2809319855
Step 1 took 11575ms
Step 2 took 4886ms
********** Factor found in step 2: 6977898771856757179216213182115242430943227
Found probable prime factor of 43 digits: 6977898771856757179216213182115242430943227
Probable prime cofactor 473028589072800985492970050195364334013736618114591097601498072758055162228629376693218157854131039061545633483665441 has 117 digits
[/code]

now c123

Batalov 2010-02-12 02:44

...a c124 (ready for gnfs)

oops, done. Now a c153.

jrk 2010-02-12 03:16

[QUOTE=Batalov;205426]...a c124 (ready for gnfs)[/QUOTE]

[strike]I will throw a couple hours of poly selection on it. Should get something usable tonight.[/strike]

edit: aborted

Andi47 2010-02-12 17:10

[QUOTE=Batalov;205426]Now a c153.[/QUOTE]

ECM anyone?

P-1: B1=1e9, B2=1e14, no factor.

ECM: I have queued 1000@11e6 this morning (=12 hours ago), no factor so far after ~420 curves.

Andi47 2010-02-12 21:05

P+1: 3* B1=1e9, B2=1e14, no factor.

Joshua2 2010-02-12 22:51

Running a coeff 60-2400 poly search just in case; we should probably do some more ECM though. On second thought, not right now - it slows my comp too much.

Joshua2 2010-02-13 02:55

I tried running msieve -np 1,100 and it took 13 minutes; after lots of lines like
poly 0 p 226547647 q 237869767 coeff 53888836006288249
it said:
error generating or reading NFS polynomials

jrk 2010-02-13 03:08

[QUOTE=Joshua2;205525]I tried running msieve -np 1,100 and it took 13 minutes; after lots of lines like
poly 0 p 226547647 q 237869767 coeff 53888836006288249
it said:
error generating or reading NFS polynomials[/QUOTE]

First, msieve needs to batch the a5 values for it to work efficiently. Using a range from 1 to 100 will test only a single a5=60 (a5's are multiples of 60 in msieve). The default is to batch 40 a5's together so it would cover a range of 2400.

Secondly, it's been noticed that the internal parameters which msieve uses are not very good for a c153 and need a little adjustment. See the thread: [url=http://www.mersenneforum.org/showthread.php?t=12995]bad parameters for 153-digit CPU polsel ?[/url]

If you are able to build msieve yourself, you can try adjusting the line for c152 and c153 parameters (both, since this number is between those) in poly_skew.c like fivemack suggested, multiplying the stage2 bounds by 2 and the final bounds by 2/3.
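As a rough illustration of the batching (my reading of the above, not msieve's actual code):

[code]STEP = 60   # a5 is always a multiple of 60; msieve batches 40 a5's (a span of 2400)

def a5_tested(lo, hi):
    """Leading coefficients a5 that fall inside 'msieve -np lo,hi'."""
    first = -(-lo // STEP) * STEP          # round lo up to a multiple of 60
    return list(range(first, hi + 1, STEP))

print(len(a5_tested(1, 100)))    # 1  -> only a5=60, a badly underfilled batch
print(len(a5_tested(1, 2400)))   # 40 -> exactly one full batch
[/code]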

Batalov 2010-02-13 03:24

First, it needs ~3-4K curves at 11e6. Then, let's try our best.

I will try to cover a small chunk of the poly search space with -np 24000,28800

jrk 2010-02-13 03:33

[QUOTE=Batalov;205527]I will try cover a small chunk of the poly search space with -np 24000,28800[/QUOTE]

I will try as well; reserving -np 48060,60000.

Andi47 2010-02-13 06:35

162 more curves at B1=11e6 (total 582), no factor.

ECM anyone?? If I do this alone, it would take a week or so...

schickel 2010-02-13 06:54

[QUOTE=Andi47;205537]162 more curves at B1=11e6 (total 582), no factor.

ECM anyone?? If I do this alone, it would take a week or so...[/QUOTE]I've got some curves running. Unfortunately, I've got a c129 & a c133 occupying my two fastest systems....

frmky 2010-02-13 08:02

I've done 600 curves at B1=11e7, which basically completes testing for 45 digit factors. I'll leave it running overnight, which will complete about half of t50. I suggest you move to either B1=43e6 or 11e7.

schickel 2010-02-13 08:11

[QUOTE=frmky;205542]I've done 600 curves at B1=11e7, which basically completes testing for 45 digit factors. I'll leave it running overnight, which will complete about half of t50. I suggest you move to either B1=43e6 or 11e7.[/QUOTE]I can probably get ~30 curves an hour @ 43M. I'll leave it running overnight and see where we're at tomorrow.....

Andi47 2010-02-13 10:07

[QUOTE=frmky;205542]I've done 600 curves at B1=11e7, which basically completes testing for 45 digit factors. I'll leave it running overnight, which will complete about half of t50. I suggest you move to either B1=43e6 or 11e7.[/QUOTE]

eleventy-one (111) more curves at B1=11e6, no factor.

Switching to 43e6

Batalov 2010-02-13 18:16

I have a single poly overnight (still running):
[CODE]n: 385464482250950887345614598556545458998728163612585824428623191789206319774832929187814583420579189739060976007725889789586019426724128714659532651031321
# norm 6.309575e-015 alpha -7.524281 e 3.210e-012
skew: 10113003.85
type: gnfs
c0: -442863778759910408662290488760271660295
c1: 317171992445197854256912906861496
c2: -139225858766223144829716100
c3: 4264866481789867288
c4: 2108579173839
c5: 24300
Y0: -436588702930950821490055084506
Y1: 181016282146031377
rlim: 25000000
alim: 25000000
lpbr: 29
lpba: 29
mfbr: 58
mfba: 58
rlambda: 2.6
alambda: 2.6
[/CODE]
It is the unmodified gpu .exe from the site, so it is probably going to produce very few polys, but that's ok: one poly is better than none.
The limits are just a ballpark from the script.

frmky 2010-02-13 19:12

I've completed another 1200 curves at B1=11e7, which corresponds to roughly 2900 curves at 43e6, with no factor. I'll let it continue.

Andi47 2010-02-13 21:48

108@43e6, no factor.

Batalov 2010-02-13 22:08

Time to branch off this gnfs and start sieving?
The poly is quite good (call me lucky); cf. that other thread's E values for a c153 and I think you may be convinced.

Also, I can do the algebra on it.

EDIT: It is posted above -- all the same, the single one.
Running sims...
I suggest 14e, -a from 10M to 60M, with somewhat increased lims, here is the revised version:
[CODE]# sieve with 14e -a from 10M to 60M
n: 385464482250950887345614598556545458998728163612585824428623191789206319774832929187814583420579189739060976007725889789586019426724128714659532651031321
# norm 6.309575e-015 alpha -7.524281 e 3.210e-012
skew: 10113003.85
type: gnfs
c0: -442863778759910408662290488760271660295
c1: 317171992445197854256912906861496
c2: -139225858766223144829716100
c3: 4264866481789867288
c4: 2108579173839
c5: 24300
Y0: -436588702930950821490055084506
Y1: 181016282146031377
rlim: 33554430
alim: 33554430
lpbr: 29
lpba: 29
mfbr: 58
mfba: 58
rlambda: 2.6
alambda: 2.6[/CODE]

frmky 2010-02-13 22:58

Post the polynomial. Actually I can contribute significant sieving right now, so it might make sense for me to do the LA as well just in terms of the number of relations that need to be transferred. I have a Core 2 quad free that doesn't have enough memory for a NFS@Home LA.

Batalov 2010-02-13 23:06

P.S. you could quite possibly do it in the NFS @ Home framework in a day, plus a day for the algebra. The BOINC users who wanted the 14e siever (and programmed their preferences for it) will be pleased.


[COLOR=green]P.P.S. Can you make a replicate server for this (and future aliquot) job(s) and post instructions specifically for the enthusiasts of this forum? That would be a nice automation of what happens anyway. Aliquot @ Home? :-)[/COLOR]

frmky 2010-02-13 23:10

[QUOTE=Batalov;205588]P.S. you could quite possibly do it in the NFS @ Home framework in a day, plus a day for the algebra. The BOINC users who wanted the 14e siever (and programmed their preferences for it) will be pleased.[/QUOTE]

Too much overhead. It would actually take a few days anyway. Many users queue a day or two of work, so their queues would need to clear before even starting on this one.

Edit: It looks like sieving on the cluster here would take about 2 days. I'll reserve the second half of the range, 35M-60M.

jrk 2010-02-14 00:15

[QUOTE=jrk;205529]I will try as well; reserving -np 48060,60000.[/QUOTE]

After 20.5 hours I found 41841 polys but the best was only E 3.064e-12.

I've stopped it now. Serge's poly is in the range that I was expecting for a good poly (~3.15e-12 to 3.5e-12) based on the available data. So it is good enough.

jrk 2010-02-14 01:17

[QUOTE=Batalov;205584]I suggest 14e, -a from 10M to 60M, with somewhat increased lims, here is the revised version:[/QUOTE]

That range of Q seems to be too large. My trial suggests that Q=10M to 40M would be sufficient with the parameters you gave: 29bit lp, 25bit sp and siever 14e.

[code]Warning: lowering FB_bound to 9999999.
total yield: 1516, q=10001009 (0.07294 sec/rel)
Warning: lowering FB_bound to 14999999.
total yield: 1676, q=15001001 (0.07745 sec/rel)
Warning: lowering FB_bound to 19999999.
total yield: 2258, q=20001001 (0.07231 sec/rel)
Warning: lowering FB_bound to 24999999.
total yield: 2107, q=25001029 (0.07948 sec/rel)
Warning: lowering FB_bound to 29999999.
total yield: 1711, q=30001003 (0.08549 sec/rel)
total yield: 1770, q=35001013 (0.08701 sec/rel)
total yield: 1753, q=40001021 (0.08816 sec/rel)
[/code]

If frmky is committed to doing Q=35M to 60M already then perhaps only another 5M range of the lower end would finish it off.

Batalov 2010-02-14 03:00

Yes, these are exactly the numbers that I got too, averaging (conservatively) 1.7 relations per q. I expect 25% duplications and I targeted 60M unique relations (for a nice matrix). Factor in some drop-offs.

Greg will produce most relations and it is natural for him to do algebra, so it is for him to say when to stop. Something like 46M unique will be a minimum, so 10-40M range might have been barely enough (if duplication is lower than 25% which I doubt for 14e), but it is more overhead to spread with a tight estimate, then filter and say "sieve some more, guys". (Tight sieving also leads to [URL="http://mersenneforum.org/showthread.php?p=204759"]this[/URL].)
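Roughly, the arithmetic behind that target:

[CODE]# Back-of-envelope, using the test-sieve average above:
yield_per_q = 1.7                 # conservative relations per q
q_lo, q_hi  = 10e6, 60e6
raw    = yield_per_q * (q_hi - q_lo)
unique = raw * (1 - 0.25)         # assuming ~25% duplicates
print(f"{raw/1e6:.0f}M raw -> {unique/1e6:.0f}M unique")   # 85M raw -> 64M unique
[/CODE]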

jrk 2010-02-14 05:52

[QUOTE=Batalov;205602]Yes, these are exactly the numbers that I got too, averaging (conservatively) 1.7 relations per q. I expect 25% duplications and I targeted 60M unique relations (for a nice matrix). Factor in some drop-offs.

Greg will produce most relations and it is natural for him to do algebra, so it is for him to say when to stop. Something like 46M unique will be a minimum, so 10-40M range might have been barely enough (if duplication is lower than 25% which I doubt for 14e), but it is more overhead to spread with a tight estimate, then filter and say "sieve some more, guys". (Tight sieving also leads to [URL="http://mersenneforum.org/showthread.php?p=204759"]this[/URL].)[/QUOTE]
Hmm. 4788.2422 used 29 bit lp and siever 14e, but had a poly which produced *much* lower rel/Q than the one you've given for this number. It had a duplication rate of about 20% at the end of sieving.

I figured that since this poly has a much greater yield, when there are enough relations for a matrix the duplication rate will be somewhat less than 20% (but higher with over-sieving).

Would having 60M uniqs make a matrix which is enough smaller to recover from the extra sieving time?

The matrix for 4788.2422 was:
[QUOTE=bsquared;181375][CODE]matrix is 5525542 x 5525789 (1619.6 MB) with weight 419589043 (75.93/col)
sparse part has weight 369304539 (66.83/col)
[/CODE][/QUOTE]

schickel 2010-02-14 06:06

[QUOTE=Andi47;205580]108@43e6, no factor.[/QUOTE]Chalk up another 350 @ 43M with no factor.

schickel 2010-02-14 07:26

[QUOTE=schickel;205608]Chalk up another 350 @ 43M with no factor.[/QUOTE]And another 100 from a system reporting in late.....

10metreh 2010-02-14 07:38

Greg, could you possibly do the whole thing? It would save making a thread, going through the uploading procedure etc.

frmky 2010-02-14 17:45

[QUOTE=10metreh;205611]Greg, could you possibly do the whole thing? It would save making a thread, going through the uploading procedure etc.[/QUOTE]

That actually might be best at this point. I can finish the sieving by tomorrow afternoon anyway. Is anyone else sieving a range right now?

frmky 2010-02-16 10:29

[QUOTE=frmky;205645]I can finish the sieving by tomorrow afternoon anyway. [/QUOTE]
Linear algebra has started. ETA is late Wednesday.

frmky 2010-02-18 03:56

Sieving q=35-60M and half of q=10-25M yielded 55438411 relations. After adding 122359 free relations, 47532016 unique relations resulted. This produced a matrix, minus the first 48 rows, of 4230898 x 4231123 (1234.6 MB) with weight 319433968 (75.50/col). The matrix took 39.7 hours to solve using a 2.4 GHz Core 2 Quad. The first dependency yielded the factors:

prp75 factor: 117560800474237185708625257547574758024996437963669838128426872711607597881
prp79 factor: 3278852140305248654079854796296493405290720207444801807245306680433973717782241

Definitely out of reach of ECM.
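For the record, the duplication rate came in well under the 25% that was budgeted:

[CODE]raw    = 55438411 + 122359      # sieved relations + free relations
unique = 47532016
print(f"{1 - unique/raw:.1%}")  # ~14.5% duplicates
[/CODE]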

Batalov 2010-02-18 03:59

Nice one!

Next, c158? 1e6 and p+-1 @ 3e8 are done.

frmky 2010-02-18 18:06

[QUOTE=Batalov;205971]
Next, c158? 1e6 and p+-1 @ 3e8 are done.[/QUOTE]

Using B1=110000000, B2=776278396540, polynomial Dickson(30), sigma=683354193
Step 1 took 633428ms
Step 2 took 181107ms
********** Factor found in step 2: 17335794135816327772963766917142503900385631736928141
Found probable prime factor of 53 digits: 17335794135816327772963766917142503900385631736928141
Composite cofactor 1189490960686935297257272800772356983024280201154200068853987897043760977639076215370405965982202953034487 has 106 digits

GNFS is running on the C106.
prp50 factor: 20274140516912084803282588212968042860978240596053
prp56 factor: 58670352003069986061451666975983758254072741101616497179

jrk 2010-02-18 21:03

I did 340 curves @ B1=11000000, B2=35133391030 on the line 2526 c128.

starting GPU poly search using -np 1,30000

frmky 2010-02-18 22:16

[QUOTE=jrk;206008]I did 340 curves @ B1=11000000, B2=35133391030 on the line 2526 c128.

starting GPU poly search using -np 1,30000[/QUOTE]

Sieving is already well underway. I'll have the factors by the end of the day.

frmky 2010-02-19 02:58

[CODE]prp51 factor: 388506058643583355437271607661833307213566452269017
prp77 factor: 32036182249699730203082985730853613751522637113555017710745413833553416908691
[/CODE]

jrk 2010-02-19 03:55

Now a c171

[code]193180597261434437130723427223452983749001196443861486431050101233095959046579376056784397572419692971642911818089261513816572456073452501892312801518858596361529181158803[/code]

Andi47 2010-02-19 05:57

[QUOTE=jrk;206046]Now a c171

[code]193180597261434437130723427223452983749001196443861486431050101233095959046579376056784397572419692971642911818089261513816572456073452501892312801518858596361529181158803[/code][/QUOTE]


This will need *lots* of ECM. My guess is somewhat more than t55, i.e. ~17700 @ 110e6 and a couple of curves at 260e6?

Raman 2010-02-19 07:18

If this 171 digit number had been prime, the aliquot sequence would have acquired
the downdriver now, right?

@frmky (Greg): How many systems did you make use of to find that p53 factor with ECM, crack that c106 within half an hour, and crack that c128 within 5 hours?

Only 13 more iterations remain for sequence 314718 to reach (hit) 9000 iterations!

frmky 2010-02-19 07:40

[QUOTE=Raman;206050]@frmky (Greg): How many systems did you make use of to find that p53 factor with ECM, crack that c106 within half an hour, and crack that c128 within 5 hours?[/QUOTE]
Eight 2.4GHz Core 2 Quads, totaling 32 cores.

10metreh 2010-02-19 07:55

[QUOTE=Raman;206050]If this 171 digit number had been prime, the aliquot sequence would have acquired
the downdriver now, right?[/QUOTE]

No, the prime would have to be of the form 4n+1. This c171 is of the form 4n+3.

Assuming the c171 splits into two factors, there is a 1/2 chance that it will split into primes of the form 8n+1 and 8n+3, losing the 2^3, and a 1/2 chance that it will split into primes of the form 8n+5 and 8n+7, keeping the 2^3. The good news is that it can't pick up a 3, but that wouldn't really matter if it got 2^4 * 31...
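This is easy to check directly (the c171 is quoted a few posts up):

[code]n = 193180597261434437130723427223452983749001196443861486431050101233095959046579376056784397572419692971642911818089261513816572456073452501892312801518858596361529181158803
print(n % 4, n % 8)  # 3 3: a 4n+1 prime is impossible here, and since n = 3 (mod 8)
                     # the two prime factors must be {1,3} or {5,7} mod 8, as above
[/code]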

Andi47 2010-02-19 08:01

[QUOTE=frmky;206052]Eight 2.4GHz Core 2 Quads, totaling 32 cores.[/QUOTE]

I guess you also distributed the poly search between these cores? Do you use a script for doing this?

Edit:

[QUOTE=10metreh;206054]No, the prime would have to be of the form 4n+1. This c171 is of the form 4n+3.

Assuming the c171 splits into two factors, there is a 1/2 chance that it will split into primes of the form 8n+1 and 8n+3, losing the 2^3, and a 1/2 chance that it will split into primes of the form 8n+5 and 8n+7, keeping the 2^3. The good news is that it can't pick up a 3, but that wouldn't really matter if it got 2^4 * 31...[/QUOTE]

Don't jinx it! *knocking on wood*

btw: is there any chance to get the downdriver if the 2³ is lost?

frmky 2010-02-19 08:18

[QUOTE=Andi47;206055]I guess you also distributed the poly search between these cores? Do you use a script for doing this?
[/QUOTE]
In this case, no. But the ggnfs distribution includes scripts (search_a5) to distribute the poly search if you use pol5. To distribute msieve polsel, I just run it in different directories and concatenate the .p files.

Joshua2 2010-02-19 08:24

I would think CPU time is better spent doing ECM, since GPUs are faster at polys. I will do P-1 if no one else is.

Andi47 2010-02-19 08:38

[QUOTE=Joshua2;206058]I will do P-1 if no one else is.[/QUOTE]

To which bounds?

Can you please save stage 1 (best done with the [COLOR="Blue"]-chkpnt filename[/COLOR] option) and post it, just in case we want to increase the bounds later, so that we don't need to re-do stage 1?


Edit2: 59@11e6, no factor

fivemack 2010-02-19 11:04

I'm running 2000@11e7 over the weekend. (raman: 60 hours Fri morning - Mon morning / 15 CPU-minutes per curve * 8 CPUs)

Joshua2 2010-02-19 16:39

[QUOTE=Andi47;206060]To which bounds? Can you please save stage 1 (best done with the [COLOR="Blue"]-chkpnt filename[/COLOR] option) and post it[/QUOTE]
OK, doing that to 1e10 for now I guess. I'll also do a run with P+1 to 1e9.

Joshua2 2010-02-19 18:23

That factors database should let you edit or delete stuff if you make a mistake entering data. I didn't actually do the ECM work posted, lol.

frmky 2010-02-19 20:11

[QUOTE=fivemack;206063]I'm running 2000@11e7 over the weekend. (raman: 60 hours Fri morning - Mon morning / 15 CPU-minutes per curve * 8 CPUs)[/QUOTE]

I've run 3000@11e7 so far on 54 cores. No factors yet.

Joshua2 2010-02-19 22:09

1 Attachment(s)
file psave: P+1 resume file at my GMP's max B1 bound, saved for archival purposes; running B2 now
file pp1: P+1 completed to 1e9 with the default B2
file pm1: P-1 completed to 1e10; crashed during B2, [U]someone else[/U] can try

Running another P+1 at 4e9 and 1000 ECM curves at 11e7.

Andi47 2010-02-20 09:18

I will do p-1 stage 2.

Andi47 2010-02-20 10:32

[QUOTE=Andi47;206152]I will do p-1 stage 2.[/QUOTE]

Done to B2=1e15, no factor.

Andi47 2010-02-21 01:49

[QUOTE=Andi47;206159]Done to B2=1e15, no factor.[/QUOTE]

Extended this to B2=1e16, no factor.

Joshua2 2010-02-21 07:51

1 Attachment(s)
At what point do you extend B2 vs B1? I always just use the default B2 and extend B1. Anyone care to run B2 on this P+1? We can probably call this the last one - I did one other with the same bounds and one with a quarter of them.
Also, the other file (psave) from the last upload needs B2 run too. For some reason my desktop and laptop both crash doing B2. My laptop probably doesn't have enough memory, but my exe must be bad or something; anyway, ECM works fine so I'm just doing that.

Andi47 2010-02-21 09:05

[QUOTE=Joshua2;206240]At what point do you extend B2 vs B1? I always just use the default B2 and extend B1. Anyone care to run B2 on this P+1? We can probably call this the last one - I did one other with the same bounds and one with a quarter of them.
Also, the other file (psave) from the last upload needs B2 run too. For some reason my desktop and laptop both crash doing B2. My laptop probably doesn't have enough memory, but my exe must be bad or something; anyway, ECM works fine so I'm just doing that.[/QUOTE]

For running B2 to 1e15 (with your savefile which was at B1=1e10), I did:

[CODE]./ecm -nn -pm1 -maxmem 1700 -resume pm1 <alq4788.2527.txt 1e10 1e15 >>alq4788.2527.out[/CODE]

where alq4788.2527.txt contains the c171.

Note: the "-maxmem 1700" options makes ecm to chose the parameters in a way that it does not use more than 1700 MB RAM. (without this option, it would be faster, but would require *much* more RAM, which my laptop doesn't have.)

Note that I set B1 to 1e10, which is exactly the same as in the savefile: ecm will read the savefile, see that B1 is already at 1e10, and continue with B2 - the output looks like this:

[code]
GMP-ECM 6.2.3 [powered by GMP 4.3.1] [P-1]
Resuming P-1 residue
Input number is 193180597261434437130723427223452983749001196443861486431050101233095959046579376056784397572419692971642911818089261513816572456073452501892312801518858596361529181158803 (171 digits)
Using B1=10000000000-10000000000, B2=1043119049080936, polynomial x^1
Step 1 took 0ms
Step 2 took 2085694ms
[/code]

To extend B2, for example from 1e15 to 1e16, one can do:

[CODE]./ecm -nn -pm1 -maxmem 1700 -resume pm1 <alq4788.2527.txt 1e10 [B]1e15-1e16[/B] >>alq4788.2527.out[/CODE]

jrk 2010-02-21 23:52

Is it too early to start organizing a GPU poly search for the c171?

I will reserve -np 120060,144000

Andi47 2010-02-22 05:19

[QUOTE=jrk;206299]Is it too early to start organizing a GPU poly search for the c171?

I will reserve -np 120060,144000[/QUOTE]

How much ECM has been done so far?

frmky 2010-02-22 05:42

I've run 9400 curves at B1=11e7 so far.

Batalov 2010-02-22 05:58

That [I]is[/I] a lot, about ~t51 already.
So it's not too early.

[strike]I'll take msieve -np 24001,30000 chunk. No, probably won't have time.[/strike]
A 2.1e-13 poly would be nice.
(For another c171, I have a 1.7e-13 poly from 1-48000 and am not satisfied.)

frmky 2010-02-22 06:08

[QUOTE=jrk;206299]
I will reserve -np 120060,144000[/QUOTE]
I've started 1 to 120000 on 4 GPUs.

Andi47 2010-02-22 06:15

[QUOTE=Batalov;206316]That [I]is[/I] a lot, about ~t51 already.
So it's not too early.
[/QUOTE]

Is my guess about right, that we need approx. t55 or t56?

Batalov 2010-02-22 06:38

It's 125 CPU* days to do a t55.
And 750 CPU days to do gnfs.
It all depends on how much one believes in luck, available resources etc...
If the ECM CPU time were free (PS3), why not even run a t60, I'd say.
____
*some CPU
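As a rough cross-check using figures from this thread (Andi47's ~17700-curve guess for a t55 at 11e7, and fivemack's 15 CPU-minutes per curve; different CPUs, hence the ballpark):

[CODE]curves, mins_per_curve = 17700, 15           # both figures quoted earlier in this thread
print(curves * mins_per_curve / (60 * 24))   # ~184 CPU-days, the same order as above
[/CODE]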

jrk 2010-02-22 06:47

[QUOTE=Batalov;206316]A 2.1e-13 poly would be nice.[/QUOTE]
Or would it?

[code]
$ grep norm msieve.dat.p |sort -g -k7 -r|head
# norm 1.035357e-16 alpha -6.929733 e 2.847e-13
# norm 9.877354e-17 alpha -6.825118 e 2.760e-13
# norm 9.784672e-17 alpha -6.845675 e 2.750e-13
# norm 9.654567e-17 alpha -6.966033 e 2.721e-13
# norm 9.629041e-17 alpha -6.892626 e 2.719e-13
# norm 9.545591e-17 alpha -8.033764 e 2.686e-13
# norm 9.254898e-17 alpha -6.656010 e 2.655e-13
# norm 9.102714e-17 alpha -6.975543 e 2.621e-13
# norm 9.166923e-17 alpha -8.127207 e 2.615e-13
# norm 9.038645e-17 alpha -6.796628 e 2.607e-13
[/code]

Batalov 2010-02-22 07:01

Nice flair. I guess you can safely call me a man of low standards. :smile:

schickel 2010-02-22 08:36

[QUOTE=jrk;206299]Is it too early to start organizing a GPU poly search for the c171?[/QUOTE]What's it going to take to do the post-processing?

Batalov 2010-02-22 09:08

It may fit in 4GB of memory, barely, if the oversieving is right (a gnfs-174 needed about 5GB).
6GB+ will be a safer bet. In any case, a 64-bit OS.
Expect a 10-12M-a-side matrix and 10 days of BL, probably less on an i7.
Note: during BL the computer will be challenging for daily use.
Low standards will help to cope.

schickel 2010-02-22 09:39

1 Attachment(s)
[QUOTE=Batalov;206333]It may fit in 4GB of memory, barely, if the oversieving is right (a gnfs-174 needed about 5GB).
6GB+ will be a safer bet. In any case, a 64-bit OS.
Expect a 10-12M-a-side matrix and 10 days of BL, probably less on an i7.
Note: during BL the computer will be challenging for daily use.
Low standards will help to cope.[/QUOTE]Well, seeing as how I don't have an i7, I will offer my system up only if no one else steps forward with something more capable.....

As long as I can step the priority down on msieve, it should be fine, I don't play many of the latest FPS games.

[SIZE="1"]PS. Ignore the low WEI, that's because I'm running the stock MB graphics, not a super high octane, cutting edge GPU...[/SIZE]

fivemack 2010-02-22 10:33

2000@11e7 produced no factor, unsurprisingly.

I have computation coming out of my ears, but since my Internet download speed has dropped to 40kbytes/second it would probably take me as long to download the relations as to run the matrix :(

schickel 2010-02-22 11:00

[QUOTE=fivemack;206338]2000@11e7 produced no factor, unsurprisingly.

I have computation coming out of my ears, but since my Internet download speed has dropped to 40kbytes/second it would probably take me as long to download the relations as to run the matrix :([/QUOTE]We could always collect the relations and Express Mail them to you....hmmm, what do you suppose the customs value of umpteen million relations should be declared as?

Andi47 2010-02-22 11:39

[QUOTE=fivemack;206338]...since my Internet download speed has dropped to 40kbytes/second... [/QUOTE]

Why this?


BTW: If I counted correctly, we should now have a total of 11400 curves at 11e6.

fivemack 2010-02-22 13:32

[QUOTE=schickel;206339]We could always collect the relations and Express Mail them to you....hmmm, what do you suppose the customs value of umpteen million relations should be declared as?[/QUOTE]

40 kbytes/sec = 3.4GB/day; it's probably a 31-bit problem, so 250M relations at 50 bytes of gzip each would be a four-day download, or post three DVDs. Not sure the i7 is enough faster than somebody else's Phenom to make this sensible, but if you think it's worthwhile I'm prepared to do it.

Be a bit careful with customs values (i.e. post from within the EU), since the post office charges me £13.50 for the privilege of paying the customs fee, however small that fee turns out to be.
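The arithmetic, for anyone checking:

[code]rate = 40e3 * 86400   # 40 kbytes/sec -> ~3.46 GB/day
size = 250e6 * 50     # 250M relations at ~50 gzipped bytes each
print(size / rate)    # ~3.6 days
[/code]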

fivemack 2010-02-22 13:33

[QUOTE=Andi47;206340]Why this?[/QUOTE]
No idea why the connection is dodgy. Three-year-old router out of magic smoke? Weasels nesting in the connection-box? Convincing BT to diagnose such things seems impossible.

[QUOTE=Andi47;206340]BTW: If I counted correctly, we should now have a total of 11400 curves at 11e6.[/QUOTE]

Mine at least are at 11e7, 110e6, as I think are frmky's.

Andi47 2010-02-22 14:41

[QUOTE=fivemack;206351]
Mine at least are at 11e7, 110e6, as I think are frmky's.[/QUOTE]

ooops - I meant to type 11e7, but the 6 is next to the 7 on the keyboard...

frmky 2010-02-22 17:46

[QUOTE=Andi47;206340]
BTW: If I counted correctly, we should now have a total of 11400 curves at 11e6.[/QUOTE]

I completed another 1100 curves overnight, so make that 12500 @ 11e7 so far.

Joshua2 2010-02-23 06:13

I'm mostly done with my 1000 curves.

jrk 2010-02-23 20:11

I've stopped the poly search. A bit of testing among the few best scorers reveals that this is my best poly:

[code]n: 193180597261434437130723427223452983749001196443861486431050101233095959046579376056784397572419692971642911818089261513816572456073452501892312801518858596361529181158803
# norm 1.035357e-16 alpha -6.929733 e 2.847e-13
skew: 42972295.61
c0: -53546854924607693903542081910433567264000
c1: 522292118959486321900977916817558660
c2: 37609110301045736156878149024
c3: -940271047170212912673
c4: -17420632831592
c5: 122160
Y0: -1095990843863198982888397501015033
Y1: 2645241704855349311
rlim: 67108863
alim: 67108863
lpbr: 30
lpba: 30
mfbr: 60
mfba: 60
rlambda: 2.6
alambda: 2.6[/code]

The sieving range is Q=20M to 120M with siever 15e. This gives about 25% over-sieving for a nice matrix.

[QUOTE=fivemack;206349]it's probably a 31-bit problem [/QUOTE]
Looks like 30-bit has an edge over 31-bit in the tests. Plus, if you are doing the post-processing, using 30-bit would save you a couple of days of downloading over your crippled line.

I also compared using small prime bounds of 2^26-1 vs 80M and found that 2^26-1 is slightly better overall while using less memory (220MB vs 262MB). So 2^26-1 seems to be a good fit.

Here are the numbers. Siever 15e was used for all.

30-bit large primes, small primes up to 80M:
[code]Warning: lowering FB_bound to 19999999.
total yield: 1267, q=20001001 (0.34316 sec/rel)
Warning: lowering FB_bound to 24999999.
total yield: 1374, q=25001029 (0.31809 sec/rel)
Warning: lowering FB_bound to 29999999.
total yield: 1090, q=30001003 (0.34557 sec/rel)
Warning: lowering FB_bound to 34999999.
total yield: 1030, q=35001013 (0.33344 sec/rel)
Warning: lowering FB_bound to 39999999.
total yield: 1624, q=40001021 (0.32260 sec/rel)
Warning: lowering FB_bound to 44999999.
total yield: 1620, q=45001001 (0.31757 sec/rel)
Warning: lowering FB_bound to 49999999.
total yield: 1375, q=50001037 (0.32487 sec/rel)
Warning: lowering FB_bound to 54999999.
total yield: 1114, q=55001003 (0.34378 sec/rel)
Warning: lowering FB_bound to 59999999.
total yield: 1005, q=60001021 (0.33864 sec/rel)
Warning: lowering FB_bound to 64999999.
total yield: 1570, q=65001019 (0.31476 sec/rel)
Warning: lowering FB_bound to 69999999.
total yield: 1105, q=70001011 (0.31744 sec/rel)
Warning: lowering FB_bound to 74999999.
total yield: 1433, q=75001001 (0.31854 sec/rel)
total yield: 1389, q=80001017 (0.33279 sec/rel)
total yield: 1469, q=85001011 (0.34975 sec/rel)
total yield: 1324, q=90001003 (0.34190 sec/rel)
total yield: 1249, q=95001001 (0.35729 sec/rel)
total yield: 1064, q=100001029 (0.37909 sec/rel)
total yield: 1094, q=105001007 (0.36817 sec/rel)
total yield: 1544, q=110001053 (0.36108 sec/rel)
total yield: 1472, q=115001009 (0.37909 sec/rel)
[/code]

31-bit large primes, small primes up to 80M:
[code]Warning: lowering FB_bound to 19999999.
total yield: 2365, q=20001001 (0.18644 sec/rel)
Warning: lowering FB_bound to 24999999.
total yield: 2602, q=25001029 (0.16832 sec/rel)
Warning: lowering FB_bound to 29999999.
total yield: 2085, q=30001003 (0.18134 sec/rel)
Warning: lowering FB_bound to 34999999.
total yield: 2005, q=35001013 (0.17222 sec/rel)
Warning: lowering FB_bound to 39999999.
total yield: 3122, q=40001021 (0.17061 sec/rel)
Warning: lowering FB_bound to 44999999.
total yield: 3159, q=45001001 (0.16656 sec/rel)
Warning: lowering FB_bound to 49999999.
total yield: 2676, q=50001037 (0.16730 sec/rel)
Warning: lowering FB_bound to 54999999.
total yield: 2230, q=55001003 (0.17565 sec/rel)
Warning: lowering FB_bound to 59999999.
total yield: 2039, q=60001021 (0.17098 sec/rel)
Warning: lowering FB_bound to 64999999.
total yield: 3043, q=65001019 (0.16521 sec/rel)
Warning: lowering FB_bound to 69999999.
total yield: 2134, q=70001011 (0.16562 sec/rel)
Warning: lowering FB_bound to 74999999.
total yield: 2748, q=75001001 (0.16663 sec/rel)
total yield: 2739, q=80001017 (0.17257 sec/rel)
total yield: 2987, q=85001011 (0.17572 sec/rel)
total yield: 2615, q=90001003 (0.17743 sec/rel)
total yield: 2485, q=95001001 (0.18384 sec/rel)
total yield: 2179, q=100001029 (0.18538 sec/rel)
total yield: 2142, q=105001007 (0.19237 sec/rel)
total yield: 3008, q=110001053 (0.18956 sec/rel)
total yield: 2982, q=115001009 (0.19126 sec/rel)
[/code]

30-bit large primes, small primes up to 2^26-1:
[code]Warning: lowering FB_bound to 19999999.
total yield: 1248, q=20001001 (0.33797 sec/rel)
Warning: lowering FB_bound to 24999999.
total yield: 1350, q=25001029 (0.31359 sec/rel)
Warning: lowering FB_bound to 29999999.
total yield: 1075, q=30001003 (0.33308 sec/rel)
Warning: lowering FB_bound to 34999999.
total yield: 1021, q=35001013 (0.32009 sec/rel)
Warning: lowering FB_bound to 39999999.
total yield: 1602, q=40001021 (0.31564 sec/rel)
Warning: lowering FB_bound to 44999999.
total yield: 1598, q=45001001 (0.31309 sec/rel)
Warning: lowering FB_bound to 49999999.
total yield: 1355, q=50001037 (0.32089 sec/rel)
Warning: lowering FB_bound to 54999999.
total yield: 1103, q=55001003 (0.33741 sec/rel)
Warning: lowering FB_bound to 59999999.
total yield: 990, q=60001021 (0.33479 sec/rel)
Warning: lowering FB_bound to 64999999.
total yield: 1538, q=65001019 (0.31241 sec/rel)
total yield: 1078, q=70001011 (0.31391 sec/rel)
total yield: 1362, q=75001001 (0.32081 sec/rel)
total yield: 1291, q=80001017 (0.33874 sec/rel)
total yield: 1375, q=85001011 (0.35340 sec/rel)
total yield: 1242, q=90001003 (0.34574 sec/rel)
total yield: 1172, q=95001001 (0.36020 sec/rel)
total yield: 990, q=100001029 (0.38524 sec/rel)
total yield: 1017, q=105001007 (0.37437 sec/rel)
total yield: 1441, q=110001053 (0.36577 sec/rel)
total yield: 1375, q=115001009 (0.38341 sec/rel)
[/code]
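If anyone wants to redo this comparison from their own test-sieve logs, here is a throwaway way to get the weighted-average sec/rel (the regex just matches the "total yield" lines above):

[code]import re

def avg_sec_per_rel(log_text):
    """Weighted average of sec/rel over all 'total yield' lines."""
    pairs = re.findall(r"total yield: (\d+), q=\d+ \(([\d.]+) sec/rel\)", log_text)
    rels = sum(int(y) for y, _ in pairs)
    secs = sum(int(y) * float(s) for y, s in pairs)
    return secs / rels

# paste one of the three blocks above into log_text and compare the settings
[/code]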

fivemack 2010-02-24 12:48

Unfortunately my line is now crippled to the point of not transmitting data at all, so somebody else had better do the post-processing.

bsquared 2010-02-24 14:13

I can do a lot of sieving on nights and weekends, but I can't commit a box for the length of time the LA would take...

FactorEyes 2010-02-24 16:44

Linear Algebra, huh?
 
I'll do the post-processing.

How difficult is it to set up a home ftp server on a Linux box? I don't have a static IP, so I don't think that will be a good idea.

Ideas?

fivemack 2010-02-24 16:49

Use something like dyndns.org to get yourself a fixed DNS name (sudo apt-get install ddclient gives you a tool that updates the DNS entry when your IP address changes), and sudo apt-get install ftpd to get yourself an FTP server.

FTP uploads will be cut off if the IP address changes in mid-stream, but otherwise shouldn't be too much trouble.

Andi47 2010-02-24 17:37

[QUOTE=fivemack;206547]Unfortunately my line is now crippled to the point of not transmitting data at all, so somebody else had better do the post-processing.[/QUOTE]

Is it possible that you have exceeded your data-volume - or what your provider defines as "fair use"? In this case, some providers tend to "cripple" the line at the end of the month (and open it again at the beginning of the next month).

If this is not the case, you should contact your provider - maybe some technical component is broken?

Edit: Have you tried to switch the router and/or the connection box off and on again? Are all cables connected firmly?

frmky 2010-02-27 06:58

It's been a busy week, so I've just let ECM and poly selection run. I've run a total of 19650 curves at B1=11e7, so more than t55 has been completed.

Also, I've run a total of 20 GPU-days of poly selection. None are better than that found by jrk. The ten best were

[code]# norm 8.063916e-17 alpha -7.714490 e 2.416e-13
# norm 8.113333e-17 alpha -7.161245 e 2.417e-13
# norm 8.093939e-17 alpha -7.516656 e 2.418e-13
# norm 8.261360e-17 alpha -7.263537 e 2.432e-13
# norm 8.588117e-17 alpha -7.317368 e 2.450e-13
# norm 8.443225e-17 alpha -7.306960 e 2.487e-13
# norm 8.448700e-17 alpha -7.757378 e 2.491e-13
# norm 8.479696e-17 alpha -7.386898 e 2.493e-13
# norm 8.581372e-17 alpha -7.343609 e 2.506e-13
# norm 9.680355e-17 alpha -7.572856 e 2.651e-13
[/code]

The best alphas were

[code]# norm 6.844995e-17 alpha -8.609902 e 2.152e-13
# norm 6.659630e-17 alpha -8.490889 e 2.114e-13
# norm 6.794071e-17 alpha -8.442538 e 2.139e-13
# norm 7.482012e-17 alpha -8.405095 e 2.277e-13
# norm 6.880737e-17 alpha -8.397232 e 2.159e-13
# norm 7.153241e-17 alpha -8.374708 e 2.196e-13
# norm 7.307364e-17 alpha -8.363635 e 2.266e-13
# norm 6.955358e-17 alpha -8.347888 e 2.150e-13
# norm 6.390393e-17 alpha -8.335203 e 2.064e-13
# norm 6.457695e-17 alpha -8.331621 e 2.078e-13
[/code]

10metreh 2010-02-27 07:30

I would start the team sieve thread now if it weren't for the still unresolved issue of relation transport. How are things getting on?

