[QUOTE=Dubslow;323082]I'll run some GPU stage 1. Do you know how many hits you got in those five hours?
Edit: Perhaps the ggnfs parameters are off? Edit2: YAFU (it's probably msieve's data) suggests around 50 CPU-hours of poly select, so unless you were running 8 cores or something, that's probably a woefully bad poly. How many core hours total did you run?[/QUOTE] Thanks! It had suggested 54.59 CPU-hours, but I had factmsieve.py set with a wall time of 5 hours with dual core, so it was definitely cut prematurely. The question would be the difference in working with a poor poly that was obtained at 5 hours vs. a better poly that took 30 hours. At a total of 50 hours estimated for sieving, I'm not sure if a better poly would have made it faster overall. There are 4985 polynomials in the test.dat.p file. Over the last five hours I've accumulated >2.5M relations across all my machines with this poly, so I should be able to reach the 22M requested in roughly 45 more hours. Unless restarting with a better poly would make up the difference, it probably isn't worth restarting for this size composite. But, for something larger, I should try to be more particular in my poly choice, I suppose. Edit: Don't tie up your systems for this. I'll get along with what I have here. I was just wondering... 
[QUOTE=EdH;323083]
Edit: Don't tie up your systems for this. I'll get along with what I have here. I was just wondering...[/QUOTE] I got 2.4 million stage 1 hits in 45 minutes, size opt running now, will have (probably a significantly better) poly sometime in the next hour or two. :smile: 
[QUOTE=Dubslow;323084]I got 2.4 million stage 1 hits in 45 minutes, size opt running now, will have (probably a significantly better) poly sometime in the next hour or two. :smile:[/QUOTE]
I'm not sure it would benefit me to start over at this point, but I would like to compare the results between our two polys on a couple of my machines... 
Holy [i]bejeesus[/i], root opt takes forever. Out of the 2.4M hits I got 860K size opt polys (took about an hour on cpu vs 45 minutes for stage 1 on gpu). I sorted them and am running root opt on the best 86K of them; 2+ hours in and I have 12.7K root opted polys, so who knows how long it'll take. I'll post the best in the morning, but in the meantime, there's probably no way it'll be advantageous to switch.

You're processing too many size optimized polys, only 12% of them are useful :wink:
And using the out-of-tree MPI patch makes root opt scale near-linearly with the number of cores. 
[QUOTE=debrouxl;323091]You're processing too many size optimized polys, only 12% of them are useful :wink:
And using the out-of-tree MPI patch makes root opt scale near-linearly with the number of cores.[/QUOTE] Ah crap, now I know for next time. Those 10% took 19 hours :razz: [code]polynomial selection complete R0: 1275730550910569740948790967 R1: 68465311488443 A0: 602386029120087908552165345576724184 A1: 511077501414883308580053270504 A2: 163159912488603352570952 A3: 74272054904151261 A4: 11596854544 A5: 1260 skew 4253756.85, size 2.156e-13, alpha -7.219, combined = 2.577e-11 rroots = 5 elapsed time 19:32:47[/code] I'm not exactly sure how to get the second- or third-place poly easily, since it's more than one line per poly. Edit: And it's not even as good as the first one :razz: :razz: (Edit2: Since the first one sieves so poorly, maybe this one sieves better despite the scores) 
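For what it's worth, picking out the runners-up can be scripted: each poly spans several lines, but every record in the .dat.p-style output begins with a "# norm ... e ... rroots" comment line, so you can split on that line and sort by the Murphy E score. A rough sketch (the record layout is assumed from the poly snippets posted in this thread; check it against your msieve version's actual output before relying on it):

```python
import re

def best_polys(dat_p_text, top=3):
    """Split .dat.p-style output into multi-line poly records and rank
    them by the Murphy E score on each record's '# norm ... e ...' line."""
    records, current = [], []
    for line in dat_p_text.splitlines():
        # each record is assumed to start with a '# norm ...' comment line
        if line.startswith("# norm") and current:
            records.append(current)
            current = []
        current.append(line)
    if current:
        records.append(current)

    def murphy_e(rec):
        m = re.search(r"\be\s+([0-9.]+e-?\d+)", rec[0])
        return float(m.group(1)) if m else 0.0

    return sorted(records, key=murphy_e, reverse=True)[:top]
```

Something like `best_polys(open("test.dat.p").read(), top=3)` would then give the top three records.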
I will have to test your poly for a comparison on one of my machines, but I'm going to wait until the current operation is completed.
Thanks for the extra work you did. 
LA says a few more hours...
I compared the two polynomials by running them side by side on one of my dual core machines:

My poly - Total time: 9:38:09 - Total yield: 463002
Dubslow's poly - Total time: 9:46:48 - Total yield: 529197

Would there have been any appreciable effect on the LA stage between the different sets of relations? 
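A quick back-of-envelope on those numbers (nothing assumed beyond the posted times and yields):

```python
def rel_rate(hms, relations):
    """Relations per second for a sieving run of length h:mm:ss."""
    h, m, s = (int(x) for x in hms.split(":"))
    return relations / (h * 3600 + m * 60 + s)

mine = rel_rate("9:38:09", 463002)     # EdH's poly
theirs = rel_rate("9:46:48", 529197)   # Dubslow's poly
advantage = theirs / mine - 1.0        # fractional yield advantage per unit time
```

That works out to roughly a 13% higher yield per unit time for the second poly.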
OK, looks like that one finished and now I'm running a c106 that should be done soon...

A few more lines have been added, but it now has a c161. If it doesn't break down some by ECM, I don't think I can do a c161 with my ancient hardware.:sad:

ECM came through and knocked it down a bit further, adding a line and settling on a c138 currently. I'll run with it unless someone else wants it...

I've gotten it to a c164. That's too large for my attention span.:smile:
I am running some ECM, but at that size, I'll leave it for the "big guns" if it survives much longer... 
In case no one has noticed, it is now a c162, which I am not pursuing at this time...

I'll run some ecm curves on it.
yoyo 
[QUOTE=yoyo;325159]I'll run some ecm curves on it.
yoyo[/QUOTE] Just to let you know I've been haphazardly running ECM on it. I have well over 4000@43e6 completed... 
[QUOTE=EdH;325165]Just to let you know I've been haphazardly running ECM on it. I have well over 4000@43e6 completed...[/QUOTE]
Me too ;) and an additional 9000 curves @ 110e6. 
[STRIKE]c144[/STRIKE]
[STRIKE]c170[/STRIKE] c165 ? 1e6 done, 3e6 done, 11e6 ... > please run 6550 x 43e6 (have 1000 x 43e6) 
400 and counting @43e6...

I've added +1000 x 43e6. Time for big iron ECMing. yoyo?

100 curves @ 43e6 done

+1600@43e6 (total 2000 here), and counting...

I injected 18000 curves @110M; they should be visible on the stats page as soon as the first curves return.
yoyo 
I've started polynomial selection.

+1000@43e6 (total 3000 here), and counting...

+1000@43e6 (total 4000 here), and counting...

+1000@43e6 (total 5000 here), and counting...
That's over 7000 showing in the thread, so far, but I'll let my ECM keep running until jrk posts a polynomial. Then I can see if my relations scripts are now correct. 
Yoyo has done over 12500 curves @ 110,000,000
[URL="http://factorization.ath.cx/sequences.php?se=1&eff=2&aq=4788&action=last20&fr=0&to=100"]C165_4788_i5144[/URL] (B1 = 110000000, 12515 / 18000 curves) 
The C129 at iteration 5151 is ready for NFS.

Well, I'd like to whittle down my ignorance a little more, if possible...
I have the following polynomial for the current c129: [code] # norm 2.687898e-12 alpha -7.317079 e 1.065e-10 rroots 3 skew: 473579.03 c0: 27988440920005007752835908970188 c1: 36194624374531631102222182 c2: 2390917326532765272777 c3: 11161597941910483 c4: 11576524265 c5: 6600 Y0: 9803487406814689417760025 Y1: 20246307534533 [/code]Where (how) do I get (create) the other values? I thought I had a link to a thread that explained the procedure, but alas, I can't find it.:sad: Nor can I find one via the search terms I tried... All my prior work was done by letting jrk, Aliqueit and factmsieve.py do the "thinking" part. But I really need to learn some more of the actual steps, if possible. A steer toward the right thread would be appreciated. Then I'll crack the c129 as my homework... 
I coded some stuff into factMsieve.pl years ago, so I use it (or else I would have forgotten);
you put the initial poly (add n: of course, and [B]type: gnfs[/B]) into a file, say, t.poly, run factMsieve.pl t, and kill it soon. Then copy the auto-generated parameters from the .job file (below, [B]in bold[/B]) to the poly to get something like this: [CODE][B]n: 597647818789865479070164232377390690874759339620592468154446562002337248831301293863065913519522870900965788830826787436105784791[/B] [B]type: gnfs[/B] # norm 2.687898e-12 alpha -7.317079 e 1.065e-10 rroots 3 skew: 473579.03 c0: 27988440920005007752835908970188 c1: 36194624374531631102222182 c2: 2390917326532765272777 c3: 11161597941910483 c4: 11576524265 c5: 6600 Y0: 9803487406814689417760025 Y1: 20246307534533 [B]rlim: 6800000[/B] [B]alim: 6800000[/B] [B]lpbr: 28[/B] [B]lpba: 28[/B] [B]mfbr: 55[/B] [B]mfba: 55[/B] [B]rlambda: 2.5[/B] [B]alambda: 2.5[/B][/CODE] (one last touch-up is: because the t.job file will have one of the lims lowered, reset both a/rlim values to the larger of the two). This is the simplest recipe without any variations. If the number is larger than this one, you can spend some time changing each parameter, and use some canned tricks like the 3LP trick, or the lopsided lpb (e.g. 29 and 30) for some quartics, etc etc etc. If the number is small, any time spent refining will hardly be compensated; simply shoot the poly off to a few computers with separately chosen ranges (controlled by [I]f[/I] and [I]c[/I]), then, as usual, collect all relations together (optionally, [I]remdups[/I]), and run [I]msieve -nc[/I]. 
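The copy-and-paste recipe above can also be automated. A minimal sketch (`make_job` and its argument names are illustrative, not part of factMsieve.pl; the parameter values are the ones copied out of the auto-generated .job file):

```python
def make_job(n, poly_lines, rlim, alim, lpbr, lpba, mfbr, mfba, rlambda, alambda):
    """Assemble a ggnfs-style .job file from a poly plus the parameters
    copied out of the auto-generated t.job (helper names are illustrative)."""
    lim = max(rlim, alim)  # reset both lims to the larger of the two
    lines = [
        f"n: {n}",
        "type: gnfs",
        *poly_lines,
        f"rlim: {lim}",
        f"alim: {lim}",
        f"lpbr: {lpbr}",
        f"lpba: {lpba}",
        f"mfbr: {mfbr}",
        f"mfba: {mfba}",
        f"rlambda: {rlambda}",
        f"alambda: {alambda}",
    ]
    return "\n".join(lines) + "\n"
```

Feeding it the poly lines (skew, c0..c5, Y0, Y1) and the copied bounds reproduces the bold-merged file shown above.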
Excellent! Thank you.
All worked as explained and my machines are sieving along rather well. I actually caught your unedited post and went off and achieved the same results as shown, before returning to find the rest. I think I'll work with this level for now before delving in any deeper. I have some scripts running all the machines for either ECM or sieving at the moment, but I'm looking at building a script to run the poly selection on all of them at once, too. Thanks again... 
[QUOTE=Batalov;330144]
(one last touch up is: because t.job file will have one of the lims lowered, reset both a/rlim values to the larger of the two).[/QUOTE] This feature could now be removed from factmsieve.pl as all the recent sievers will automatically lower the bound. This also means a little more efficiency from multicore runs as well as not needing to fix job files for standalone use. 
And the reward:
A c149, On the very next line... Thanks for the help in making this work manually! 
A c173!:max:
Now what? 
[QUOTE=EdH;330335]A c173!:max:
Now what?[/QUOTE] A great deal of ECM. 
And after ECM
Every once in a while, one must say "There are plenty more fish in the sea". 2^4*31 driver is not going away this time. :two cents: 
[QUOTE=Batalov;330340]And after ECM
Every once in a while, one must say "There are plenty more fish in the sea". 2^4*31 driver is not going away this time. :two cents:[/QUOTE] It might be time. There are lots of large sequences without a driver currently. It might also be fun to tackle some of the sequences with large powers of 2 (some need non-trivial factorization). 
The 2^4 * 31 driver in 829332 broke at iteration 2873.

1000@43e6

[QUOTE=EdH;330414]1000@43e6[/QUOTE]I'll crank my picofarm up to 43e6, but my throughput is a little low.

[QUOTE=Batalov;330340]And after ECM
Every once in a while, one must say "There are plenty more fish in the sea". 2^4*31 driver is not going away this time. :two cents:[/QUOTE]:down: Well, it was a good run while it lasted. I looked back at my emails and found that I emailed Christophe back in February 2009 and received his reply on 2/26, so amazingly, we've worked on 4788 for exactly 3 years, starting at i2335. Maybe we could get some curves queued up through yoyo@home and then [URL="http://www.mersenneforum.org/showthread.php?t=17690"]ryanp[/URL] could do it if he gets bored with OPN factorizations....or we could move our team record up a little. We've gone up through a c172 in the last couple of years. 
+1000@43e6 (total: 2000)

+1000@43e6 (total: 3000)

P-1 @ 1e9, nothing

+1000@43e6 (total: 4000)

I'll wait for more curves @43e6 until I load the big gun ;)

+1000@43e6 (total: 5000)

+1000@43e6 (total: 6000)
On the positive side, this number could eventually provide a record ECM factor.... 
+1000@43e6 (total: 7000)
How many @43e6 should be done? 
100 curves complete at 11e7.
[QUOTE=EdH;330875]+1000@43e6 (total: 7000) How many @43e6 should be done?[/QUOTE] Ed, The readme that comes with ECM says 7500 for the default B2, so I think now is a good time to move up to 11e7. What level of ECM is called for in this case? Is the rule of thumb 30%, which would be 52 digits or so? A t55 is just under 18k curves at 11e7; would 52 digits be roughly one-third of a t55? Curtis 
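The t-level bookkeeping is simple once you have the curve counts. A sketch using the approximate numbers quoted in this thread (~7500 curves @ 43e6 for t50, ~18k @ 11e7 for t55; treat the constants as approximate and verify against your own ecm build's README):

```python
# Approximate curves to complete a "t-level" at GMP-ECM's default B2,
# using the counts discussed in this thread (verify against your README).
T_CURVES = {
    43e6: 7500,    # t50 (50-digit level)
    110e6: 18000,  # t55 (55-digit level)
}

def fraction_of_level(b1, curves_done):
    """Fraction of the t-level at this B1 completed by curves_done."""
    return curves_done / T_CURVES[b1]
```

So, for example, 6000 curves at 11e7 would be about one-third of a t55.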
With yoyo@home on the job, a full t55 or even a 2t55 seems reasonable to me (assuming yoyo concurs :smile:)

Yup, with yoyo@home on board, we can easily reach at least t55.
Even if I once had the RSALS clients crunch on a GNFS 172 with 14e (using a poor polynomial, at that) as a stopgap measure during the fantastic power surge which occurred before we shut down, GNFS 172-173 is significantly beyond the 14e/15e efficiency cutoff, so it is probably unreasonable for me to queue this C173 to NFS@Home's 14e... 
[QUOTE=debrouxl;330904]Yup, with yoyo@home on board, we can easily reach at least t55.
Even if I once had the RSALS clients crunch on a GNFS 172 with 14e (using a poor polynomial, at that) as a stopgap measure during the fantastic power surge which occurred before we shut down, GNFS 172-173 is significantly beyond the 14e/15e efficiency cutoff, so it is probably unreasonable for me to queue this C173 to NFS@Home's 14e...[/QUOTE] Even if you could, there's nowhere this sequence will go but up. Based on discussion, I'm relatively certain we're all pretty bored of it. The only reason to factor this number would be to attempt a large(ish) team sieve - whose purpose would be defeated if NFS@Home ran the job. :smile: 
+1213@43e6 (total: 8213)
I'll move up to 11e7 
9500@11e7 done so far, and counting:
[url]http://www.rechenkraft.net/yoyo//y_status_ecm.php[/url] 
[QUOTE=EdH;330931]+1213@43e6 (total: 8213)
I'll move up to 11e7[/QUOTE]Plus 300 more from me. 
With the power of yoyo@home, is it even worth anything for me to run 11e7 curves? I'm only a little over 250 right now and one of my machines even refuses to do stage 2:
[code] > ___________________________________________________________________ >  Running ecm.py, a Python driver for distributing GMP-ECM work  >  on a single machine. It is Copyright, 2012, David Cleaver and  >  is a conversion of factmsieve.py that is Copyright, 2010, Brian  >  Gladman. Version 0.10 (Python 2.6 or later) 30th Sep 2012.  > _________________________________________________________________ > Number(s) to factor: > 17285154910805941577069464828335617544658066950627644021728302169526833018711670895092479561808160256160945139573800969912234390238908363042669550995167201537635764747005337 (173 digits) >============================================================================= > Working on number: 172851549108059415...537635764747005337 (173 digits) > Currently working on: job0137.txt > Starting 1 instance of GMP-ECM... > ./ecm -c 100 110000000 < job0137.txt > job0137_t00.txt GMP-ECM 6.4.3 [configured with GMP 5.0.2] [ECM] Using B1=110000000, B2=776278396540, polynomial Dickson(30), 1 thread Done 0/100; avg s/curve: stg1 4432s, stg2 n/a s; runtime: 4772s > *** Error: unexpected return value: 9 [/code] 
[QUOTE=EdH;331011]With the power of yoyo@home, is it even worth anything for me to run 11e7 curves? [/QUOTE]
Not really. Even if you would like to find a record factor, the conditional probability of success of any curve is dropping toward zero as I type... (Conditional on the fact that yoyo@home has already run 10^4 curves at this moment and didn't find a factor.) 
Will be 18000 curves @ 11e7 enough or is there more needed?
yoyo 
You could always ECM at a t80 level and go for a record ECM factorization :razz:

[QUOTE=Dubslow;331108]You could always ECM at a t80 level and go for a record ECM factorization :razz:[/QUOTE]
The trouble for me is that as I increase B1, more and more machines fall by the wayside. At 43e7 more than half tell me to get lost. And, I just can't see my attention span lasting for that long anyway... 
[QUOTE=yoyo;331087]Will be 18000 curves @ 11e7 enough or is there more needed?
yoyo[/QUOTE] If my estimation is correct (i.e. if I have no mistakes in my tables), ~2*t55 = 2*18k = 36k curves @11e7 are needed for a c173. Edit: I just see that you have queued 42k@26e7. This might be a bit of trying for a record factorization, but why not... 
No factor found so far with 40k curves @26e7.
yoyo 
Nothing
[CODE]GMP-ECM 7.0-dev [configured with GMP 5.1.1, --enable-asm-redc, --enable-assert, --enable-openmp] [P-1]
Input number is 17285154910805941577069464828335617544658066950627644021728302169526833018711670895092479561808160256160945139573800969912234390238908363042669550995167201537635764747005337 (173 digits) Using B1=100000000000, B2=484004602750364712, polynomial x^1, x0=4104192314 Step 1 took 24940729ms Step 2 took 40735246ms[/CODE] 
c173
I devoted several days on a GTX 460 card and got this poly. I think it is about average since Msieve states:
[CODE]expecting poly E from 2.27e-13 to > 2.61e-13[/CODE][CODE]# aq4788:5154 n: 17285154910805941577069464828335617544658066950627644021728302169526833018711670895092479561808160256160945139573800969912234390238908363042669550995167201537635764747005337 skew: 54453878.81 # skew 54453878.81, size 7.891e-17, alpha -8.253, combined = 2.438e-13 rroots = 5 Y0: 1984194768321507434058802020573021 Y1: 606427190915971723 c0: 17111183463229567250042987271571318169467520 c1: 5954767784533309908917895048435184628 c2: 48133715828031681374056493046 c3: 5548160975795079236918 c4: 18780066010531 c5: 562020[/CODE] 
[QUOTE=RichD;337945]I devoted several days on a GTX 460 card and got this poly. I think it is about average since Msieve states:
[CODE]expecting poly E from 2.27e-13 to > 2.61e-13[/CODE] [/QUOTE]So, have you come up with a plan on this? I could probably throw some cycles at it in a week or two, if you want to make it a team project. (I'm nearing the end of the sieving on a c168; maybe another week or so to go.) 
No plan. I know interest has waned recently. If people have a few cycles here or there that would be great. I'm not looking to coordinate (if it turns into) a sieving project.

[QUOTE=RichD;338013]No plan. I know interest has waned recently. If people have a few cycles here or there that would be great. I'm not looking to coordinate (if it turns into) a sieving project.[/QUOTE]Alright; if it turns out that there is any interest, I'll coördinate and do the post-processing.

Roughly how long a project is this? I think I'm willing to contribute a thread-month on an i7 (Windows 64). Is that 5% of what's needed, or 1%?
I've never done a group sieve, but I have done a few OPN numbers up to 140 digits or so. 
I'd say this was about a 2.5 CPU-year job (32-bit large primes, 15e, aim for 300 million relations). So one thread-month would be around 3-4%.
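One thread-month against a 2.5 CPU-year estimate works out as plain unit conversion (assuming a "CPU-year" here counts a single thread running for a year):

```python
def thread_month_share(total_cpu_years, thread_months=1.0):
    """Fraction of a job covered by thread_months of work, given a total
    effort estimate in CPU-years (one 'CPU' = one thread here)."""
    return thread_months / (total_cpu_years * 12.0)
```

`thread_month_share(2.5)` gives about 0.033, i.e. one thread-month is 3-4% of a 2.5 CPU-year job.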

I'll pledge 10, 12, 16M rels (initially). I just completed a 4-month SNFS project so a little burnt out on long-winded runs.

[QUOTE=RichD;337945]I devoted several days on a GTX 460 card and got this poly. I think it is about average since Msieve states:[/QUOTE]
[QUOTE=VBCurtis;338489]Roughly how long a project is this? I think I'm willing to contribute a thread-month on an i7 (Windows 64). Is that 5% of what's needed, or 1%? I've never done a group sieve, but I have done a few OPN numbers up to 140 digits or so.[/QUOTE] [QUOTE=fivemack;338495]I'd say this was about a 2.5 CPU-year job (32-bit large primes, 15e, aim for 300 million relations). So one thread-month would be around 3-4%.[/QUOTE]Are 32-bit primes required? The biggest team job I see on a quick search is this one that took ~1.5 months for a c172 with these parameters:[code]rlim: 40000000 alim: 40000000 lpbr: 30 lpba: 31 mfbr: 60 mfba: 90 rlambda: 2.6 alambda: 3.6[/code][QUOTE=RichD;341064]I'll pledge 10, 12, 16M rels (initially). I just completed a 4-month SNFS project so a little burnt out on long-winded runs.[/QUOTE]Unfortunately, this is probably going to be long-winded.... 
[QUOTE=schickel;341084]Unfortunately, this is probably going to be long winded....[/QUOTE]
That's what I meant when I didn't want to take ownership. I'll jump in and out with rels if others are also willing. Any other pledgers? 
It may be more productive to switch to some other large orphan sequence: 1560? 5250? 1992? or 7920? or other?

[QUOTE=Batalov;341197]It may be more productive to switch to some other large orphan sequence: 1560? 5250? 1992? or 7920? or other?[/QUOTE]
I think there is enough interest, and outside resources, to pick up one or two of these sequences. Does anyone know where they currently stand? Some ECM, more ECM, ready for GNFS? I will try to get the ball rolling once a sequence and needs are defined. 
The next step would be to find the status from Christophe Clavier and associates ([url]http://mersenneforum.org/showthread.php?t=11625[/url]). For one reason or another, he keeps his webpages out of sync with the most current state and he doesn't update them in factordb; you can only find the state by contacting him. Do not assume the [URL="http://christophe.clavier.free.fr/Aliquot/site/clavier_table.html"]empties[/URL] are unreserved; all of the empty entries by default are worked on by Christophe and could be much more advanced than shown.

Email sent to Christophe.

c154 @ i5155
2^4 * 3 * 31 * ... * c154
Thanks to [B]ryanp[/B] for providing the factors to the previous term. [CODE]initial square root is modulo 488687 sqrtTime: 2621 prp71 factor: 40931608477444520015384185592248168673427599963797426933989565874867503 prp102 factor: 422293566116048814823195598447158307752534698287857194151418405416542984156255510642680991466196324279 elapsed time 00:43:42[/CODE] 
ECM at B1=3e6 cut off an easy p26, leaving a C129.
EDIT: and the same ECM run ended up factoring the C129 as p37 * p93. EDIT2: at index 5156, the C168 is p29 * p139. 
[QUOTE=Batalov;345917]The next step would be to find the status from Christophe Clavier and associates ([URL]http://mersenneforum.org/showthread.php?t=11625[/URL]). For one reason or another, he keeps his webpages out of sync with the most current state and he doesn't update them in factordb; you can only find the state by contacting him. Do not assume [URL="http://christophe.clavier.free.fr/Aliquot/site/clavier_table.html"]empties[/URL] as unreserved; all of the empty entries by default are worked on by Christophe and could be much more advanced than shown.[/QUOTE]
Quite some time ago, someone (maybe Minigeek or schickel, or maybe it was you) got a current set from Christophe and I helped update factordb from that set. Perhaps it is time to try again? 
c130
>3500@11e6...

c130
~2000@43e6

[QUOTE=EdH;354183]~2000@43e6[/QUOTE]
It would be much faster to GNFS that c130 now, though. 
c130
I have several hundred @ 11e6 & 43e6 so we are well past the ECM cutoff.
Here is a good poly if you want to feed it to factMsieve. Don't forget to sieve on the -a side. [CODE]# expecting poly E from 8.06e-11 to > 9.27e-11 n: 2944319382357566177282558916865152338055975722560079892678869927700683205126419811544474585258973645778787423896577924588175555759 Y0: 7807975460998051621425545 Y1: 124987740105437 c0: 136230532601956965336407837538 c1: 13853835188349128321236775 c2: 239815727022408052905 c3: 13870607491614795 c4: 31214402812 c5: 101460 # skew 93540.14, size 1.760e-12, alpha -6.069, combined = 8.432e-11 rroots = 5 type: gnfs skew: 93540.14[/CODE] 
c130
225@11e7
I fired up gnfs, but I may not be available at the necessary time(s) to get the factors posted promptly. Thanks for the poly... 
Bummer! I estimated 16M relations needed, but 17M failed, so I'll be a bit later with my factors since it will be a while before I get back to the computers...
Edit: 18M has succeeded  later... 
EdH,
I'll be happy to run this parallel to yours. If you get back and generate the factors before I do, it's not a big deal on my end. 
c130
Sorry for the delay:
[code] prp60 factor: 138437674900218458488404172614552623270398011787245730849649 prp71 factor: 21268194402136119375711807584813908276275042421089194658034556064207391 [/code] BTW, anybody else that wants to "beat me to the punch anytime," feel free...:smile: 
Hahaha, I thought your delay would be longer.

c166
>4000@11e6
~2000@43e6 
c166
P-1 @ 6e9 - nothing
1200 @ 11e6  nothing > 900 @ 43e6 
c166
>2000@11e7
~700@2e8 
c166
> 3120 @ 43e6
> 510 @ 11e7 > 20 @ 26e7 
c166
current totals:
2186@11e7 2956@2e8 
c166
3766 @ 43e6  nothing
> 2060 @ 11e6 > 130 @ 26e7 
c166
current total:
4027@2e8 
I think it's now time for starting NFS poly search alongside ECM. I could queue the C166 to NFS@Home.
