[QUOTE=bdodson;352500]OK, never mind the polynomial search for 11,323+ C221:
[/QUOTE] Damn! Here's the difference between ECM factoring and ECM pretesting. :smile: How about 3,697+? 
If I might ask a slightly stupid question, does "t55" refer to running the number of curves suggested by GMP-ECM to find a factor of a given digit length? If so, does "7t55" refer to doing this 7 times? Thanks!

[QUOTE=wombatman;352506]If I might ask a slightly stupid question, does "t55" refer to running the number of curves suggested by GMP-ECM to find a factor of a given digit length? If so, does "7t55" refer to doing this 7 times? Thanks![/QUOTE]That's how I use the term.
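For anyone following along, here is a small sketch of how "t-level" bookkeeping works. The B1 bounds and curve counts below are only illustrative defaults (GMP-ECM's recommended counts vary by version; check your own build's `-v` output), but the arithmetic of "NtXX" is just a multiplier on the curve count:

```python
# Rough sketch of "t-level" accounting for ECM pretesting.
# The (B1, curves) pairs below are illustrative approximations of
# GMP-ECM's recommended counts -- treat them as assumptions, not
# authoritative values.
T_LEVELS = {
    50: (43_000_000, 7_553),    # ~curves for a full t50
    55: (110_000_000, 17_884),  # ~curves for a full t55
    60: (260_000_000, 42_057),  # ~curves for a full t60
}

def curves_for(t_level: int, multiplier: float = 1.0):
    """Return (B1, curve count) for e.g. '7t55' -> curves_for(55, 7)."""
    b1, curves = T_LEVELS[t_level]
    return b1, round(curves * multiplier)

b1, curves = curves_for(55, 7)  # "7t55": seven times the t55 curve count
```

So "t55" is one full pass of the suggested curves at the 55-digit level, and "7t55" is seven times that many curves at the same B1.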

[QUOTE=fivemack;352279]From trial sieving it looks like firejuggler's [URL]http://www.mersenneforum.org/showpost.php?p=351690&postcount=169[/URL] (c5=107173200) is the best of the C178 so far, with wombatman's [URL]http://www.mersenneforum.org/showpost.php?p=351781&postcount=176[/URL] in second place.
I think we're ready to think about sieving that one; may I propose 4788.5154 [code] 17285154910805941577069464828335617544658066950627644021728302169526833018711670895092479561808160256160945139573800969912234390238908363042669550995167201537635764747005337 [/code]as a next polynomialselection target? It's received an enormous amount of ECM from yoyo@home  I suspect twice as many cycles as will be required for the sieving.[/QUOTE] Probably a little late, but I found another decent poly for the C178. I will move on to the C175 now. [CODE]R0: 11708475821133910619711215298497638 R1: 4937254133390405269 A0: 802914511260679636038420993144191107255975 A1: 374850392085650921707293481711138160 A2: 191485557559647650556457788274 A3: 14221948389865248927299 A4: 1608862500793698 A5: 20456280 skew 11155110.94, size 1.875e017, alpha 7.644, combined = 1.030e013 rroots = 5[/CODE] 
[QUOTE=frmky;352501]Damn! Here's the difference between ECM factoring and ECM pretesting. :smile:
How about 3,697+?[/QUOTE] 2,1135+ (a horrible quartic otherwise) or 10,311+? or 2,1165+? 
Got a brand new one for the C173!
[CODE]polynomial selection complete expected 2.27e-013 to > 2.61e-013 R0: 699386827836290316401216452717318 R1: 1398262035546010309 A0: 451266459216860959346617888115255632281 A1: 3173082985742773850470626537077816 A2: 3328539108882443923865205317 A3: 5471519931550117209376 A4: 723081393099036 A5: 103296600 skew 2163940.32, size 7.070e-017, alpha -7.033, combined = 2.305e-013 rroots = 5[/CODE] 
[QUOTE=wombatman;352506]If I might ask a slightly stupid question, does "t55" refer to running the number of curves suggested by GMP-ECM to find a factor of a given digit length? If so, does "7t55" refer to doing this 7 times? Thanks![/QUOTE]
See [url="http://mersenneforum.org/showthread.php?t=18544"]here[/url] for more discussion on this. 
Excellent! Thanks Jason.

My best one so far for the C173 (I accidentally wrote "C175" in post #198):
[CODE]# norm 7.475154e-017 alpha -6.759886 e 2.120e-013 rroots 3 skew: 3918982.93 c0: 26415080140414525245706062556030815859200 c1: 6063026331364550036015896404157016 c2: 6283703731829456693443614374 c3: 339518266628882151732 c4: 346537303051239 c5: 51680664 Y0: 803283856170732121581861413596261 Y1: 928056297158488283[/CODE] 
For c173 @ 4788:5154
The one to better is posted [URL="http://mersenneforum.org/showpost.php?p=337945&postcount=2102"]here[/URL]. 
For convenience of viewing:
[CODE]# aq4788:5154 n: 17285154910805941577069464828335617544658066950627644021728302169526833018711670895092479561808160256160945139573800969912234390238908363042669550995167201537635764747005337 skew: 54453878.81 # skew 54453878.81, size 7.891e-17, alpha -8.253, combined = 2.438e-13 rroots = 5 Y0: 1984194768321507434058802020573021 Y1: 606427190915971723 c0: 17111183463229567250042987271571318169467520 c1: 5954767784533309908917895048435184628 c2: 48133715828031681374056493046 c3: 5548160975795079236918 c4: 18780066010531 c5: 562020[/CODE] 
[QUOTE=wombatman;353027]For convenience of viewing:[/QUOTE]
Thanks [B]wombatman[/B], good idea. If everybody thinks this is a good enough poly, I will start trial sieving for the other parameters shortly. In the meantime, a c162 may be appearing in the near future. :smile: 
I'm certainly no expert, but the 2.305 score I got was easily the highest I ever saw on it. If anybody with more experience thinks it's worth running a little more time on, I can certainly help out with that.

c162
Aliquot Sequence 3408:i1361 is nearing the required ECM curves before switching to GNFS. The last term is found [URL="http://factordb.com/sequences.php?se=1&aq=3408&action=last&fr=0&to=100"]here[/URL] and the composite number is [URL="http://factordb.com/index.php?id=1100000000632503312"]here[/URL].
A nice poly for this c162 would be appreciated. :smile: 
I'll start at 20M. Doc is updated: [url]https://docs.google.com/spreadsheet/ccc?key=0AlFp2DvBLxsUdEtUMFE0bmk3blRQQlJhS2NkcEF2b0E&usp=sharing[/url]

Back from my trip, will get something on the C162, starting at 50M

I'll start tomorrow at 34M.

Here's an initial result from running overnight:
[CODE] 1.08e-012 to > 1.24e-012 polynomial selection complete R0: 5719863062929676337189207185251 R1: 22460096987790239 A0: 128933402178292703902590771718898699400 A1: 421073405536632376084222761455394 A2: 155681802860956564421675381 A3: 433348276277450790316 A4: 61576771533214 A5: 20120100 skew 1984142.36, size 7.914e-016, alpha -7.172, combined = 9.684e-013 rroots = 5[/CODE] 
And here's one that's a bit better and is just shy of the expected range:
[CODE]polynomial selection complete R0: 5715761102217972192913663491093 R1: 38563701940403731 A0: 94976223680906906931434983897990109960 A1: 149364296132506421129552303169504 A2: 134587764449354039562561949 A3: 152747267989982653284 A4: 28665952412356 A5: 20192400 skew 1730343.65, size 8.632e-016, alpha -7.223, combined = 1.031e-012 rroots = 3[/CODE] 
my best so far
[code] polynomial selection complete R0: 4748246644759646013658813337922 R1: 42336859290668713 A0: 619873703155160995956575412212158768955 A1: 1965078609812576922972172806423063 A2: 619922077604339484846192349 A3: 669168682052209515627 A4: 105354140251766 A5: 51037800 skew 2417935.83, size 8.119e-016, alpha -7.716, combined = 9.668e-013 rroots = 5 [/code] 
I got a 9.53 from my first 24h, so this composite is easy to find 9.5-9.7 polys for. We've often had one hit (perhaps a couple of polys) score 10% or more higher than our next-best, so I think 1.05-1.10 should be considered required on this one.
I think 15-20 GPU days is about right for poly select, so I plan to run this one until Thursday morning, or until we produce something > 1.10, whichever comes first. Curtis 
Fed up with not finding anything worthy of mention in the 50M range (I poked and hopped around like a running rabbit), I went to the other extreme: the low, low leading coefficient.
[code] R0: 10864032340092650460535029544092 R1: 32032727516451869 A0: 2275201192054343365143953099059240923817 A1: 3451968411914398551120853008480077 A2: 976773581797898710041933604 A3: 93678023521421115437 A4: 15242981230347 A5: 813960 skew 7832394.35, size 8.233e-016, alpha -7.195, combined = 9.917e-013 rroots = 5 [/code] the skew might be a bit high but it is below 10M 
c162
Thanks for all the work everyone has put in. I ran a test case the old-fashioned way on a GTX 460 just to see what results would pop out.
[CODE]./msieve -g 0 -np 9000000,9010000[/CODE] [CODE]polynomial selection complete R0: 6717830615961110570319523537559 R1: 177535805879560157 A0: 3445835305282719729631217061289121337480 A1: 1606749357594607864448001376070484 A2: 2416637138668796096247825314 A3: 157043862458620231949 A4: 201589771126864 A5: 9003540 skew 3734135.90, size 7.198e-16, alpha -7.813, combined = 9.003e-13 rroots = 5 elapsed time 01:58:45[/CODE] It sounds like Curtis (and I have no idea what I am talking about in this thread) has the ideal situation figured out. Lionel will grab the best poly(s) and take it from here near the end of the week. 
Day 2 produced a 1.06 and a 1.02 for me.
Getting closer... The 1.06 has size norm 1.134e-15 and alpha -8.45. 
I have a 1.06 but with a skew in the 8M range
[code] polynomial selection complete R0: 10677947295976653556937379369121 R1: 13890409330094671 A0: 1882698967868741615834498112375619451856 A1: 5449085563849428225995372678979884 A2: 112831863490588729696928604 A3: 169853532031477141371 A4: 1931876922250 A5: 887400 skew 8474905.70, size 9.142e-016, alpha -7.493, combined = 1.062e-012 rroots = 5 [/code] 
Looks like we might have a really good one!
[CODE]polynomial selection complete R0: 5687267901887849117628057930336 R1: 50226349167486893 A0: 173005616341925019068425420358687999075 A1: 206061802204907426411915664451175 A2: 490930924636416636463430237 A3: 56469426027893147207 A4: 155360648044842 A5: 20703312 skew 1853378.17, size 1.010e-015, alpha -7.702, combined = 1.121e-012 rroots = 5[/CODE] 
I'll pick up a polynomial for the C162 tonight, the one in post #220 unless something better trickles in.

[code]
R0: 9958502607381088781671996613155 R1: 143221703738676211 A0: 1535697080059276964417309512945888134324 A1: 2485770470444143530176565977570027 A2: 212731110894034066222615065 A3: 137504378769271273383 A4: 6007639133392 A5: 1257732 skew 6402007.49, size 1.053e-015, alpha -7.666, combined = 1.164e-012 rroots = 5 [/code] not better than #220 but a worthy opponent, right? 
Unless the skew makes it way worse than mine, your score is slightly better. debrouxl, do you test sieve these? If so, could you report back which of the two does better?

[QUOTE=wombatman;353871]Unless the skew makes it way worse than mine, your score is slightly better. debrouxl, do you test sieve these? If so, could you report back which of the two does better?[/QUOTE]
The E-score includes effects of skew, in that higher scores usually sieve better (with the caveat that the score's prediction is ±5% or so; a 1.12 and a 1.16 should usually be test-sieved, though with NFS@home's firepower it may not be worth the trouble). The catch, as I understand it, with skew is that the skew is a measure of the ratio of the dimensions of the sieve region; a higher skew means the siever works in a narrower rectangle, possibly resulting in the need for more special-q. So, all else equal, we choose lower-skew polys in order to (probably) need fewer special-q, which makes for a lower chance of setbacks or having to exceed the special-q range that sieves well. Since we know higher A5 values produce lower skew, the logic is that we can avoid having to consider this tradeoff overall by just not searching low A5 values. However, it seems pretty common to find a nice poly in those lower values (for reasons I do not know enough to understand). Part of the reason I suggested we less-experienced folk do months of poly selection for the forum is to try to gain insight into these tradeoffs, and I write things like this in hopes an expert will correct me where I'm mistaken. Even if they do not test-sieve these two polys, I will consider doing so to see how it works and the results. 
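To make the "skew balances the coefficients" idea concrete: for a degree-5 polynomial, one crude estimate is s ≈ (|c0|/|c5|)^(1/5), which balances |c0| against |c5|·s^5. A sketch using the coefficients from the poly in post #220 above (msieve optimizes skew numerically across all coefficients, so this closed form is only a ballpark check, not the real computation):

```python
# Back-of-envelope skew estimate for a degree-5 GNFS polynomial.
# The skew s trades off the trailing coefficient c0 against the
# leading coefficient c5 scaled by s^5, so s ~ (|c0|/|c5|)^(1/5).
# This is only an approximation to msieve's numerically optimized skew.

def skew_estimate(c0: int, c5: int) -> float:
    return (abs(c0) / abs(c5)) ** 0.2

# c0 and c5 from the polynomial in post #220 above:
s = skew_estimate(173005616341925019068425420358687999075, 20703312)
# msieve reported skew 1853378.17; the estimate lands in the same ballpark.
```

The estimate coming out within a small factor of msieve's reported value shows why high-A5 (c5) polys naturally carry low skew, and vice versa.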
As someone who last took a math course (with Fourier transforms being the end-of-course material) approximately... 5 or 6 years ago, I appreciate your writing out what your reasoning is. I think I understand what you're saying, and I would also be grateful for a better-versed forum member to come in and provide additional info/corrections.
I'll look forward to seeing what your test-sieving shows. 
I test-sieved the polynomials from post #220 and post #222, and we have a clear winner :smile:
[code]# Post #220: n: 123185130483506137603191442064883489372927504206113226437834768431648754408159660500479519543123321318633171550119649429370954650787254654692369907789231588824311 skew: 1853378.17 c0: 173005616341925019068425420358687999075 c1: 206061802204907426411915664451175 c2: 490930924636416636463430237 c3: 56469426027893147207 c4: 155360648044842 c5: 20703312 Y0: 5687267901887849117628057930336 Y1: 50226349167486893 type: gnfs rlim: 67108863 alim: 67108863 lpbr: 30 lpba: 30 mfbr: 60 mfba: 60 rlambda: 2.6 alambda: 2.6 > Q0=33554431.5, QSTEP=100000. > makeJobFile(): q0=33554431.5, q1=33654431.5. > makeJobFile(): Adjusted to q0=33554431.5, q1=33654431.5. > Lattice sieving algebraic q-values from q=33554431.5 to 33654431. => "../gnfs-lasieve4I14e" -k -o spairs.out -v -n0 -a C162_3408_1361.job gnfs-lasieve4I14e (with asm64): L1_BITS=15, SVN $Revision: 412 $ FBsize 2062450+0 (deg 5), 3957808+0 (deg 1) total yield: 1573, q=33555283 (0.13668 sec/rel) ^C[/code] Polynomial from post #222: [code]# Post #222 n: 123185130483506137603191442064883489372927504206113226437834768431648754408159660500479519543123321318633171550119649429370954650787254654692369907789231588824311 skew: 6402007.49 c0: 1535697080059276964417309512945888134324 c1: 2485770470444143530176565977570027 c2: 212731110894034066222615065 c3: 137504378769271273383 c4: 6007639133392 c5: 1257732 Y0: 9958502607381088781671996613155 Y1: 143221703738676211 type: gnfs rlim: 67108863 alim: 67108863 lpbr: 30 lpba: 30 mfbr: 60 mfba: 60 rlambda: 2.6 alambda: 2.6 > Q0=33554431.5, QSTEP=100000. > makeJobFile(): q0=33554431.5, q1=33654431.5. > makeJobFile(): Adjusted to q0=33554431.5, q1=33654431.5. > Lattice sieving algebraic q-values from q=33554431.5 to 33654431. 
=> "../gnfs-lasieve4I14e" -k -o spairs.out -v -n0 -a C162_3408_1361.job gnfs-lasieve4I14e (with asm64): L1_BITS=15, SVN $Revision: 412 $ FBsize 2064657+0 (deg 5), 3957808+0 (deg 1) total yield: 1710, q=33555271 (0.12154 sec/rel) ^C[/code] The 5th degree coefficient of the better polynomial is more than an order of magnitude lower. 
Interesting that the lower C5 gives a better result!

It's all about the combined E-score. VBCurtis's post [URL="http://mersenneforum.org/showpost.php?p=353891&postcount=224"]#224[/URL] was very informative, at least to me. :smile:

[QUOTE=VBCurtis;353891]The Escore includes ....[/QUOTE]
+1 :goodposting: 
The test-sieve done here (in #226) shows that in this case, a poly with a score 4% better sieved ~13% better, at least at this one special-q. Debrouxl was kind enough to post his parameter list, allowing us to compare the polys across the typical expected range of special-q values (according to T Mack, from 1/3rd rlim to rlim).
I claimed ±5% for the E-score's accuracy; in this case, the 1.16 poly performed better than its score, while the 1.12 may have performed worse. Recall the E-score is an integral over the expected sieve region but our actual sieve region may not be the region used by the E-score (right?). If you head over to the Aliqueit forum, you'll find some team-sieve threads, for example [url]http://mersenneforum.org/showthread.php?t=18478[/url]. Those threads have explicit instructions for how to call the siever directly from the command line, without use of factmsieve or yafu. We interested parties should test-sieve 0.5k ranges (that's -c 500) with -f set anywhere from 22M to 67M. If we test at every 5M, we'll get a very detailed picture of the relative performance of these two polys. It's not that we need it for this one instance, but this is a terrific opportunity to learn to use the tools. If you try this, take note of the difference between production per special-q (the number of relations you get out of your -c 1000 range) and the production per second reported by lasieve. If my elementary grasp of skew is correct, the better poly will have a lower production per 500 range even while it's better per second. If you try it, post your selected -f starting spot, and the time per relation for each poly. 
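A dry-run sketch of that test plan: it just prints one siever invocation per 5M step from 22M to 67M. The binary name and the -k/-o/-v/-n0/-a arguments are copied from the logs in post #226, and the job-file name is a placeholder; swap print for an actual subprocess call once the job file exists.

```python
# Dry-run: build the lasieve command lines for test-sieving 500-q
# slices (-c 500) at -f points every 5M from 22M to 67M, as suggested
# above. Binary name, fixed flags, and job file are taken from the
# post #226 logs and should be adjusted to your setup.
cmds = [
    f"./gnfs-lasieve4I14e -k -o spairs.{f} -v -n0 -a "
    f"-f {f} -c 500 C162_3408_1361.job"
    for f in range(22_000_000, 67_000_001, 5_000_000)
]
for cmd in cmds:
    print(cmd)
```

Each run then reports a "total yield: N ... (X sec/rel)" line; collecting those ten pairs per polynomial is the whole experiment.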
I may just have to do this overnight...I'll post what I get some time tomorrow!

[QUOTE=VBCurtis;353938]Recall the E-score is an integral over the expected sieve region but our actual sieve region may not be the region used by the E-score (right?).
[/QUOTE] Yes. Further, the E score assumes the sieving region is a continuous block of points and not a lattice like it really is. Even worse, the E value assumes the sieving region is a rectangle that has the same area for all polynomials, and that the factor base bounds are always the same (and always fairly small). A real sieving run uses much larger factor base bounds for large problems, and only samples the sieving region looking for lattice points that are more likely to be smooth. You can make the E value more realistic, but then the E-value you get will not be comparable with that of other tools if you're changing parameters for each poly select job. 
My results from running the small test sieve regions overnight:
For Post 220: [CODE][CENTER] Warning: lowering FB_bound to 21999999. total yield: 1207, q=22000501 (0.20302 sec/rel) Warning: lowering FB_bound to 26999999. total yield: 1085, q=27000511 (0.21744 sec/rel) Warning: lowering FB_bound to 31999999. total yield: 957, q=32000513 (0.22304 sec/rel) Warning: lowering FB_bound to 36999999. total yield: 1171, q=37000501 (0.21331 sec/rel) Warning: lowering FB_bound to 41999999. total yield: 971, q=42000503 (0.21170 sec/rel) Warning: lowering FB_bound to 46999999. total yield: 1501, q=47000501 (0.22358 sec/rel) Warning: lowering FB_bound to 51999999. total yield: 657, q=52000517 (0.24030 sec/rel) Warning: lowering FB_bound to 56999999. total yield: 1188, q=57000511 (0.23651 sec/rel) Warning: lowering FB_bound to 61999999. total yield: 1170, q=62000503 (0.24623 sec/rel) Warning: lowering FB_bound to 66999999. total yield: 679, q=67000513 (0.27244 sec/rel)[/CENTER][/CODE] For Post 222: [CODE][CENTER] Warning: lowering FB_bound to 21999999. total yield: 1121, q=22000501 (0.18275 sec/rel) Warning: lowering FB_bound to 26999999. total yield: 1054, q=27000511 (0.18375 sec/rel) Warning: lowering FB_bound to 31999999. total yield: 991, q=32000513 (0.18279 sec/rel) Warning: lowering FB_bound to 36999999. total yield: 968, q=37000501 (0.19983 sec/rel) Warning: lowering FB_bound to 41999999. total yield: 1243, q=42000503 (0.18840 sec/rel) Warning: lowering FB_bound to 46999999. total yield: 1107, q=47000501 (0.20950 sec/rel) Warning: lowering FB_bound to 51999999. total yield: 874, q=52000517 (0.19853 sec/rel) Warning: lowering FB_bound to 56999999. total yield: 1107, q=57000511 (0.20067 sec/rel) Warning: lowering FB_bound to 61999999. total yield: 1455, q=62000503 (0.22372 sec/rel) Warning: lowering FB_bound to 66999999. total yield: 1014, q=67000513 (0.22200 sec/rel)[/CENTER][/CODE] This was using 32bit I14e siever (1 thread) on an AMD Phenom II X4. Post 222 is definitely better across the whole range. 
Very cool to understand how to do that now. 
Well, that means my grasp of skew is mistaken: the lower poly finds more relations during the series of trials than the upper poly (roughly 10,800 to 10,500).
Thanks for posting data! I think when I begin doing GNFS150 projects, I'll sample three specialq ranges. 
Ranges of 500 look a little small. You would get much better results with larger ranges.

Do you mean too small as in it does not provide an accurate representation of the region? If so, what would you recommend? I used 500 just to do a quick check. For actual test sieving, I would use a range of something like 5,000 or 10,000, I think.

To do a proper job of test sieving:
- pick your parameters and derive the number of relations X that those parameters would require to construct a matrix
- pick the expected special-Q range
- time how long it takes to sieve 1/1000 of the special-Q in that range
- compute X / (relations found) * (time needed to sieve)

This last item is the real figure of merit that we're trying to minimize. Of course that's a ton of tedious work, so only very large problems would benefit. Step 3 is necessary to catch the polynomials that start off fast but poop out as the special-q increase in size. 
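The figure of merit in the last step above reduces to a one-liner; a sketch with purely hypothetical numbers (the 100M-relation requirement and the sample results are made up for illustration):

```python
# frmky's figure of merit: if a sample of the special-Q range found
# `sample_rels` relations in `sample_time`, then gathering the
# `rels_required` relations the matrix needs takes roughly
# rels_required / sample_rels * sample_time, assuming the sampled
# rate holds across the whole range.
def estimated_sieve_time(rels_required: int, sample_rels: int,
                         sample_time: float) -> float:
    return rels_required / sample_rels * sample_time

# Hypothetical numbers: 100M relations needed; a 1/1000 sample of the
# special-Q range yielded 95,000 relations in 4.5 hours.
t = estimated_sieve_time(100_000_000, 95_000, 4.5)  # estimated hours
```

Comparing this single number between candidate polynomials folds yield and speed into one decision, which is exactly why neither "total yield" nor "sec/rel" alone settles the question.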
c166
Another Aliquot Sequence, [URL="http://factordb.com/sequences.php?se=1&aq=4788&action=last"]4788[/URL], is approaching NFS-ready state. A big [URL="http://factordb.com/index.php?id=1100000000633948129"]c166[/URL] remains in the way. Still plenty of ECM to do but this poly will also require a bit of time.
Any takers? 
As usual, i'll put my 560 on it.

I'll get on it as well.

Yes, of course. We three soldiers respond to any summons from aliqueit or red-named posters. One of these days, we'll among us develop a sense of when a poly is 'good enough' compared to expectations....
Curtis 
expecting poly E from 5.49e-013 to > 6.31e-013
nothing good yet, 4.8e-13 in the 5M range; I'll try now in the 15-16M range 
[URL="http://www.empowernetwork.com/BandFlea/files/2013/01/threeamigos.jpg"]Something like this, I imagine.[/URL]

[QUOTE=wombatman;354718][URL="http://www.empowernetwork.com/BandFlea/files/2013/01/threeamigos.jpg"]Something like this, I imagine.[/URL][/QUOTE]
I might be Steve Martin's height! Though I'm as unfunny as Chevy, given my students' reactions this week. 
4788:C166
Found a 5.38:
[code]# norm 3.534603e-016 alpha -7.217812 e 5.388e-013 rroots 5 skew: 2475815.23 c0: 406863124232228097770612251596630707808 c1: 1397008484147485063276690890434320 c2: 1959559479994289870016918286 c3: 223873981131455264775 c4: 309981426719886 c5: 34871760 Y0: 47273014953449373402503531152735 Y1: 1730317388101909[/code] 
Beats the best I've found so far (5.18). Also, I'm finding that running -np1 on the GPU and then running -nps and -npr separately is screaming fast.

Best so far is 5.14...
[code] R0: 69499481355460328998078506311622 R1: 465404033160944009 A0: 893054565437073468528453235950677733085 A1: 2511797614938907011950743089188765 A2: 2719192542874456010358349743 A3: 460941393242470744181 A4: 126820984786298 A5: 5077296 skew 4491515.30, size 2.719e-016, alpha -7.152, combined = 5.143e-013 rroots = 5 [/code] 
another one, pretty close to VBCurtis's
[code] R0: 33117504503117161374368743175645 R1: 289745110476613783 A0: 198597517620071762000877347570258564216 A1: 739534886795844444800384897768926 A2: 366246626185683843435072111 A3: 1046052875115274054676 A4: 300988932539270 A5: 206658060 skew 1317757.94, size 2.912e-016, alpha -6.998, combined = 5.374e-013 rroots = 5 [/code] 
Here's my best so far too:
[CODE]# norm 3.417502e-016 alpha -7.426297 e 5.252e-013 rroots 3 skew: 3383610.93 c0: 1333763129505261875407710058754044688061 c1: 3239404989985241933010889120382208 c2: 3066193050854242164741724486 c3: 90024775430248566173 c4: 291653453627960 c5: 20639700 Y0: 52500980393204885858075170563604 Y1: 630446021530596293[/CODE] 
C166 for RichD
Managed to get a 5.39:
[CODE]# norm 3.514670e-016 alpha -7.111533 e 5.394e-013 rroots 3 skew: 3488617.93 c0: 1532190310404755024435433175347737566139 c1: 3676253136046456933250836564000824 c2: 81541743881566127057867159 c3: 580564695976550014656 c4: 32579659023370 c5: 20850600 Y0: 52394338640184089981223380179140 Y1: 257044554902959283[/CODE] 
Well, we 3 found quite equivalent polys... now to get an expected score...

c166
Thanks for all the help. I believe [B]debrouxl[/B] will take the poly to NFS@Home when they are ready, as stated in this [URL="http://mersenneforum.org/showpost.php?p=354634&postcount=2134"]post[/URL].
I'm finishing up the last few curves in the next day or so. 
[url=http://www.mersenneforum.org/showpost.php?p=355256&postcount=920]Requesting a poly for C168_127_110[/url] to be sieved by NFS@Home if you guys are willing. This will be a record GNFS for the xyyxf project. Thanks in advance!

Definitely willing; I've got a little more time on the C166 for RichD first, but you'll be up next.

I can't get anything remotely close to my best score (5.37) for the 4788 sequence.
I will start looking at yours, swellman. expecting poly E from 4.27e-013 to > 4.91e-013 
Thank you all.

So... For the Aliquot C166, it looks like the polynomials with best E value are:
[url]http://www.mersenneforum.org/showpost.php?p=354887&postcount=245[/url] [url]http://www.mersenneforum.org/showpost.php?p=355071&postcount=248[/url] [url]http://www.mersenneforum.org/showpost.php?p=355154&postcount=250[/url] All three leading coefficients are high, though :unsure: Last time (post #226), the polynomial with a leading coefficient in the 1M range largely beat the polynomial of similar E value with a leading coefficient in the 20M range. 
Post #247 ([url]http://mersenneforum.org/showpost.php?p=354958&postcount=247[/url]) has a leading coefficient in the 5M range, but with a slightly lower score. Maybe that would be worth checking out?

Debrouxl, I'll try with a very low LC for a few hours, in the 1M range.
As for swellman's C168, I have [code] n=518759670509518390499884894142825232305789370205934770356684820953606669616234831388561087386018771667622991938328056602692240129084683654895676978808741395629768407883 R0: 105317384837345063824645696506111 R1: 76838901016397327 A0: 2927567441266009430694267586261766169520 A1: 25173374174114924141817443838204354 A2: 21627216241283611979321015885 A3: 4512694379714443247124 A4: 799777113123900 A5: 40037400 skew 4653959.25, size 1.392e-016, alpha -7.562, combined = 3.362e-013 rroots = 5 [/code] 
On the 4788:C166 front, I have one; the skew is slightly above 11M, but the score is *equivalent* to the 3 we offered
[code] R0: 78764596806719279335948607498706 R1: 704739813349024603 A0: 64097545582231836173154185834090974313893 A1: 57038273747787160306728671709353557 A2: 11922305817556816403635714269 A3: 867444673665392023597 A4: 78372426411442 A5: 2715720 skew 11681526.87, size 2.887e-016, alpha -8.163, combined = 5.314e-013 rroots = 3 [/code] 
I've started at 10M on the C168 for swellman since firejuggler's taking care of the low coefficient search for the C166.

[QUOTE=debrouxl;355332]So... For the Aliquot C166, it looks like the polynomials with best E value are:
[url]http://www.mersenneforum.org/showpost.php?p=354887&postcount=245[/url] [url]http://www.mersenneforum.org/showpost.php?p=355071&postcount=248[/url] [url]http://www.mersenneforum.org/showpost.php?p=355154&postcount=250[/url] All three leading coefficients are high, though :unsure: Last time (post #226), the polynomial with a leading coefficient in the 1M range largely beat the polynomial of similar E value with a leading coefficient in the 20M range.[/QUOTE] Everything I have read on these forums indicates that low A5 coefficients are to be avoided, as the polys are no better (on average) than higher A5, but the skew is higher and the search takes longer per coeff than 8-digit A5s. Do you have something that counters these two items? 
It may be coincidental, but the poly with a leading coef of 1.2M beat one with a LC of 20M (20% faster), while the score was equivalent (within 5%). So debrouxl wants to test it again.

Ah, I see. So we'll either have a sample size of 2 to indicate lower A5 values may perform better (and thus an interest in further testing), or we'll see it was a coincidence.
Two more days produced no better than 5.24. I am running a poly search for my own work today, but can continue this c166 Sunday-Monday if there is interest & patience. Curtis 
If we do something similar on this C168 for swellman, we can check it for a 3rd time.

Best so far for the C168:
[CODE]# norm 2.054099e-016 alpha -8.201051 e 3.712e-013 rroots 5 skew: 13547506.24 c0: 1091049838048076230236436281364364658189135 c1: 504603769573285498246083360215541735 c2: 38768299583337983933127496777 c3: 5351429028937713649315 c4: 100138622851304 c5: 10001628 Y0: 138988572401982466384760027594008 Y1: 1199975415967612259[/CODE] 
So far on the C168
[code] R0: 80132560431598622469656914926773 R1: 341271651656533139 A0: 3925530804863703095840240389104558480 A1: 86410068762700275288821820483762 A2: 706298516970652533739095304 A3: 26497098487745183679 A4: 898254565334150 A5: 157007760 skew 842687.75, size 1.555e-016, alpha -5.723, combined = 3.666e-013 rroots = 5 [/code] 
Very slight improvement on the score:
[CODE]# norm 2.052260e-016 alpha -7.555741 e 3.881e-013 rroots 3 skew: 5385621.82 c0: 6257192400192652225441167271091421245545 c1: 3945606052772085028778758471635594 c2: 9308774563991543714210240940 c3: 341618027604120987286 c4: 150607283841495 c5: 10067400 Y0: 138806492519608695178613827452974 Y1: 424596711833392849[/CODE] 
C166 for RichD
Found one with a bad skew but a higher score:
[code]# norm 3.695902e-016 alpha -7.383306 e 5.567e-013 rroots 5 skew: 13474622.03 c0: 33210626476730825697882762404632692697704 c1: 26333818512286988595139568234039746 c2: 1058318687962661120245321539 c3: 482935203367991955378 c4: 1785076260818 c5: 613872 Y0: 106045564800808526376402542337169 Y1: 133157835424250393[/code] 
So... here are the results of my belated sieving tests on "C166_4788_5159". In ascending order of leading coefficient:
[code]# norm 3.695902e-016 alpha -7.383306 e 5.567e-013 rroots 5 n: 8232663677075268552040028040427962613195004187737038472546008350527350790146813693136042827763089666937337785875075123500306235229888683807658826159126375359853768383 deg: 5 c0: 33210626476730825697882762404632692697704 c1: 26333818512286988595139568234039746 c2: 1058318687962661120245321539 c3: 482935203367991955378 c4: 1785076260818 c5: 613872 Y0: 106045564800808526376402542337169 Y1: 133157835424250393 type: gnfs skew: 13474622.03 rlim: 134217727 alim: 134217727 lpbr: 31 lpba: 31 mfbr: 62 mfba: 62 rlambda: 2.6 alambda: 2.6 qintsize: 1000 # http://www.mersenneforum.org/showpost.php?p=355359&postcount=269 #> makeJobFile(): q0=67108863.5, q1=67208863.5. [b]# total yield: 2151, q=67109923 (0.15340 sec/rel)[/b] # 53 Special q, 134 reduction iterations [/code] [code]# skew 11681526.87, size 2.887e-016, alpha -8.163, combined = 5.314e-013, rroots = 3 n: 8232663677075268552040028040427962613195004187737038472546008350527350790146813693136042827763089666937337785875075123500306235229888683807658826159126375359853768383 deg: 5 c0: 64097545582231836173154185834090974313893 c1: 57038273747787160306728671709353557 c2: 11922305817556816403635714269 c3: 867444673665392023597 c4: 78372426411442 c5: 2715720 Y0: 78764596806719279335948607498706 Y1: 704739813349024603 type: gnfs skew: 11681526.87 rlim: 134217727 alim: 134217727 lpbr: 31 lpba: 31 mfbr: 62 mfba: 62 rlambda: 2.6 alambda: 2.6 qintsize: 1000 # http://www.mersenneforum.org/showpost.php?p=355359&postcount=260 #> makeJobFile(): q0=67108863.5, q1=67208863.5. 
[b]# total yield: 2356, q=67109923 (0.16306 sec/rel)[/b] # 62 Special q, 152 reduction iterations [/code] [code]# skew 4491515.30, size 2.719e-016, alpha -7.152, combined = 5.143e-013 rroots = 5 n: 8232663677075268552040028040427962613195004187737038472546008350527350790146813693136042827763089666937337785875075123500306235229888683807658826159126375359853768383 deg: 5 c0: 893054565437073468528453235950677733085 c1: 2511797614938907011950743089188765 c2: 2719192542874456010358349743 c3: 460941393242470744181 c4: 126820984786298 c5: 5077296 Y0: 69499481355460328998078506311622 Y1: 465404033160944009 type: gnfs skew: 4491515.30 rlim: 134217727 alim: 134217727 lpbr: 31 lpba: 31 mfbr: 62 mfba: 62 rlambda: 2.6 alambda: 2.6 qintsize: 1000 # http://www.mersenneforum.org/showpost.php?p=355359&postcount=247 #> makeJobFile(): q0=67108863.5, q1=67208863.5. # total yield: 2032, q=67109923 (0.17590 sec/rel) # 58 Special q, 150 reduction iterations [/code] [code]# norm 3.514670e-016 alpha -7.111533 e 5.394e-013 rroots 3 n: 8232663677075268552040028040427962613195004187737038472546008350527350790146813693136042827763089666937337785875075123500306235229888683807658826159126375359853768383 deg: 5 c0: 1532190310404755024435433175347737566139 c1: 3676253136046456933250836564000824 c2: 81541743881566127057867159 c3: 580564695976550014656 c4: 32579659023370 c5: 20850600 Y0: 52394338640184089981223380179140 Y1: 257044554902959283 type: gnfs skew: 3488617.93 rlim: 134217727 alim: 134217727 lpbr: 31 lpba: 31 mfbr: 62 mfba: 62 rlambda: 2.6 alambda: 2.6 qintsize: 1000 # http://www.mersenneforum.org/showpost.php?p=355359&postcount=250 #> makeJobFile(): q0=67108863.5, q1=67208863.5. 
# total yield: 2100, q=67109923 (0.16070 sec/rel) # 55 Special q, 150 reduction iterations [/code] [code]# norm 3.534603e-016 alpha -7.217812 e 5.388e-013 rroots 5 n: 8232663677075268552040028040427962613195004187737038472546008350527350790146813693136042827763089666937337785875075123500306235229888683807658826159126375359853768383 deg: 5 c0: 406863124232228097770612251596630707808 c1: 1397008484147485063276690890434320 c2: 1959559479994289870016918286 c3: 223873981131455264775 c4: 309981426719886 c5: 34871760 Y0: 47273014953449373402503531152735 Y1: 1730317388101909 type: gnfs skew: 2475815.23 rlim: 134217727 alim: 134217727 lpbr: 31 lpba: 31 mfbr: 62 mfba: 62 rlambda: 2.6 alambda: 2.6 qintsize: 1000 # http://www.mersenneforum.org/showpost.php?p=355359&postcount=245 #> makeJobFile(): q0=67108863.5, q1=67208863.5. # total yield: 2016, q=67109923 (0.14132 sec/rel) # 55 Special q, 161 reduction iterations [/code] [code]# skew 1317757.94, size 2.912e-016, alpha -6.998, combined = 5.374e-013 rroots = 5 n: 8232663677075268552040028040427962613195004187737038472546008350527350790146813693136042827763089666937337785875075123500306235229888683807658826159126375359853768383 deg: 5 c0: 198597517620071762000877347570258564216 c1: 739534886795844444800384897768926 c2: 366246626185683843435072111 c3: 1046052875115274054676 c4: 300988932539270 c5: 206658060 Y0: 33117504503117161374368743175645 Y1: 289745110476613783 type: gnfs skew: 1317757.94 rlim: 134217727 alim: 134217727 lpbr: 31 lpba: 31 mfbr: 62 mfba: 62 rlambda: 2.6 alambda: 2.6 qintsize: 1000 # http://www.mersenneforum.org/showpost.php?p=355359&postcount=248 #> makeJobFile(): q0=67108863.5, q1=67208863.5. # total yield: 1836, q=67109923 (0.16997 sec/rel) # 51 Special q, 152 reduction iterations [/code] The winners are, this time again, the two polynomials with the lowest leading coefficients... 
* the first one is faster than the second one, but sieves less productively;
* the fifth one sieves faster than the first and second ones, but less productively than either, so we'd have to sieve over a wider range;
* polynomials 3, 4 and 6 sieve both more slowly and less productively than polynomial 2.

The one with the highest leading coefficient is the worst...

My commentary: while it remains true that extremely low leading coefficients (up to several thousand, maybe several tens of thousands) seldom yield the best polynomials, we already knew that high leading coefficients (dozens of millions, for a number of this difficulty) are bad. The older pol5sel quickly increases the leading coefficient, and it produces polynomials which sieve noticeably less productively than msieve-produced polynomials. I think that, [i]for numbers of that size[/i], with run-of-the-mill polsel code, looking for polynomials with a leading coefficient above several dozen million is a waste of electrons :smile:

Were all scores computed with the same version of msieve, BTW?
Currently the scores msieve produces are generated with a certain set of parameters (the same as pol5, from memory). Would higher (closer-to-reality) parameters on the best few polys provide a better idea of which is best?

C166 for RichD
[QUOTE=debrouxl;355478]
Were all scores computed with the same version of msieve, BTW?[/QUOTE] One more poly to check, it might sieve better :)

msieve v. 1.52 (svn942)
[CODE]# norm 3.682170e-016 alpha -7.976362 e 5.494e-013 rroots 5
skew: 11610079.61
c0: 9847862849022242800155773023665624456000
c1: 16301817747869811410359170706479000
c2: 7452362045018712127753989554
c3: 742047366030036453403
c4: 50776706006862
c5: 673200
Y0: 104106839917340321574536010026327
Y1: 169908656891663867[/CODE]
Swellman's C168
[code]R0: 131875157916698331712033954516273
R1: 143947380856205903
A0: 10863135549907575815531071201377087481440
A1: 3680619155268645580126871328996976
A2: 4101959133382108448147843462
A3: 715128181658682572315
A4: 353098812987408
A5: 13006224
skew 3934944.07, size 1.931e-016, alpha -7.392, combined = 4.170e-013 rroots = 5[/code]
debrouxl, thanks for running through all of those - this is very helpful information (at least for me). My scores were produced from MSieve 1.52 (svn 944).
sashamkrt, welcome to the poly sieving group! 
[QUOTE=debrouxl;355478]So... here are the results of my belated sieving tests on "C166_4788_5159". In ascending order of leading coefficient:
[code]# norm 3.695902e-016 alpha -7.383306 e 5.567e-013 rroots 5[/code]
[code]# skew 11681526.87, size 2.887e-016, alpha -8.163, combined = 5.314e-013, rroots = 3[/code]
[/QUOTE] Could the yield/speed difference between #1 and #2 be explained by the e-score difference?
Why are you using anything other than sec/rel to judge a poly? Isn't the point of finding a good poly to find one that takes less time to perform the factorization?
#5 performs quite a lot faster than #1 or #2. What am I missing here? Also, as firejuggler pointed out, the comparisons don't track E-score: #1 has a 5% higher score, but sieves ~6% faster. However, #5 scores 4% worse than #1 while sieving 4% faster.
If none of the other parameters are changed, then seconds per relation is the correct measure. If you optimize the other parameters (e.g. factor base limits or specialq range) separately for each polynomial, then you have to use the total estimated sieving time.
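This rule can be written down directly. A quick Python sketch (the relation targets and rates below are made-up figures, purely to show why the target matters once parameters differ):

```python
def est_total_time(target_rels, sec_per_rel):
    """Estimated sieving time = relation target x cost per relation.

    With identical parameters (factor-base limits, lpb, mfb, lambda,
    special-q range) the target is the same for every candidate poly,
    so ranking by sec/rel and ranking by total time coincide.  Change
    the parameters and the target moves, so sec/rel alone no longer
    decides.
    """
    return target_rels * sec_per_rel

# Same parameters: the lower sec/rel wins outright.
same_params_winner = est_total_time(250e6, 0.14132) < est_total_time(250e6, 0.16306)

# Different parameters (hypothetical: say larger large-prime bounds need
# many more relations): the poly with the better sec/rel can still lose.
tuned_loses = est_total_time(450e6, 0.120) > est_total_time(250e6, 0.163)

print(same_params_winner, tuned_loses)
```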

In a perfect world, sec/rel would seem reasonable. But there is an overhead in starting each range, something around 15 seconds before the first relation is recorded. That's why it is better to take a few million Q per core at a time. With the relatively small work units (WUs) of NFS@Home, I believe [B]debrouxl[/B] is trying to balance that tradeoff. I usually look at the yield ratio, to minimize the number of WUs (assuming all other parameters are the same). Perhaps [B]debrouxl[/B] has more to say, since he has been doing this longer than me. :smile:
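That startup cost is easy to fold into the comparison. A sketch, using the ~15 s figure quoted above (the WU sizes and the 0.163 sec/rel rate are illustrative):

```python
def effective_sec_per_rel(rels_per_range, sec_per_rel, startup=15.0):
    """Per-relation cost once the fixed per-range startup time is
    amortized over the relations that range produces."""
    return (startup + rels_per_range * sec_per_rel) / rels_per_range

# A small work unit pays the 15 s over only a few hundred relations;
# a multi-million-Q local range makes the overhead negligible.
small_wu = effective_sec_per_rel(500, 0.163)       # noticeably inflated
big_range = effective_sec_per_rel(500_000, 0.163)  # essentially the raw rate
print(small_wu, big_range)
```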

New best from my search for the C168:
[CODE]polynomial selection complete
R0: 137982521622728045011243408554463
R1: 1339655601011561771
A0: 42456904591149864315136445197306375920560
A1: 30670859550417145194573777446486006
A2: 7650991139046352760036346229
A3: 1045673741543574747132
A4: 146948730229980
A5: 10371600
skew 7295400.25, size 1.914e-016, alpha -7.572, combined = 4.147e-013 rroots = 5[/CODE]
And a 2nd one that's not quite as good:
[CODE]# norm 2.121910e-016 alpha -7.656116 e 3.927e-013 rroots 5
skew: 7644763.78
c0: 71758833171721317865765888880885183510775
c1: 49536046266198119164296136514381007
c2: 90590297181877360519419115
c3: 2765515707324151868651
c4: 68394972845396
c5: 10119900
Y0: 138662170971500796224221234208258
Y1: 347372753308990319[/CODE]
For the C166, #5 did indeed sieve ~13.3% faster than #2, but produced ~14.4% fewer relations over the same test range, so it's not significantly better or worse.

C168 poly
[CODE]
n: 518759670509518390499884894142825232305789370205934770356684820953606669616234831388561087386018771667622991938328056602692240129084683654895676978808741395629768407883
# norm 2.390778e-016 alpha -7.157296 e 4.234e-013 rroots 5
skew: 19835022.68
c0: 14766991706078551413874024151190708222720
c1: 16481308623853322795154573138777216
c2: 8711799723929349110308763068
c3: 203660963131057337068
c4: 19625244765905
c5: 108528
Y0: 343466912683455792965746434341297
Y1: 550334592933944653
[/CODE]
C168 polynomials with good scores but not-so-good skews
[CODE]
# norm 2.880607e-016 alpha -8.201338 e 4.765e-013 rroots 5
skew: 89100677.43
c0: 27857995650530594214542724173490926656056000
c1: 489092174862742972528340289017184040
c2: 30770925245570759449861491874
c3: 51337377580050003917
c4: 4407409358582
c5: 10200
Y0: 551154318026277087308020573543333
Y1: 109459496504263717

# norm 2.578144e-016 alpha -7.900994 e 4.453e-013 rroots 5
skew: 89152497.21
c0: 25294409365820382440286805545396176373535995
c1: 542383637474412430010304925218961429
c2: 30617383334984154264021694081
c3: 66563245603254143525
c4: 4363140797582
c5: 10200
Y0: 551154317931265040287858120286446
Y1: 109459496504263717
[/CODE]
Wow! Very nice finds!

[QUOTE=debrouxl;355575]For the C166, #5 did indeed sieve ~13.3% faster than #2, but produced ~14.4% fewer relations on the range of 1K relations, so it's not significantly better or worse.[/QUOTE]
Can you help me understand why you care about how many special-q you need to search? I believe you're saying #5 will need 14% (or rather 100/86, which is more than 14%) more special-q searched in order to produce the required number of relations, but that even after taking that extra 14% into account it will finish 13% faster than #2. The sec/rel is time per relation found, NOT time per special-q searched. It looks to me like you're confusing the two. The only time I see this having importance is when we're already stretching a version of the siever and might run out of special-q. That's not nearly the case here, is it?
C168 poly
[CODE]
# norm 2.790210e-016 alpha -6.156376 e 4.506e-013 rroots 3
skew: 12473534.14
c0: 516236450022905700097829298234459693015
c1: 1155256345148460315833790376537509
c2: 801505740961731706078779054
c3: 187904124865251487543
c4: 5004361126321
c5: 21240
Y0: 475951112043062447304070700973752
Y1: 167570844707882773
[/CODE]
[QUOTE=VBCurtis;355702]Can you help me understand ...[/QUOTE]
He says that in the same period of time, one guy runs 100 meters, throwing a dollar into the crowd every 10 meters he runs, while the second guy runs 133 meters (13.3% faster) but throws out a dollar only every 14.6 meters (14.6% less often). At the end, the crowd collects 10 or 11 dollars from the first guy (depending on where he threw the first dollar), and 10 or 11 from the second (again, depending on luck), so there is no relevant difference between the productivity of the two guys.
another low expo LD. will try now in a reasonnable range (12 to 17M)
[code] R0: 174549336556401481202069007537515 R1: 440721410024969593 A0: 281491614847856620351679317663874215083264 A1: 52465697511380707542888318448808376 A2: 12109969047207747484463577258 A3: 316512889878104730031 A4: 48637872015544 A5: 3201660 skew 11587470.20, size 1.937e016, alpha 8.325, combined = 4.227e013 rroots =3 [/code] 
[QUOTE=LaurV;355717]He says that in the same period of time, one guy runs 100 meters throwing out into the public one dollar at every 10 meters he runs, and the second guy runs 133 meters (13.3 % faster) but spitting one dollar every 14.6 meters (14.6% slower). At the end, the public collects 10 or 11 dollars from the first guy (depending where the thrown out the first dollar), and 10 or 11 from the second (again, depending on the luck), so there is no relevant comparison between the productivity of the two guys.[/QUOTE]
This is what I thought he was trying to say. Here's the problem: sec/rel is seconds per relation, NOT seconds per Q tested. My guy spits out more dollars per second, even though his dollars-per-meter measure is worse. Why do we care how many meters he covers, if we collect the money 13% faster? It takes my guy 14.6% more meters to get a dollar out, but he runs 28% faster (roughly), so my guy throws dollars 13% more often than the other guy. My point is that this is clearly better, and that you two are confusing sec/rel with sec/Q. Sec/rel is how often we get dollars, period. As long as the number of relations required for the two polys matches (and with all parameters equal, we should assume it does).
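For concreteness, the arithmetic behind this position as a Python sketch, using the #2 and #5 test-sieve figures from earlier in the thread (2356 vs 2016 relations over the same test range, 0.16306 vs 0.14132 sec/rel, which reproduce the quoted ~13.3%/~14.4%); the relation target is a made-up number and, per the argument above, is assumed identical for both polys:

```python
TARGET = 250e6  # hypothetical relations needed; same for both polys

# Poly #2: more relations per q-range, but slower per relation.
time_2 = TARGET * 0.16306
# Poly #5: ~14.4% fewer relations per q-range, ~13.3% faster per relation.
time_5 = TARGET * 0.14132

# Total time depends only on sec/rel once the target is fixed ...
faster = time_5 < time_2      # #5 finishes ~13% sooner

# ... the lower yield only means #5 must cover a wider special-q range.
extra_q = 2356 / 2016         # ~17% more special-q to sieve

print(faster, round(extra_q, 3))
```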
How do duplicate relations fit into the above scenarios?

adverse wind?

[QUOTE=EdH;356125]How do duplicate relations fit into the above scenarios?[/QUOTE]
I am not sure, but my wild guess is that searching more Q might lead to more duplicates, which might require us to find yet more relations in order to build a matrix. However, we're talking about perhaps a single-digit percentage of additional dups, when dups are themselves a single-digit percentage of total rels found. So any effect would be on the order of 1% of extra sieve effort, which leads us to ignore it.
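A back-of-the-envelope version of that estimate, with illustrative rates (the 5% baseline duplicate rate and the extra half point from a wider q-range are guesses, in the same spirit as the post above):

```python
def raw_rels_needed(unique_target, dup_rate):
    """Raw relations to sieve so the unique ones still reach the target."""
    return unique_target / (1.0 - dup_rate)

base = raw_rels_needed(250e6, 0.05)    # 5% duplicates at the normal q-range
wider = raw_rels_needed(250e6, 0.055)  # half a point more dups from extra q
extra_effort = wider / base - 1        # ~0.5% more sieving: ignorable

print(round(extra_effort * 100, 2), "% extra")
```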