C158
1107 @ 11e7 - nothing.

I sent a note to [B]debrouxl[/B] a couple weeks ago but he must be away.
I went ahead and posted a team sieve [URL="http://mersenneforum.org/showthread.php?t=19227"]thread[/URL] to keep things moving. 
[CODE]GMP-ECM 7.0-dev [configured with GMP 6.0.0, --enable-asm-redc, --enable-gpu, --enable-assert, --enable-openmp] [ECM]
Input number is 34287129936917587345356146795813041353257817538185445350525573912534386065900174396694157805853284101661601293564239230423921585957629908897256542967 (149 digits)
Using B1=3000000, B2=13134672, sigma=3:12347516063:1234752373 (768 curves)
Computing 768 Step 1 took 26679ms of CPU time / [COLOR="DarkRed"]970513ms of GPU time[/COLOR]
********** Factor found in step 1: 297800272640956791407736751931185667
Found prime factor of 36 digits: 297800272640956791407736751931185667
Composite cofactor 115134649249485077644761201247390716628944485597564745943699149297894890143441578090968579719123054326250327771901 has 114 digits[/CODE] 
C177 @ i5216
2^3 * 3 * ... * C177
pm1 @ 3e9 - no factor 
C177
1000 @ 11e6 - no factor.

Don't run small curves (smaller than B1=110e6) on this one.
8000 x 110e6 and 3000 x 260e6 are done. 
C177
1200 @ 260e6 - no factor.
Passing through 250 @ 850e6. 
Someone has broken the c177 into p67 * p100; now c182 at i5220. More good news: we got rid of the 2^3*3 driver.

Beautiful! Who did it?
This will go down now, if no 31 on the horizon. 
[QUOTE=unconnected;372172]Someone have broken c[B]177[/B] into p[B]67[/B]*p[B]100[/B], now c182 on i5220. Another good news  we got rid of the 2^3*3 driver.[/QUOTE]
That must have been quite a feat. 
Could have been ryanp.
If this is the case, then most likely, c182 is already properly ECMd. I'll throw 3000 110e6 curves on it just in case. 
Correct. I asked Ryan if he had a little time to perform the GNFS on both A3408 and this one. I also said, in the tradition of the forum, he who performs the postprocessing gets to run the first set of ECM curves.
He must have some down time so I am letting him play. :smile: 
Ryan is on fire: he cracked the c182, too. ;}

C186 @ i5232
The 3 is gone at index 5232. GO RYAN!!
2^4 * 11 * ... * C186 
We haven't received any updates in a while. Does anybody know what amount of ECM work has been done, if any?

It is still being worked on.
You must realize the c186 is a monumental task. 
Absolutely I do. Was just looking for an update. When you don't hear any news, it's hard to tell if it is still chugging along silently or if it has been abandoned.

C186 @ i5232
There has been a miscommunication about who was working on this sequence and what was being performed. As it stands now, it is available for work by forum members at i5232.
2^4 * 11 * ... * C186. In the coming days I will start ECM work around t50 or t55, not knowing what has already been done. 
C186
2000 @ 43e6 - no factor
pm1 @ 2e10 - no factor
Passing through 1300 @ 11e7 
C186
passing through 7400 @ 11e7.

C186
8100 @ 11e7 - no factor.
Passing through 2000 @ 26e7 
I'll run some curves - maybe around 425e6? (t60 is 260e6, t65 is 850e6)

[QUOTE=Dubslow;386045]I'll run some curves - maybe around 425e6? (t60 is 260e6, t65 is 850e6)[/QUOTE]
That B1 is close to the changeover from k=6 using less memory vs k=2 using more memory. If B2 is 4.7e12, that's k=6; try running a curve with the -k 2 flag to force the higher-memory condition, and see if the expected time for a t60 improves. Or you could try B1=400M, which I'm sure is k=6, and compare to B1=450M, which should be k=2, and report which gives the best expected time to complete a t60. I think the best choice depends on hardware architecture rather than ECM itself. 
[QUOTE=VBCurtis;386072]That B1 is close to the changeover from k=6 using less memory vs k=2 using more memory.
If B2 is 4.7e12, that's k=6; try running a curve with the -k 2 flag to force the higher-memory condition, and see if the expected time for a t60 improves. Or you could try B1=400M, which I'm sure is k=6, and compare to B1=450M, which should be k=2, and report which gives the best expected time to complete a t60. I think the best choice depends on hardware architecture rather than ECM itself.[/QUOTE] Errr... what? You're over my head here. :smile: I just thought I'd try something different. I'm also using yafu to drive 6 threads (simply due to familiarity, and I'm not certain I have pyecm anywhere on my system (though it's quite possible)). How does one check the expected time to t<whatever> with `ecm`? 
The -v flag will give you stats for whatever B1 you have chosen; specifically, the expected number of curves to complete a level when you start the run, and the expected time to complete the level when you finish the curve. You'll also see "k=2" or similar in the stats at the outset of a run while using -v. These stats let you see how many curves at 425M produce a t60.
So, my intent was for you to try "ecm -v 425e6 < inputfile.txt", then "ecm -k 2 -v 425e6 < inputfile.txt", and see which gives the shorter expected time to completion. However, I just tried that B1, and it is just above the transition to larger memory use, meaning it already uses k=2 (that is, it takes two passes to complete stage 2 while using more memory, whereas B1=400M takes 6 passes using half the memory). So never mind - you picked a good B1, my suggestion is irrelevant, and the default settings are very likely to be fast. -Curtis 
C186
passing through 5600 @ 26e7.

I'm starting 10000 curves @ 26e7 now.

Nearly 70% done, so I'm scheduling another 10000 curves @ 26e7.
I assume we need 42000 curves. 
[QUOTE=yoyo;386487]nearly 70% done so I schedule another 10000 curves @26e7.
I assume we need 42000 curves.[/QUOTE] 1/2*t60 should be enough. 
I did ~500 curves at 425e6. I'm not entirely sure of the count; it's probably accurate to no better than 10% (and probably a bit on the high side). I've stopped, since it's apparent my contribution is negligible.

[QUOTE=prgamma10;386594]1/2*t60 should be enough.[/QUOTE]
But a full t55 hasn't been performed. Does that make a difference? I was thinking (guessing) a full t60 would be needed. I haven't worked with a number this large, so I don't have a good feel.

Which brings up another set of comments. If we progress to GNFS, I think a good poly can be found with a couple of GPUs in a few weeks. Team sieving would take (at least) several months depending on the number of cores. Is there a machine out there that can handle the LA phase in a reasonable amount of time? Something with at least 24 or more cores... Someone with more experience care to comment? (Help!)

Passing through 6900 @ 26e7. I did split off a core a while back and it has 200 @ 85e7. 
Rich
Half a t60 *is* roughly 3 t55s. It doesn't matter what lower levels have or have not been done. Put another way: the 260M & 425M curves done so far amount to more than a t55, so one has been done! 
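A quick sketch of why half a t60 is roughly three t55s, assuming GMP-ECM's usual ballpark curve counts (about 18000 curves at B1=110e6 for a t55, about 42000 at B1=260e6 for a t60 - the 42000 figure matches yoyo's target above) and modeling stage-1 work as B1 times the curve count:

```python
# Rough ECM work comparison. Curve counts are approximate GMP-ECM
# defaults (t55 ~ 18000 curves @ B1=110e6, t60 ~ 42000 @ B1=260e6);
# stage-1 effort is modeled as B1 * curves, ignoring stage 2.
t55_work = 18000 * 110e6
half_t60_work = (42000 * 260e6) / 2
ratio = half_t60_work / t55_work
print(round(ratio, 2))  # ~2.76, i.e. half a t60 is roughly 3 t55s
```

Stage 2 is ignored here, so this is only an order-of-magnitude argument.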
Thanks, Curtis. I really wasn't questioning [B]prgamma10[/B]'s numbers, but rather whether we have the power to tackle the C186 with just forum members' PCs.
I'll be wrapping my work up later today. 
This is a big job, but I'd be willing to help with sieving if ECM strikes out.
I normally use Yafu, but I believe the .dat files are perfectly compatible with pure Msieve results. Really, really hoping ECM hits. 
NFS@Home might be willing to help with it.

I think a forum-organized NFS project is overdue, so I'm willing to help both with poly select and some sieving.

[QUOTE=VBCurtis;386650]I think a forum-organized NFS project is overdue, so I'm willing to help both with poly select and some sieving.[/QUOTE]
[URL]http://mersenneforum.org/showthread.php?t=19711[/URL] is already one forum-organized NFS project in progress. 
[QUOTE=swellman;386646]
I normally use Yafu, but I believe the .dat files are perfectly compatible with pure Msieve results. [/QUOTE] Neither Yafu nor Msieve is capable of sieving.* They each delegate to external sievers - in the typical forum case, the gnfs-lasieve collection. Since it's the same siever, the results format is of course the same. As a bonus, Yafu isn't even capable of postprocessing - it just uses Msieve's code (which is why Msieve is a compile-time dependency for NFS builds).**

[QUOTE=Batalov;386659][URL]http://mersenneforum.org/showthread.php?t=19711[/URL] is already one forum-organized NFS project in progress.[/QUOTE] Not that it's getting anywhere :razz:

[SIZE="1"] * Msieve technically has a line siever, but its use in place of an optimized lattice siever like gnfs-lasieve is morbidly inefficient and a waste of resources.
** Yafu's SIQS code has a copy-and-pasted older form of Msieve's postprocessing as part of its own source, which is why Msieve isn't a dependency for without-NFS builds.[/SIZE] 
Batalov
Except that nobody has decided upon parameters for M991, nor begun sieving. I posted a baseline parameter set for feedback, but nobody with experience on that size posted improvements. I don't think any consensus was reached for M991 on whether 33-bit LPs were enough, and thus whether the Win64 siever is sufficient for M991. Is there a 64-bit siever binary available with the 33-bit limit removed? I know NFS@Home uses one in a Boinc wrapper, but I have not found the freestanding binary. 
Yes, I know. I am only trying to say that two concurrent projects will have a worse fate than one.

1 Attachment(s)
[QUOTE=VBCurtis;386661]
Is there a 64-bit siever binary available with the 33-bit limit removed? I know NFS@Home uses one in a Boinc wrapper, but I have not found the freestanding binary.[/QUOTE] Do you want to try the file in the attachment? It is the 64-bit binary from NFS@Home. Please let me know if it works. 
Carlos
Thanks for trying. This binary produces a file "stderr" that has a series of error outputs, such as "can't open init data file - running in standalone mode", followed by "boinc initialized - work files resolved, now working", followed by "Cannot open input_data for input of nfs polynomials: no such file or directory". I called the binary by renaming it to my usual 16e siever and running the python script for 2,991. The script did work, producing the usual .fb file, job files, etc. So, it appears this binary is the one modified to work with NFS@Home's BOINC wrapper. 
I'm going to send an email to Greg to see if he can share the original one.
Edit: Just checked my email and I have the linux binaries. [url]https://cld.pt/dl/download/bc833ab0e354464e83d0a1305e5402dc/lasieve5.tar.gz[/url] 
Final Counts
7186 @ 26e7 - no factor
222 @ 85e7 - no factor

I see yoyo is passing through 21K @ 26e7. I'll put this on hold until the M991 project completes. 
A C186 GNFS task is far above the reach of 14e, so Greg is the one to convince for queuing to NFS@Home :smile:

C186 @ i5232
I did a little preliminary work on poly selection and found the following:
I ran three intervals of leading coefficients, none of which produced the expected results (i.e., not worth posting the poly). Baseline: expecting poly E from 3.55e-14 to > 4.09e-14. Leading coefficient ranges & scores:
[CODE]1.2-1.3e6  skew 103308316.47, size 2.479e-18, alpha -7.147, combined = 2.977e-14 rroots = 5
2.7-2.8e6  skew 106652921.92, size 2.353e-18, alpha -8.691, combined = 2.825e-14 rroots = 3
3.6-3.8e6  skew 21384286.59, size 2.157e-18, alpha -7.678, combined = 2.677e-14 rroots = 3[/CODE]
It seems to get worse as the lead coefficient increases. I wonder if it would be better to search below 1M? 
I think the anticipated E in that range is a little high; the C187 we did on the forum four years ago (cofactor of 2^956+1) used an E=2.991e-14 polynomial successfully. Are you doing the polynomial selection on CPU or GPU? And do you happen to have the timings and the raw relation counts for the stage-1 pass on those ranges - which stage1_norm did you use?

[QUOTE=fivemack;393695]I think the anticipated E in that range is a little high; the C187 we did on the forum four years ago (cofactor of 2^956+1) used an E=2.991e-14 polynomial successfully. Are you doing the polynomial selection on CPU or GPU? And do you happen to have the timings and the raw relation counts for the stage-1 pass on those ranges - which stage1_norm did you use?[/QUOTE]
GPU. I did this a while back and recently retrieved the log file to my laptop. It appears I used the (default?) stage1_norm of 1.2e28 on the early run, but for the later run it was changed to 2.0e28. 
That seems to me an extremely high stage1_norm value: I wonder whether you're ending up with an unreasonable number of things to filter down at stage two. I'm using stage1_norm=1e27 for my 114!+1 C187 at the moment, which gets a few million hits per range of a million in c5, at a rate of a day or so of GTX580 time per range.

OK, that makes sense. I remember when using a GPU the stage1_norm should be changed by an order of magnitude — but I couldn’t remember which way.
I think the first range took nearly a day (on a GTX 460), the second range was quicker, and that’s why I doubled the size of the last range — back to almost a day. When I get a chance I’ll run future ranges with 1e27. Thanks for your help. 
[QUOTE=RichD;393693]It seems to get worse as the lead coefficient increases. I wonder if it would be better to search below 1M?[/QUOTE]
Finally getting something I can work with. I searched in the 700-800K range and found this one. [CODE]R0: 833190005691277922377915047229543598
R1: 178575398638879069
A0: 3818155834164528069112483611011796776579442825
A1: 46509162420627906168286164471544245931
A2: 259512104283233128379348283014
A3: 20240316266268900809140
A4: 436787942678052
A5: 789480
skew 111747742.36, size 3.373e-18, alpha -8.072, combined = 3.638e-14 rroots = 3[/CODE] Next will be the 800s when I have another free time slot. 
Nothing better to report. The best in each range are listed.
[CODE]800-900K  skew 106635723.26, size 3.035e-18, alpha -7.789, combined = 3.424e-14 rroots = 3
600-700K  skew 261595249.51, size 2.554e-18, alpha -8.178, combined = 3.038e-14 rroots = 3[/CODE] 
Is it worth posting this in the polynomial request thread? It would be nice to get this sieving at some point soon as the M991 job is tailing off.

I'll do some poly searching on it, so really it's just wombatman of the regulars who'd be likely to see it there; of course, may as well post there anyway. I'll start at 3.6 million.
I'll be interested to explore sieve timings for 15e/33-bit vs 16e/32-bit vs 16e/33-bit for this one. I wonder if 15e/34-bit might be usable - how difficult would it be to apply the 16e patches that opened up 34/35-bit sieving to the 15e siever? I suppose that's folly for a shared project since it doubles the data uploads, but I'm curious. 
Nothing better was found.
[CODE]900-1000K  skew 75780745.63, size 2.645e-18, alpha -6.832, combined = 3.103e-14 rroots = 3
500-600K  skew 337815414.30, size 2.791e-18, alpha -8.832, combined = 3.216e-14 rroots = 3
400-500K  skew 126028353.96, size 3.167e-18, alpha -7.133, combined = 3.516e-14 rroots = 5[/CODE] 
Rich
You should not discard that 3.51 poly from the 400-500K range. Score is only accurate as a predictor of sieve speed to within 5-7%, so any poly within 10% of your best score could actually sieve best. I'm doing some test-sieving now with the best-scoring poly you've found so far, but you should post (or test) the 3.51 also. 
[QUOTE=VBCurtis;396971]I'll do some poly searching on it, so really it's just wombatman of the regulars who'd be likely to see it there; of course, may as well post there anyway. I'll start at 3.6 million.
I'll be interested to explore sieve timings for 15e/33-bit vs 16e/32-bit vs 16e/33-bit for this one. I wonder if 15e/34-bit might be usable - how difficult would it be to apply the 16e patches that opened up 34/35-bit sieving to the 15e siever? I suppose that's folly for a shared project since it doubles the data uploads, but I'm curious.[/QUOTE] Assuming the binaries were compiled from the same source, the 15/16 doesn't matter for >33-bit sieving. All that is needed as a modification to the source is commenting out the restriction.

Another thing I hope to try is doing some sieving at very low special-q with the f variant of the siever. I want to test the duplication level. Relations can be found very quickly at small q as long as you can sieve below the factor-base bound. 
A couple of core-hrs of test sieving shows that 16e/33 sieves about 10% slower than 15e/33, and 16e/34 is not faster than 16e/33 (about 70% more relations found per unit time, but 70% more needed, and a larger matrix to solve).
15e/33 with a/rlim=314M (chosen by the python script) and two large primes yields over 4, so 15e/32 and 16e/32 can be considered. I'll continue tinkering with test-sieving this evening. The 16e/34 yield near 20 was a new experience for me. The 16e yield is so high that a q-range of 70M or so would be enough. Does that mean an alim/rlim of 300M is too big? Thanks for the reply about 15e/34! I haven't compiled the sievers, but this may motivate me to try. 
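As a rough cross-check of those yields, a relation target can be turned into a q-range. The ~600M raw-relation target for a 33-bit job of this size is the estimate that turns up later in the thread, and the 1.7x factor for 34-bit large primes is henryzz's later figure - both are assumptions here, not measurements:

```python
# Hypothetical q-range estimates from test-sieved yields
# (relations per special-q). The 600M target and the 1.7x factor
# for 34-bit LPs are assumptions used for illustration only.
rels_33 = 600e6
rels_34 = 1.7 * rels_33
q_range_15e33 = rels_33 / 4    # yield ~4 rels/q at 15e/33
q_range_16e34 = rels_34 / 20   # yield ~20 rels/q at 16e/34
print(round(q_range_15e33 / 1e6), round(q_range_16e34 / 1e6))  # 150 51
```

The ~150M q-range for 15e/33 is consistent with the 30M-200M sieving plan discussed later, and the much smaller 16e/34 range with the "70M or so would be enough" remark.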
[QUOTE=VBCurtis;396981]... you should post (or test) the 3.51 also.[/QUOTE]
OK.
[CODE]R0: 943458323040788615580657516498371403
R1: 294791163397249211
A0: 3611166902472558851670819726384084615231028480
A1: 126541352673573186758634973864132850288
A2: 2209274649814415688972259393648
A3: 28394659605543653907266
A4: 169757533624755
A5: 424080
skew 126028353.96, size 3.167e-18, alpha -7.133, combined = 3.516e-14 rroots = 5[/CODE] 
[QUOTE=RichD;397044]OK.
[CODE]R0: 943458323040788615580657516498371403
R1: 294791163397249211
A0: 3611166902472558851670819726384084615231028480
A1: 126541352673573186758634973864132850288
A2: 2209274649814415688972259393648
A3: 28394659605543653907266
A4: 169757533624755
A5: 424080
skew 126028353.96, size 3.167e-18, alpha -7.133, combined = 3.516e-14 rroots = 5[/CODE][/QUOTE] Using 15e/33-bit, this sieves 4-6% faster than the higher-scoring poly you posted earlier, over an admittedly small sample of three 1k intervals. Please keep posting any 3.5 or better polys! I haven't found anything scoring that well yet. 
2 Attachment(s)
[QUOTE=VBCurtis;397031]A couple of core-hrs of test sieving shows that 16e/33 sieves about 10% slower than 15e/33, and 16e/34 is not faster than 16e/33 (about 70% more relations found per unit time, but 70% more needed, and a larger matrix to solve).
15e/33 with a/rlim=314M (chosen by the python script) and two large primes yields over 4, so 15e/32 and 16e/32 can be considered. I'll continue tinkering with test-sieving this evening. The 16e/34 yield near 20 was a new experience for me. The 16e yield is so high that a q-range of 70M or so would be enough. Does that mean an alim/rlim of 300M is too big? Thanks for the reply about 15e/34! I haven't compiled the sievers, but this may motivate me to try.[/QUOTE] Assuming you run on Windows, these binaries should work, I think.

How hard will the matrix be for this one? I would like to fiddle around and work out the duplicate rate after some sieving with the f variant. Will 4 GB be enough for the filtering? My Core 2 will likely be too slow for actually doing the matrix. 
I could probably do filtering/matrix.

I suspect the filtering and the matrix job would fit in 16GB but not in anything significantly smaller.

[QUOTE=fivemack;397204]I suspect the filtering and the matrix job would fit in 16GB but not in anything significantly smaller.[/QUOTE]
I could indeed do it then - but not much larger than this. (16 GiB happens to be what I have.) 
[QUOTE=henryzz;397183]Assuming you run on Windows, these binaries should work, I think.
How hard will the matrix be for this one? I would like to fiddle around and work out the duplicate rate after some sieving with the f variant. Will 4 GB be enough for the filtering? My Core 2 will likely be too slow for actually doing the matrix.[/QUOTE] Your 15e siever works fine to try 34-bit lp. Thanks! I haven't played with the f siever yet, but I appreciate you posting those, too. 
[QUOTE=VBCurtis;397238]Your 15e siever works fine to try 34-bit lp. Thanks! I haven't played with the f siever yet, but I appreciate you posting those, too.[/QUOTE]
The f variant allows sieving below the factor-base bound. It can also sieve composite special-q; the maximum number of factors in a special-q can be controlled with -d. Sieving composite special-q seems to slow down relation finding slightly as far as I can see, so I use -d 1. Sieving small q can provide very good yield; how much this lowers the secs/rel depends on the size of the number and the parameters chosen. The duplicate rate may be higher with the lower q. I would suggest that using the f variant is probably a good idea once you are much below the factor-base bound. I suspect this number is a bit too big for me to experiment on in a useful way with my limited resources - I can only use 2 out of 4 cores for sieving due to memory usage with the factor-base bounds you chose. Reducing the bound to 100M doesn't harm speed much, and yield isn't an issue, as you noted earlier. I noted that hardly any time is spent on the quadratic-sieve factoring of large primes. 
300M for alim/rlim does seem too big. I'll use 180M for my next tests.
I tried 3 large primes with 33-bit and found sec/rel improved almost 20%, but yield stayed roughly constant. Sieving something like 30M to 200M should be enough to build a matrix, but perhaps too big a matrix for Dubslow or me to solve on home equipment (I have a 6-core i7 with 16GB). If 15f is more productive down really low, perhaps a sieve range like 5M to 160M would be faster. 
C186 @ i5232
Another one to play with/test, from the 1.0-1.1M range.
[CODE]R0: 784171785411817668204933354637697383
R1: 226807616081850997
A0: 5659929256521862665448251658528575438240
A1: 8112093496713326197144222976345393744
A2: 2464554855231923962723017259338
A3: 93313161866957928016503
A4: 1133441776831664
A5: 1069068
skew 43803291.66, size 3.577e-18, alpha -7.815, combined = 3.690e-14 rroots = 5[/CODE] 
Rich -
This new one, score 3.69, sieves just a bit worse than the 3.64 find from last week. Both are 7-10% worse than the 3.51. Three days on my GPU turned up a 3.05 and a 3.01 as best, so I'll let you find the good ones and just keep test-sieving what you provide. One of the 3.6's "should" sieve 4-6% better than the 3.51 (if it sieves as well relative to its score as the 3.51 does). If we find such a thing, I think we could stop the poly search at that point.

I've now tested the f siever - the binary works! For 15e/33-bit, searching low Q values has a time per relation equal to 15e at its best q (right around half of alim, so 90M for my tests with the poly scored at 3.51e-14). I haven't done enough testing to find where 15f and 15e have the same time per relation, but something like using 15f from 5M to 60M and 15e from 60M to 160M should provide enough relations. Yield is better with 15f - I guess due to the full factor base being used? I used the -d 1 flag as Henry suggested.

Sieve time estimate: 600M raw relations at 33-bit @ 0.11 or 0.12 sec/rel -> 1.2 mega-minutes single-core -> ~7 months on a quad-core desktop. I'll contribute up to a quad-core-month. If the 15f siever generates a higher duplicate rate at low Q, my estimates are low. But we may yet find a 5% better poly! 
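That back-of-envelope can be checked mechanically. The 600M relations and 0.12 sec/rel are the figures from the post; the 30-day month is my simplification:

```python
# Sanity-check of the sieve-time estimate: 600M raw relations
# at ~0.12 CPU-seconds per relation.
relations = 600e6
sec_per_rel = 0.12
core_minutes = relations * sec_per_rel / 60
months_on_quad = core_minutes / 4 / 60 / 24 / 30  # 4 cores, 30-day months
print(round(core_minutes), round(months_on_quad, 1))  # 1200000 6.9
```

So 1.2 mega-minutes single-core is right on, and on four cores that is just under 7 months of wall-clock time.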
[QUOTE=VBCurtis;397413]Rich -
This new one, score 3.69, sieves just a bit worse than the 3.64 find from last week. Both are 7-10% worse than the 3.51. Three days on my GPU turned up a 3.05 and a 3.01 as best, so I'll let you find the good ones and just keep test-sieving what you provide. One of the 3.6's "should" sieve 4-6% better than the 3.51 (if it sieves as well relative to its score as the 3.51 does). If we find such a thing, I think we could stop the poly search at that point. I've now tested the f siever - the binary works! For 15e/33-bit, searching low Q values has a time per relation equal to 15e at its best q (right around half of alim, so 90M for my tests with the poly scored at 3.51e-14). I haven't done enough testing to find where 15f and 15e have the same time per relation, but something like using 15f from 5M to 60M and 15e from 60M to 160M should provide enough relations. Yield is better with 15f - I guess due to the full factor base being used? I used the -d 1 flag as Henry suggested. Sieve time estimate: 600M raw relations at 33-bit @ 0.11 or 0.12 sec/rel -> 1.2 mega-minutes single-core -> ~7 months on a quad-core desktop. I'll contribute up to a quad-core-month. If the 15f siever generates a higher duplicate rate at low Q, my estimates are low. But we may yet find a 5% better poly![/QUOTE] I have been finding that some numbers suit the f siever better than others. Different fb bounds might be better at very low q. 
Since the sieving probably won't take more than 100 wall-clock days, it's not worth more than 5 more days of polynomial-searching to try to find an unlikely 5%-better polynomial.
(I will not have any spare cycles until the beginning of April, but at the beginning of April I will be able to put up to four modern quad-cores on the job.) 
I hope nobody has started sieving on this yet. I decided to tackle it:
[url]http://factordb.com/index.php?id=1100000000670257174[/url] 
[QUOTE=ryanp;398929]I hope nobody has started sieving on this yet. I decided to tackle it:
[url]http://factordb.com/index.php?id=1100000000670257174[/url][/QUOTE] Check the link again, it is already factored. I assume someone submitted the factors between your post and now. 
[QUOTE=MiniGeek;398930]it is already factored.[/QUOTE]
He meant it as a report that the number is now factored, not as a reservation ;). 
[QUOTE=MiniGeek;398930]Check the link again, it is already factored. I assume someone submitted the factors between your post and now.[/QUOTE]
I suppose that someone was ryanp. 
[QUOTE=VictordeHolland;398932]He meant it as a report that the number is now factored, not as a reservation ;).[/QUOTE]
[QUOTE=rajula;398933]I suppose that someone was ryanp.[/QUOTE] :redface: I read his post as saying just the opposite. 
It picked up a 7, yuck. Thanks, Ryan!
Could you post the log, or at least the parameters you chose for this? 
C194 @ i5236
Yuck, yay!
It lost the 7. The first decrease in years ?? 
It has ~t50 (here), and most likely Ryan has already grilled it well - maybe to a t55.
Best to run B1=3e8 curves on it now, or 110e6 on smaller computers if you wish. 
Was 5234 a lucky ECM hit or just a whole bunch of hardware to gnfs a C139 in only ~6 hours? I imagine that's well within Ryan's capabilities...?

[QUOTE=Dubslow;398981]Was 5234 a lucky ECM hit or just a whole bunch of hardware to gnfs a C139 in only ~6 hours? I imagine that's well within Ryan's capabilities...?[/QUOTE]
Well within. EDIT: I've run about 7000 curves at B1=3e8 on the c194 sometime earlier. EDIT2: And 12,000 curves at 860e6. No factor. 
c194 @ i5236
Some poly searching, but not very successful so far.
I searched leading coefficients 400K-1500K. The best so far:
[CODE]# expecting poly E from 1.22e-14 to > 1.41e-14
R0: 28802679190347950640622961076905085992
R1: 407302386245641081
A0: 133724758664977289938633386515649724519728371415
A1: 3087354419180727365131065771008983083315
A2: 8198257518534189800659473133083
A3: 91339106683419659560699
A4: 1417256419326482
A5: 683400
skew 211980820.13, size 5.400e-19, alpha -7.736, combined = 1.169e-14 rroots = 3[/CODE]
Others were: 1.069e-14, 1.058e-14, 1.055e-14, 1.055e-14 (two different ones). I will look further as time permits. 
I expanded my range to cover 100-1800K and found only one score better than the previously mentioned.
[CODE]R0: 41022789855819100446245236901416664699
R1: 1019636817060566507
A0: 384112992671117638711020950081661326068804342572
A1: 1976467057936727996025149999228597147176
A2: 78157658069982024722140360208107
A3: 295747804241547935485310
A4: 945198674876430
A5: 116604
skew 316580135.19, size 5.565e-19, alpha -7.880, combined = 1.188e-14 rroots = 5[/CODE]
More when time permits. 
I'll test-sieve these two, as well as any others posted with a score higher than 1.15e-14. I'll try a GPU-day or so on poly search myself, mostly to get a sense of how rare/nice your finds are.
I assume 16e/33 is best, but I'll try 15e/34 and 16e/34 for posterity. 
C194
I've searched 0-2.2M and nothing better has surfaced. Is it worthwhile to do a little more, since we are not in the expected range?
I have no experience with numbers this big, so I wouldn't know what parameters would be best. I can assist on the polynomial selection and help with sieving; someone else would have to coordinate this effort. 
[QUOTE=RichD;401026]I've searched 0-2.2M and nothing better has surfaced. Is it worthwhile to do a little more, since we are not in the expected range?
I have no experience with numbers this big, so I wouldn't know what parameters would be best. I can assist on the polynomial selection and help with sieving; someone else would have to coordinate this effort.[/QUOTE] Yes. The big guns who have previously done numbers this big usually search with coefficients of 50M or more, to reduce the skew a little while not changing the [expected] score. However, the best-scoring polys seem to have lower coeffs, so I don't think you're wasting effort searching the low-coeff space. The rule of thumb for poly select is to spend a minimum of 3% of the expected project length on poly searching; for a C150 that's around a day, so for a C194 that's more days than you and I care to spend (say, a GPU-year). So, we keep searching. Perhaps if we find a nice-enough polynomial, Ryan will do the heavy lifting again for us. I'll have a GPU free in a day or two, and I'll start searching at 40M. I haven't trial-sieved your first two polys yet. 
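The 3% rule of thumb can be made concrete under two loud assumptions (mine, not from the thread): a C150 takes about a month of sieving on one box, and GNFS effort roughly doubles every 5 digits:

```python
# Illustrative only: both constants below are assumptions, not
# measurements from this project.
sieve_days_c150 = 30          # ~a month of sieving for a C150 (assumed)
doubling_digits = 5           # effort doubles per ~5 digits (assumed)
sieve_days_c194 = sieve_days_c150 * 2 ** ((194 - 150) / doubling_digits)
poly_days = 0.03 * sieve_days_c194
print(round(poly_days))  # ~400 days of searching
```

Roughly 400 device-days of poly selection - consistent with the "say, a GPU-year" remark above.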
[QUOTE=VBCurtis;400747]I'll test-sieve these two, as well as any others posted with a score higher than 1.15e-14. I'll try a GPU-day or so on poly search myself, mostly to get a sense of how rare/nice your finds are.
I assume 16e/33 is best, but I'll try 15e/34 and 16e/34 for posterity.[/QUOTE] Posterity is useless, since I have no idea how many relations 34LP would need vs 33 for this job. The first poly you posted, a5 = 683400, sieves 10% or so faster than the second poly, a5 = 116604, after a single 1k test region at Q=50M. I'll test a few more regions to confirm. I picked alim=400M, without any good reason. 
1.7-2x the number of relations, depending on the size of the number.

[QUOTE=henryzz;401219]1.7-2x the number of relations, depending on the size of the number.[/QUOTE]
Exactly. So, if test-sieving shows 34LP finding relations 80% faster than 33, and we need somewhere between 70% and 90% more of them, we can conclude.... nothing. This is exactly the case for my small test: 1.82 sec/rel at 33-bit vs 1.02 at 34-bit (on an i7 laptop, about 1/3rd the speed of most desktops). Well, I suppose there is one useful conclusion: since it's not obvious that 34 is faster, the doubling of data to manage trumps the unknown improvement in sieve effort. 
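The "conclude nothing" point, in numbers. The laptop timings are from the post above; the 1.7x-2x relation factor is henryzz's estimate:

```python
# 34-bit LPs sieve faster per relation but need more relations;
# the total-time ratio straddles 1.0, so the comparison is a wash.
sec33, sec34 = 1.82, 1.02
low = 1.7 * sec34 / sec33   # if 34-bit needs 1.7x the relations
high = 2.0 * sec34 / sec33  # if it needs 2.0x
print(round(low, 2), round(high, 2))  # 0.95 1.12
```

Depending on the true relation multiplier, 34-bit total sieve time is anywhere from 5% cheaper to 12% more expensive than 33-bit - too close to call on sieve speed alone.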
4 GPU-days have turned up a 9.56 and a 9.40 poly, not competitive with Rich's finds. I'll keep looking.
I have covered 50 to 50.2M and 40 to 40.5M with stage1_norm 2.1e28. This selection leaves 9 pieces for each coeff, meaning anyone else who has a go at the same ranges I chose has a 1/9 chance of duplicating work. Not that there's any incentive to search the same range... 
Nothing better from 0-3M.
Third best is 1.03e-14. I'm using stage1_norm=1e28 at these low values. It takes about 8 hours on a GTX 460 per 100K of leading coeff. Continuing whenever I get a chance. 
Nothing better up to a leading coefficient of 4.5M.
I should have more time on a GPU in the coming days. I may reach 10M within a couple of weeks. 
c194 @ i5236
Overnight, a new number two popped out.
Still not in the desired range. [CODE]R0: 19590035041723157218547011054582449403
R1: 975334194934990309
A0: 24137781628053720561203589791084388125707564416
A1: 3729890152208266331360779006510859657032
A2: 64245986828706004956243043613430
A3: 130509958297049266684611
A4: 2223170352185530
A5: 4695300
skew 164421929.49, size 5.428e-19, alpha -7.477, combined = 1.176e-14 rroots = 5[/CODE] 
Passing through 7M (A5:) with nothing better than previously mentioned.

Rich, remind me how to run msieve, and I'll try to find something.
It's been a year and a half since I last ran a custom poly selection, and I have forgotten almost everything. But I didn't forget this thread: [url]http://mersenneforum.org/showthread.php?t=18368&highlight=msieve&page=42[/url] 
Thanks. According to my notes I perform 4 steps. There is further optimization using stage2_norm, but I haven't researched it yet.
[CODE]1. msieve -g 0 -t 3 -np1 "stage1_norm=2e28 20000000,21000000" -nps -v
2. sort -g -k 10 msieve.dat.ms | head -<num> > <tempfile>
3. replace msieve.dat.ms with the <tempfile>
4. msieve -g 0 -npr[/CODE]
Place the "n" number in the worktodo.ini file. Step 1 has the norm picked for this number; the X,Y pair is the range for the leading coefficient (20M-21M). This step takes hours to run - I can run a 100K interval on a GTX 460 in less than 8 hours. In step 2, the -k 10 is for a degree-5 polynomial (-k 11 is used for degree 6). The <num> is the numéro du jour, or the number of records to keep; I usually pick anything from 30 to 50. This step takes a few seconds. In step 3, the new <tempfile> replaces the old msieve.dat.ms for the last step. Step 4 takes around 10 minutes, depending on the number of records saved from the sort. VBCurtis has a good intro in this [url="http://www.mersenneforum.org/showpost.php?p=403501&postcount=107"]post[/url]. 
OK, I'll try to find something, quick and dirty. I'm going on a trip from Monday morning to the 21st, so I'll probably post something Monday and will resume the search after my return (if you are still looking for a good poly).
