I use a similar process to RichD, but I've written scripts to automatically find the sort key: [code]
# Find out how many fields each line has.
KEY=`head -1 msieve.dat.ms | wc -w`
# The sort key is 5 + the polynomial degree: 9 for degree 4,
# 10 for degree 5 and 11 for degree 6. So sanity check it.
if (($KEY<9)) ; then exit ; fi
if (($KEY>11)) ; then exit ; fi
# Now sort msieve.dat.ms on the last field of each line and save only the top few entries.
sort -g -k $KEY msieve.dat.ms | head -1000 > msieve.dat.ms.cut
[/code] Or in perl: [code]
# check how many fields there are in $PREFIX.msieve.dat.ms
open(IN,"< $PREFIX.msieve.dat.ms") or die "Can't read $PREFIX.msieve.dat.ms $!";
my $rec = <IN>;
my @words = split / /,$rec;
close(IN);
my $sortkey = scalar @words;
if (($sortkey<9) or ($sortkey>11)) {
  logwrite("Invalid sort key $sortkey");
  logwrite("First record in $PREFIX.msieve.dat.ms was $rec");
  logclose();
  die;
}
# Sort hits on the last field (score, lower is better) and save just the best ones.
$cmd = "sort -g -k $sortkey $PREFIX.msieve.dat.ms | head -200 > $PREFIX.msieve.dat.ms.cut";
$time = localtime time;
print "$time" if($ECHO_CMDLINE);
logwrite("=>$cmd") if($ECHO_CMDLINE);
$res=system($cmd);
[/code] It's not as bad as it looks; I've added a lot of sanity checks to the code. But it assumes all the polys msieve generates will be the same degree, so it will go wrong if we get a mixture of degrees in the same file.

Chris
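For what it's worth, the same-degree assumption could be sidestepped by sorting on each line's *last* field directly, whatever the field count. A minimal sketch (the demo file name is made up; substitute msieve.dat.ms):

```shell
# Degree-agnostic variant (sketch): prepend each line's last field,
# numeric-sort on it, then strip it off again, so mixed-degree .ms
# files are handled without knowing the field count in advance.
printf 'polyA 9 fields 3.1e25\npolyB 10 fields 1.2e25\n' > demo.ms
awk '{print $NF, $0}' demo.ms | sort -g | head -1000 | cut -d' ' -f2- > demo.ms.cut
head -1 demo.ms.cut    # best (lowest) score comes first
rm -f demo.ms demo.ms.cut
```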
We tell msieve what degree to look for, so we should never have multiple degrees of polys in the same file. I suppose it could happen if someone sent multiple GPU runs to the same file, so we shouldn't do that!
I keep all my GPU runs, but I tightly restrict my sizeopt output so those files are quite small. Setting stage2_norm merely reduces the output to the .ms file; it doesn't change the poly search process itself. That means I don't use the 'head' part of the process, as I just npr the entire (small) .ms file. This allows me to tinker with the rootopt settings on a "greatest hits" collection of, say, the best 200 hits in a week rather than just the 100-150 I get in a day from the 460M GPU. I spend a lot more time on the npr step than 10 minutes! But my batches are 100-200 hits per run, and the CPU is an i7 laptop chip at 1.7 GHz. I figure an extra half-hour per day of CPU time on the small chance that my 100th to 150th hits produce a good poly is worth it, compared to the large amount of watt-hours and GPU time it took to generate those hits. Generally, I'll try rootopt on any candidate from the file whose norm is within an order of magnitude of my very best hit. For this particular number, I get 1-2 hits a day at 3e25, so I try the rootopt step on anything better than 2.5e26. I hope this helps!
Nothing better through 12M (A5 range).

back, just tried a few hours, and got
[code]
Fri Jun 26 16:23:01 2015  R0: 14135786849137023876913291436065506569
Fri Jun 26 16:23:01 2015  R1: 18969287860104342181
Fri Jun 26 16:23:01 2015  A0: 3957161515503501886407430713481202014028653753560
Fri Jun 26 16:23:01 2015  A1: 237172237772281744778864702626993430912766
Fri Jun 26 16:23:01 2015  A2: 3750924488863288259553793715871617
Fri Jun 26 16:23:01 2015  A3: 6194491949591007980204110
Fri Jun 26 16:23:01 2015  A4: 37993686710338596
Fri Jun 26 16:23:01 2015  A5: 24001560
Fri Jun 26 16:23:01 2015  skew 304604311.11, size 3.944e-019, alpha -9.639, combined = 9.389e-015 rroots = 5
[/code]
That's definitely a polynomial, but the E-score is 12% worse than RichD's current best.

I know, I just worked on it for a few hours.

Best score now @ 1.062e-014, getting closer.

Passing through 13M, nothing better.
A couple at 1.0xx but I can't seem to break past 1.2. 
Has anyone tried the CADO tools? Someone might find a better poly with that.

OK, since I don't get any score above 1.062e-014 I'll post it:
[code]
R0: 14016133491555487650798640794471329869
R1: 15145334617368450031
A0: 108437424465414630272002077935902244002227944784
A1: 7126524296785042580075549282652665939040
A2: 94907948987266914587625146628433
A3: 3383537662813944945185927
A4: 8159033334328823
A5: 25043688
skew 120539617.50, size 4.699e-019, alpha -8.378, combined = 1.062e-014 rroots = 5
[/code] I'll continue to search a bit.
C194
Fourth best but a much lower skew.
[CODE]
R0: 15667848600695216207592357699117730834
R1: 10830955214877446659
A0: 14732880060224579990675942933044193009933005725
A1: 424638831939124759282888841878408667673
A2: 63182328735733107024500149152957
A3: 1070752312995054014096777
A4: 9921527589311698
A5: 14348040
skew 104921236.45, size 5.311e-19, alpha -8.343, combined = 1.159e-14 rroots = 3
[/CODE]
C194
As I am closing in on 20 Mil, a new number 1 (based on e-value) was found.
[CODE]
R0: 14769649678254236071870801775658732945
R1: 4212581526470259037
A0: 260758480321916667685688193169037338514796556784
A1: 28369420161080102434132404082540766030308
A2: 319515688007256176977918750453824
A3: 976782223305603155968711
A4: 8240833399911082
A5: 19274736
skew 191733447.30, size 5.827e-19, alpha -8.959, combined = 1.221e-14 rroots = 5
[/CODE] I will shut down for a week after I complete 0-20M.
I'm sorry to say this, but I'm worried that the skews on all these polynomials are high enough that the sieving yields will not be good; in my experience things drop off badly around skew = 10^8. I would strongly recommend writing a little perl script to trial-sieve all the polynomials with E > 1e-14 (you don't need to sieve very far, say 3K relations at each of 100M, 200M and 300M; look at the time-per-relation figure because it's much more stable than the yield).
I would be tempted to throw a couple of GPU-weeks at running c5=100M..200M, with stage1_norm set so that it takes a couple of GPU-weeks: I would do it myself if either of my GPUs were working.
Would degree 6 polys be worth trying? Or is it too far below the crossover from degree 5?
Chris 
[QUOTE=chris2be8;406368]Would degree 6 polys be worth trying? Or is it too far below the crossover from degree 5?
Chris[/QUOTE] It's pretty far below the crossover, but spending a GPU-day and test-sieving will tell us how far. My guess, from previous tests on some of the earlier forum-distributed poly searches, is 20% or so worse, but I'd like empirical data.

Mr Womack: what do you suggest for alim/rlim for the test-sieving?

I tried 400M on one of the first polys posted for this composite, score 1.15e-14. Yield for 16e/33LP was in the low 3's per Q for Q ~ 50M. The test was on a laptop that is now dead, so I'll redo the test on my desktop with a wider range of Q to get sec/yield data. That dead laptop was my CUDA install, so I cannot run poly search at this time.
Actually these polynomials don't look too bad when I do trivial test sieving
[code]
> cat gnfs.1221
n: 13546883948707557394740613723103946152064130242660227238148207099694217967258945948090755858238134930723530286428679498807036076305750824074596099018755707927936366610675572615755463725831948023
Y0: 14769649678254236071870801775658732945
Y1: 4212581526470259037
c0: 260758480321916667685688193169037338514796556784
c1: 28369420161080102434132404082540766030308
c2: 319515688007256176977918750453824
c3: 976782223305603155968711
c4: 8240833399911082
c5: 19274736
skew: 191733447.30
lpbr: 33
lpba: 33
mfbr: 66
mfba: 66
alambda: 2.6
rlambda: 2.6
alim: 300000000
rlim: 300000000
> ./gnfs-lasieve4I16e gnfs.1221 -a -f 300000000 -c 3000
total yield: 11269, q=300003001 (0.55676 sec/rel)
[/code] 3.7 relations per Q, so we could probably put alim=240M, sieve 20M-240M and get a decent matrix in about twelve Ivy Bridge thread-years.

The 1.159e-14 polynomial got [code]
total yield: 10352, q=300003001 (0.58905 sec/rel)
[/code] so using the 1.221e-14 instead might reduce the sieving time by about six thread-months; we've spent three real-time months searching to get these polynomials, which makes me think it's about time to stop polynomial selection and start some sieving.
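A back-of-envelope check on those numbers (my arithmetic, not from the thread; sec/rel was measured at Q=300M, so the true total should land somewhat under this, nearer the twelve thread-years quoted):

```shell
# Relation and time budget implied by the 16e test-sieve figures.
awk 'BEGIN {
    relperq = 3.7                          # relations per Q from the I16e test
    secrel  = 0.55676                      # sec/rel measured at Q=300M
    rels    = relperq * (240 - 20) * 1e6   # Q = 20M..240M
    printf "~%.0fM relations, ~%.1f thread-years at %.2f sec/rel\n", rels / 1e6, rels * secrel / 3.156e7, secrel
}'
```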
Since yield is pretty good, might this be a candidate for 15e/33LP on NFS@home? I don't know of a GNFS194 attempted on 15e before.
How big is the matrix for the GNFS192.3 on 15e/32? That's postprocessing now, right? 
It could be done with 15e;
[code]
> ./gnfs-lasieve4I15e gnfs.1221 -a -f 300000000 -c 3000
total yield: 4917, q=300003001 (0.62273 sec/rel)
> ./gnfs-lasieve4I15e gnfs.1221 -a -f 600000000 -c 3000
total yield: 4595, q=600003043 (0.76412 sec/rel)
[/code] so 1.6 relations per Q, and 10% slower than using 16e. This is a big job; you might well need a billion relations, so Q=20M..600M. I'd expect it to be possible to oversieve enough for the matrix to fit on a 32GB machine. It will take a month or so of real time on nfs@home.
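The relation budget implied by those 15e numbers (my arithmetic, assuming the yield stays near 1.6 rel/Q across the whole range) does come out near the billion mentioned:

```shell
# Total relations over the proposed Q range at the measured 15e yield.
awk 'BEGIN {
    relperq = 1.6                 # relations per Q with 15e
    printf "Q=20M..600M at %.1f rel/Q: ~%.0fM relations\n", relperq, relperq * (600 - 20)
}'
```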
[QUOTE=VBCurtis;407179]How big is the matrix for the GNFS192.3 on 15e/32? That's postprocessing now, right?[/QUOTE]
[code]
Fri Jul 17 12:26:49 2015  matrix is 31471012 x 31471237 (14625.0 MB) with weight 3892604835 (123.69/col)
Fri Jul 17 12:26:49 2015  sparse part has weight 3519133867 (111.82/col)
Fri Jul 17 12:26:49 2015  using block size 8192 and superblock size 1179648 for processor cache size 12288 kB
Fri Jul 17 12:30:00 2015  commencing Lanczos iteration (6 threads)
Fri Jul 17 12:30:00 2015  memory use: 12537.6 MB
Fri Jul 17 12:32:06 2015  linear algebra at 0.0%, ETA 692h24m

 3229 pumpkin  20  0 17.6g  17g 1372 R  540 54.9  79241:10 msieve
[/code] (so it doesn't quite fit in a 16G machine)

For 3270.698, I ran a trial post-processing on Monday and got

[code]
450723464 relations; 334109142 unique; 287274746 unique ideals
Mon Aug 3 22:20:02 2015  weight of 26771795 cycles is about 3213101586 (120.02/cycle)
[/code]

so I've added 50MQ more sieving to see what changes.
Thank you for the data! Very interesting, and it shows the boundary for 16GB machines at around GNFS ~190. I'll be very interested to see whether 15e/33LP creates a substantially larger matrix for the new job.

I have a 64G machine that could be used if the matrix gets really big (hopefully only the intermediate processing steps get really big and the matrix fits in 32G). Shall I stick this one onto the NFS@home queue and wait until autumn?

This might be the best way to handle it. Interest may not last long enough to finish the sieving if it's done by forum members.
Since the last step was a decrease, it may become more interesting if the sequence can keep dropping in size. 
621M relations (484M unique, 519M unique ideals) are not enough to build a matrix for this C194. nfs@home sieving continues.

Should be getting close based on this [URL="http://www.mersenneforum.org/showpost.php?p=411821&postcount=377"]status[/URL].
Of course many things could postpone the results. 
The computer crashed and it took a day or so before I had the spare time to bring it in from the outbuilding, clear out the fans, remove the broken GTX580 and put it back. Current ETA is tomorrow evening.

There may be a problem:
[code]
Sun Nov 1 13:23:24 2015  using block size 8192 and superblock size 1179648 for processor cache size 12288 kB
Sun Nov 1 13:26:41 2015  commencing Lanczos iteration (6 threads)
Sun Nov 1 13:26:41 2015  memory use: 12743.4 MB
Sun Nov 1 13:26:45 2015  restarting at iteration 418288 (dim = 26450011)
Sun Nov 1 13:28:48 2015  linear algebra at 76.8%, ETA 179h57m
Sun Nov 1 13:29:29 2015  checkpointing every 50000 dimensions
Wed Nov 4 17:25:52 2015  lanczos error: submatrix is not invertible
Wed Nov 4 17:25:52 2015  lanczos halted after 476936 iterations (dim = 30158654)
Wed Nov 4 17:25:52 2015  linear algebra failed; retrying...
Wed Nov 4 17:25:52 2015  commencing Lanczos iteration (6 threads)
Wed Nov 4 17:25:52 2015  memory use: 12743.4 MB
Wed Nov 4 17:25:53 2015  restarting at iteration 476799 (dim = 30150056)
Wed Nov 4 17:27:44 2015  linear algebra at 87.6%, ETA 87h 3m
Wed Nov 4 17:28:17 2015  error: corrupt state, please restart from checkpoint
...
Wed Nov 4 22:39:16 2015  restarting at iteration 476008 (dim = 30100040)
Wed Nov 4 22:41:18 2015  linear algebra at 87.5%, ETA 96h48m
Wed Nov 4 22:41:59 2015  checkpointing every 50000 dimensions
Sun Nov 8 15:13:43 2015  lanczos halted after 544333 iterations (dim = 34420477)
Sun Nov 8 15:14:23 2015  recovered 28 nontrivial dependencies
Sun Nov 8 15:14:25 2015  BLanczosTime: 319182
...
Sun Nov 8 16:18:55 2015  commencing square root phase
Sun Nov 8 16:18:55 2015  reading relations for dependency 1
Sun Nov 8 16:19:00 2015  read 17209116 cycles
Sun Nov 8 16:19:40 2015  cycles contain 57590394 unique relations
Sun Nov 8 16:31:14 2015  read 57590394 relations
Sun Nov 8 16:37:34 2015  multiplying 57590394 relations
Sun Nov 8 19:42:12 2015  multiply complete, coefficients have about 3750.18 million bits
Sun Nov 8 19:42:19 2015  error: relation product is incorrect
Sun Nov 8 19:42:19 2015  algebraic square root failed
[/code] I will let it run overnight and see if all the relation products are incorrect, but this could mean I have to repeat the month-long linear algebra (at which point I would be quite inclined to ask if there's anyone here with cluster resources that could be used).
Not in fact doomed!
1 Attachment(s)
[code]
Sun Nov 8 23:55:56 2015  prp95 factor: 14089903807407817964276328535774723205270936104996902405773782502910074615781195299113386560171
Sun Nov 8 23:55:56 2015  prp99 factor: 961460357279744811894612296748447791859181309959883950989025803121327210502873992884892881269699813
[/code] Log attached; about 706 hours on six cores of an i7-4930K for linear algebra on a 34.4M matrix with density 120.

Next line begins 2^3*5*11
Tough job; kudos!

Beautiful! Congrats! Tough job!
[QUOTE=fivemack;415499]Next line begins 2^3*5*11[/QUOTE] Grrr... short-lived... We hoped for a downdriver, or at least no 3, 5 :sad: D3 is freaky, even when it only appears in partials... this smells like abandoning this sequence...
Working on the next lines. Still 2^3 · 5.

That guideless phase didn't last long. :no:

Ran 904 curves @ B1=1e6, B2=1045563762 on 4788:i5241 C175 (and ~50 with B1=3e6, B2=5706890290). That's it for now.

Running 2301 curves with B1=3e6.

Those are finished. Now running curves at B1=11e6.

Raise it in this thread if I should instruct my Minions to run thousands of curves.

I poked the C175 with 500 curves at B1=43e6. I'll leave more serious efforts to others.

[QUOTE=yoyo;415676]Raise it in this thread if I should instruct my Minions to run thousands of curves.[/QUOTE]
Sure. That would be fine with me. 
[QUOTE=yoyo;415676]Raise it in this thread if I should instruct my Minions to run thousands of curves.[/QUOTE]
17770 @ 11e7 would be nice and appreciated. 
Ok, they will be sent as next to the Minions.

40% done so far: [url]http://www.rechenkraft.net/yoyo//y_status_ecm.php[/url]

C175 @ i5241
[B]yoyo[/B] is about to finish t55 (ECM work).
I don’t have much experience with numbers this large. From a previous post [QUOTE]I use (GNFS digits - 6)/3 rule.[/QUOTE] So this C175 needs (175-6)/3 ≈ t56.3, about 4/15 of the way from t55 to t60. Does 11200 @ 26e7 sound about right? Another post says [QUOTE]I use 0.31 * digits for GNFS[/QUOTE] which calculates to just under t55, which would mean we already have enough ECM.
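The two quoted rules of thumb, evaluated for this C175 (plain arithmetic, assuming t-levels interpolate linearly between t55 and t60):

```shell
# Evaluate both ECM-depth rules of thumb for a 175-digit GNFS candidate.
awk 'BEGIN {
    d  = 175
    r1 = (d - 6) / 3      # the "(GNFS digits - 6)/3" rule
    r2 = 0.31 * d         # the "0.31 * digits" rule
    printf "rule 1: t%.1f (%.0f%% of the way from t55 to t60)\n", r1, (r1 - 55) / 5 * 100
    printf "rule 2: t%.2f (just under t55)\n", r2
}'
```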
I have my own rule of thumb for estimating.
A 175-digit number will take about 20,000 thread-hours to sieve. Once you've done a t55, the probability of finding anything by doing a t60 is about (60-55)/55, so one in eleven or so. So it's not worth doing more than 2,000 thread-hours of ECM. The 17700@11e7 is already about 2,000 thread-hours of ECM (one curve at that level takes about ten minutes). So I would say you've done enough ECM, and start polynomial selection now.
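That budget arithmetic can be sketched directly (numbers taken from the post above):

```shell
# fivemack's rule of thumb: cap ECM effort beyond t55 at the
# GNFS time scaled by the chance a full t60 still finds a factor.
awk 'BEGIN {
    gnfs = 20000              # estimated thread-hours to sieve a C175
    p    = (60 - 55) / 55     # chance a full t60 finds a factor after a t55
    printf "ECM worth at most ~%.0f thread-hours beyond t55\n", gnfs * p
}'
```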
[QUOTE=fivemack;416442]So I would say you've done enough ECM, and start polynomial selection now.[/QUOTE]
I'll start the poly search tonight. Thanks for the stats; that seems close to optimal. I have no idea how long it takes to sieve numbers this large, since they are beyond my resources.
My "rule of thumb" is 0.33*ndigits; this (somehow) eliminates the 3-way splits. If a number survives that much ECM, there's a high chance it will only split in two by GNFS.

I use that ratio for smaller numbers. I think the ECM work (resources) increases faster than the NFS resources needed for the same equivalent size as the composites grow larger.
But what do I know? :smile: 
I think ECM digits = digits * 0.33 was a rule of thumb from when poly selection was vastly less powerful than it is now. Modern msieve, particularly with GPU stage 1, has taken a digit or two away from effective difficulty, resulting in a lower optimal ECM effort.
(digits - 6)/3 = (digits/3) - 2, about half the ECM effort of digits/3, and it looks like it matches up pretty well with Tom's expected-factor-time calculation. It is also likely that folks used to slightly over-ECM numbers; I certainly did, as ECM is efficient to run on low-spec machines compared to LLR, and convenient to run on such machines compared to NFS. As for 3-way splits, I consider those a rare treat!
What you say makes total sense. You might also consider that GPUs are not only doing poly selection but ECM too. Unfortunately I am mostly at the "yafu stage", using msieve very seldom (practically never, since yafu does everything for me), and "switch ECM to GPU" and "switch poly selection to GPU" have been high-priority items on my to-do list for years, but I never found the time/resources to play with them (i.e. find the right executables or build them, learn the tools, find free time on the GPUs/machines which most of the time run TF, GPU-LL, and RL-related tasks). It was planned some time ago, then my Titan crashed (not yet fixed; I bought a new one. The old one had many adventures, like I put it in an oven and the capacitors blew, hehe; I may post in the "cooling thread" if I find the time, I have a lot of things to tell you).
I still hope to go "GPU ECM" and "GPU poly" 100% very soon, and then I will have to re-read all these threads... grrr...
C175 @ i5241
Found a good poly.
[CODE]
N: 1561248012875604421290997452712948651673429631296532916308406424711670868236536811765897539577054945788349762744593204554580205942062187848423188814351767445694628965970805839
# expecting poly E from 1.73e-13 to > 1.99e-13
R0: 2921485258420054233795687624707655
R1: 11197286481893873
A0: 3648904307721440541363054106517980023512992
A1: 1228544630159808525398670589488439616
A2: 2336973510196735534005200082
A3: 4718033960360448430271
A4: 613255320209996
A5: 7335900
# skew 18500902.09, size 5.225e-17, alpha -9.145, combined = 1.915e-13 rroots = 3
[/CODE]
[QUOTE=RichD;416713]Found a good poly.[/QUOTE] Are you running Linux or Windows?
Linux with GTX 560 Ti.

Is anybody here willing to give CADO polynomial selection a try? I received a message from Maksym, who seems to think CADO is better than msieve at the polynomial selection stage.
Currently I, Wenjie, Kurt, and Maksym are factoring 10^2340-1 using GNFS; the poly was searched by Kurt and Maksym. Kurt used msieve with a GPU card for a month, Maksym used CADO for two weeks. Maksym's poly sieves 30% faster than Kurt's. Here are Maksym's impressions of the process: [CODE]
"Hi guys,
Sorry for the late reply. Read my old letter below. If you still have questions, please ask.
I believe I used this image for Ubuntu 12.04:
http://sourceforge.net/projects/osboxes/files/vms/vbox/Ubuntu/12.04/Ubuntu_12.04-64bit.7z/download
Username: osboxes
Password: osboxes.org

My (somewhat checked) observations about CADO:
1) You can download a development version after 2.1.1 from a git repository, they have a link on the site now.
2) Relation sieving in CADO inside VirtualBox is slower than relation sieving in msieve in Windows on the same machine.
3) VirtualBox allows using only a fraction of machine memory, e.g. if you have 8 Gb in Windows, you'll have ~5 Gb in Ubuntu.
4) A good msieve polynomial cannot be optimized by CADO. It does the job but the root-optimized polynomial is always worse.
5) For the same reason, you can't make a SNFS polynomial better by running it through CADO root optimization.
6) CADO can't use GPU.
BUT:
1) Polynomial selection in CADO is extremely good. I bet I can find a better polynomial in CADO than in GPU msieve, in less time, and probably consuming less electricity too.
Let me know,
Max

Hi all,
This is how I installed CADO on my Windows 7 machine:
https://bitbucket.org/cybertools/malware_tools/issues/22/virtualbox-ubuntu-installation
Just download the latest CADO 2.1.1 instead of the referred CADO 2.0 and ignore the bitbucket script altogether. Also make sure to switch BIOS to operate 64-bit Ubuntu in VirtualBox if you have a 64-bit machine.
Max"
[/CODE]
Is he willing to share the two polys being compared for 10^2340-1?

Also, what sort of GPU was Kurt using? And how was he using it? (Used the wrong way, you could easily waste a lot of search time.)
Chris 
[QUOTE=wreck;416866]...Maksym, who seems to think CADO is better than msieve at the polynomial selection stage.
Currently I, Wenjie, Kurt, and Maksym are factoring 10^2340-1 using GNFS; the poly was searched by Kurt and Maksym. Kurt used msieve with a GPU card for a month, Maksym used CADO for two weeks. Maksym's poly sieves 30% faster than Kurt's.[/QUOTE] That's the 2340L (i.e. the 10,1170+ L, with M already factored) [URL="http://stdkmd.com/nrr/repunit/phin10.cgi?p=24#N2340L"]c189[/URL] cofactor, right? (10^2340-1 also has the 10,585 c266 cofactor.) CADO does have a good poly selector. Good idea for someone to try it!..
[QUOTE=Dubslow;416867]Is he willing to share the two polys being compared for 10^23401?[/QUOTE]
Kurt's poly (found by msieve):
[CODE]
n: 952292197412453381717073518174919932906570614453890307718545364847038044348090273591549618739487450764456259682247432651918464778452105306957838135819115896078660572289957993060613591472021
# norm 1.484895e-018 alpha -8.148489 e 1.926e-014 rroots 5
skew: 1072448109.23
c0: 2768823872072333261931988968816202213195947984000
c1: 15275653385062057929219319069678526857760
c2: 138387764172029764068756785487648
c3: 117360940966783947764667
c4: 122686529654486
c5: 9240
Y0: 10060501899165771381498290805092655859
Y1: 31126454643178352737
type: gnfs
rlim: 200000000
alim: 200000000
lpbr: 32
lpba: 32
mfbr: 64
mfba: 94
rlambda: 2.6
alambda: 3.6
[/CODE]
Max's poly (found by CADO):
[CODE]
n: 952292197412453381717073518174919932906570614453890307718545364847038044348090273591549618739487450764456259682247432651918464778452105306957838135819115896078660572289957993060613591472021
# MurphyE = 1.09e-11
# lognorm 58.91
skew: 45137920.0
c0: 197006411290915294206892058826736550862182325
c1: 96891972101929844956190270575674132078
c2: 1564572888018669862818629860418
c3: 73230310522607368819384
c4: 190657861142439
c5: 1640100
Y0: 3570923609932057924385087593377792638
Y1: 205226838884523100253827
type: gnfs
rlim: 200000000
alim: 200000000
lpbr: 32
lpba: 32
mfbr: 64
mfba: 94
rlambda: 2.6
alambda: 3.6
[/CODE]
We sieve special-q from 23M to 110M. On an i3 processor, at q=60M over the range 60M to 60M+100, the 1st poly's sieve speed is 0.896 sec/rel and the 2nd poly's is 0.637 sec/rel. Other ranges gave similar results.
[QUOTE=chris2be8;416875]Also what sort of GPU was Kurt using? And how was he using it (used the wrong way you could easily waste a lot of search time)?
Chris[/QUOTE] I'm not sure which GPU card Kurt is using; he followed the steps in msieve's readme file. It is the first time Kurt has used msieve's GPU version to search for a polynomial. Before this, he and I used his i7 CPU to search polynomials for about 4 c15x-c16x numbers; most of the time we use -np1 to find some candidates, and select the biggest e-score polynomial as the final poly file. For this number, -np2 was used too.

On the other hand, it is also the first time Max has used CADO to search for a polynomial. It seems the skew conventions of CADO and msieve are not the same; the second poly's skew is the value from CADO. Max also gave the number 10^429-1 a try; he says the skew should be the value from [url]http://myfactors.mooo.com/[/url] after the polynomials are given.

Anyway, this is not the point I want to make. Maybe the polynomial we chose from msieve is not good enough, and maybe neither Kurt's nor Max's way is the right way. I want to point out that there is another public tool, named CADO, that can be used for the polynomial search. It would be nice to see more people use it, to see whether it is better than msieve.

Max sent me an article: Shi Bai, Cyril Bouvier, Alexander Kruppa, and Paul Zimmermann, "Better polynomials for GNFS", 2010, Mathematics of Computation. In this paper they claim to have found better polynomials for RSA-768 and two polynomials for RSA-1024, using the algorithm implemented in CADO-NFS.
[QUOTE=Batalov;416900]That's the 2340L (i.e. the 10,1170+ L, with M already factored) [URL="http://stdkmd.com/nrr/repunit/phin10.cgi?p=24#N2340L"]c189[/URL] cofactor, right? (10^2340-1 also has the 10,585 c266 cofactor.)
CADO does have a good poly selector. Good idea for someone to try it!..[/QUOTE] Yes, it is the number 10,1170L, if we use a tag system similar to the Cunningham numbers. Kurt abbreviates this number as R2340L; it is a 189-digit number.

By the way, it seems the number 6,448+ has been quiet for quite a while. Bruce told me "Sieving was finished long ago, with Batalov's polynomial selection. I haven't gotten a chance to run the matrix." I am not sure whether it is because of the memory issue one meets on most big numbers. Do you have a machine with 32GB? Is it possible for you to run the post-processing of this number?
Nope, my postprocessing B+D years are behind me. (Especially for this size.)
Bruce was in contact with Greg about running MPI on his cluster for this number; that's the last I heard about that project and that was quite a while ago. 
C175 @ i5241
[QUOTE=RichD;416713]Found a good poly.
[/QUOTE] Here is one that sieves about 5-10% better:
[CODE]
n: 1561248012875604421290997452712948651673429631296532916308406424711670868236536811765897539577054945788349762744593204554580205942062187848423188814351767445694628965970805839
# norm 7.864265e-17 alpha -7.895789 e 2.263282e-13 rroots 3
skew: 41354998.50
c0: 2217406006295087501003654530136566330136640
c1: 74149137268999111127815029367344384
c2: 30723141702894761175690717314
c3: 360911433696642764671
c4: 4122762472806
c5: 110880
Y0: 6756530390688759011127125642945779
Y1: 3599383523628534041
[/CODE]
I've queued that polynomial up at nfs@home (sieving 20M..100M with 15e should get 400M relations easily)

[URL="http://mersenneforum.org/showthread.php?p=422876#post422876"]I finished the post-processing[/URL], though someone else both submitted the factors to the FDB before I could and found the P33 in the next line before I did. Control of the sequence has been wrested from me :smile:

I've run 7500@43e6 and 1800@110e6 on the C187.

How about you continue at 110e6 and I'll start on 260e6?

Ok, I'll run some more curves @110e6.

[QUOTE=unconnected;423502]Ok, I'll run some more curves @110e6.[/QUOTE]
+2200@110e6, total count 4000@110e6. 
[QUOTE=unconnected;423502]Ok, I'll run some more curves @110e6.[/QUOTE]
Since this post (a couple hours short of 10 days) I've run 2928 curves at 260e6 (and continuing, for now). That's just over 1900 hyperthread-hours, or say 850 Sandy Bridge core-hours. I'm not sure how close unconnected is to finishing the t55, but let's assume it's complete; then, according to [URL="http://www.mersenneforum.org/showthread.php?p=416442#post416442"]fivemack's post[/URL], there's around a 1 in 11 chance of finding a factor, so we should do say 1/12 of the expected GNFS time as ECM at 260e6 (since it takes a non-trivial amount of work to get to t55 too). I'm not sure how my thread-hours relate to his thread-hours, though. Does it seem reasonable that if GNFS-175 is 20K thread-hours, then GNFS-187 is around 80-100K?
More like 100-120k. However, the 1/11 chance applies for ECM from an already-complete t55 to a complete t60. It's not the case that one should run 1/11th of the expected GNFS time; rather, one should compare whether a t60 will take less than 1/11th of the expected sieve + LA time.
If we include LA time, 120k thread-hours is likely on the low end. Perhaps half a t60 is enough? That would be around a 1/19 chance of a factor after t55, as half a t60 is around t58. If that would take ~6k thread-hours, then the 0.31*size heuristic fits well with expected-factor-effort estimates.
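As a cross-check on the 100-120k figure, assuming GNFS effort roughly doubles every 5 digits (a common rough scaling, not something asserted in this thread):

```shell
# Extrapolate GNFS-175 effort to GNFS-187 under a doubling-per-5-digits model.
awk 'BEGIN {
    base = 20000                        # thread-hours quoted for a GNFS-175
    est  = base * 2^((187 - 175) / 5)   # two-and-a-bit doublings
    printf "GNFS-187 estimate: ~%.0fK thread-hours\n", est / 1000
}'
```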
I'm mostly just wondering what fivemack's thread-hours are vs my SNB cores/hyperthreads. Once we can compare those, we can likely come up with a reasonable ECM plan.

My grasp of his estimates is that one Opteron thread-hour is about the same amount of work as one HT thread-hour on a typical 3.x GHz desktop Intel chip.
HT is around 20% more efficient for sieve work, so at 3.4 GHz each hyperthread would effectively provide a 2 GHz core, roughly the same as his Opterons.
Okay, so after accounting for pauses, let's say it took 1850 thread-hours to do 2928 curves at 260e6. If we assume the GNFS will be around 100K thread-hours, and t55->t60 has around a 1/11 chance to find a factor, then by fivemack's rule of thumb we should do ~9K thread-hours of ECM at the t60 level, which is around 2928*9000/1850 = 14244 curves at 260e6. Does that seem reasonable?

I think you're misapplying his rule of thumb. His idea is that if a t60 can be done in less time than 1/11th the expected GNFS time, the t60 is worth doing.
We can break it down digit by digit: for instance, after t55 is done, t57 is a t55's worth of 260M curves, t58 is two t55s' worth of 260M curves = half a t60, etc. The nth digit of work has a 1/n chance to find a factor. t55 to t60 is roughly 5/55, which is where the 1/11 comes from.
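That digit-by-digit breakdown can be sketched numerically; summing the per-digit chances lands close to the quoted 1/11:

```shell
# Per-digit chance of a factor from t55 to t60, and the combined odds.
awk 'BEGIN {
    total = 0
    for (n = 56; n <= 60; n++) {
        printf "digit %d: ~1/%d chance of a factor\n", n, n
        total += 1 / n
    }
    printf "t55 -> t60 combined: ~1 in %.1f\n", 1 / total
}'
```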
[QUOTE=VBCurtis;424950]I think you're misapplying his rule of thumb. His idea is that if a t60 can be done in less time than 1/11th the expected GNFS time, the t60 is worth doing.
[/QUOTE] By the estimates posted, a full t60 would take about 1850/2928*42000 = 26.5K thread-hours, which is rather more than the 9K thread-hours that is 1/11th of the GNFS. So we should definitely not be doing a full t60. The question then becomes how much of that t60, and simply getting as many curves as possible within the 9K thread-hours seems reasonable to me, even if it's a misapplication of the rule of thumb. Unless you have any better ideas? :smile: I barely qualify as literate in this arena, far from expert.
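Putting the measured curve rate together with the 9K-hour budget (my arithmetic; the 42000-curve figure for a full t60 at B1=260e6 comes from the estimate above):

```shell
# Convert the measured thread-hours-per-curve into t60 cost and curve budget.
awk 'BEGIN {
    h = 1850 / 2928        # measured thread-hours per curve at B1=260e6
    printf "full t60 (~42000 curves): ~%.1fK thread-hours\n", h * 42000 / 1000
    printf "9K-hour budget buys ~%d curves\n", 9000 / h
}'
```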
I like no more than 14k curves at 260M. That's 1/3rd of a t60, which gets us to about t57.

Okay, I'm getting bored. I've done 3680 total curves of 14K target at 260e6. unconnected, have you completed a t55?

[QUOTE=Dubslow;425172]Okay, I'm getting bored. I've done 3680 total curves of 14K target at 260e6. unconnected, have you completed a t55?[/QUOTE]
Perhaps [B]yoyo[/B] may be interested in queuing 10,000 curves @ 26e7. 
[QUOTE=RichD;426760]Perhaps [B]yoyo[/B] may be interested in queuing 10,000 curves @ 26e7.[/QUOTE]
We currently need 13.5K curves at 110e6 to complete the t55, and 8.4K at 260e6 to complete the 14K discussed above. I'm still running 260e6 curves here, though only on one i7-2600K. Maybe yoyo@home could do, say... 15K@260e6?
[QUOTE=Dubslow;426764]
Maybe if yoyo@home did, say... 15K@260e6?[/QUOTE] I'll do it. They are now sent out to the Minions. yoyo 
Factored :D
[url]http://www.rechenkraft.net/yoyo//y_factors_ecm.php[/url] 
[QUOTE=yoyo;426878]Factored :D
[url]http://www.rechenkraft.net/yoyo//y_factors_ecm.php[/url][/QUOTE] Gah! I've run more curves in total than yoyo@home did! Not fair! :razz: Edit: I'd done 5792@260e6 vs 4330@260e6 from yoyo@home 
[QUOTE=yoyo;426878]Factored :D
[url]http://www.rechenkraft.net/yoyo//y_factors_ecm.php[/url][/QUOTE]:w00t: 
The C156 splits as a P38*C119, has anyone else made it this far yet or shall I just let the NFS on the C119 run?

[QUOTE=Dubslow;426879]Edit: I'd done 5792@260e6 vs 4330@260e6 from yoyo@home[/QUOTE]
Hmmm... Somethin's fishy. What's the probability that a p53 will resist this many curves? 
[QUOTE=axn;426937]Hmmm... Somethin's fishy. What's the probability that a p53 will resist this many curves?[/QUOTE]
10k curves at 260M is just under 1.2*t55, so a p53 would be found ~2.3 times over, on average. I guess you could say 1 out of e^2.3 p53s would resist that many curves? So, like 8% of the time? I think the 110M level was mostly skipped for this number, jumping straight from a t50 to curves at 260M, so it's not as crazy a miss as it seems on first glance. 
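As a sanity check on the arithmetic above: under a Poisson model, the chance a factor survives a run in which it would be found ~2.3 times over is exp(-2.3), i.e. on the order of 8-10% (illustrative only; the 2.3 figure is the estimate from the post):

```python
from math import exp

expected_hits = 2.3            # "found ~2.3 times over, on average"
survive = exp(-expected_hits)  # chance the p53 resists every curve
print(survive)                 # ~0.10
```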
[QUOTE=VBCurtis;427003]10k curves at 260M is just under 1.2*t55, so a p53 would be found ~2.3 times over, on average. I guess you could say 1 out of e^2.3 p53s would resist that many curves? So, like 8% of the time?
I think the 110M level was mostly skipped for this number, jumping straight from a t50 to curves at 260M, so it's not as crazy a miss as it seems on first glance.[/QUOTE] 4K@110e6, 10K@260e6. Not the craziest miss in the world. I prefer to think of it as a lesson on why we don't halfheartedly sortakindanotreally complete the calculated amount of ECM. More than once I was tempted to say "screw it" and go download CADO... :smile: Any word on 5236:C195? 
[QUOTE=Dubslow;427004]Any word on 5236:C195?[/QUOTE]
Did you run any ECM on it after fully factoring i5235? It will need up to at least t60+, possibly a sizable fraction of t65. I'm willing to help with some of the lower tier ECM but can't start for a few days. 
[QUOTE=swellman;427018]Did you run any ECM on it after fully factoring i5235? It will need up to at least t60+, possibly a sizable fraction of t65. I'm willing to help with some of the lower tier ECM but can't start for a few days.[/QUOTE]
Although my factorization finished, I was away from the computer at the time. Someone else had already completed and reported 5235, so I haven't touched 5236. 
I did 4400@11e6 and 2000@43e6 on the C195.

Given how many numbers we look at, 8% will happen fairly often.

[QUOTE=unconnected;427028]I did 4400@11e6 and 2000@43e6 on the C195.[/QUOTE]
+1800@43e6 
+3600@43e6
Total curve count from me: 4400@11e6 and 7400@43e6. Does someone plan to do some curves @11e7, or should we ask yoyo for help? 
I've dropped yoyo a note about this thread. Hoping they can bring some help.
What level of ECM does this number eventually require before swapping to GNFS? A full t65? 
Thanks for the pm. I've now sent 18000 curves @11e7 to the minions.

Oops, I'm at 2800@43e6 myself. Oh well. Maybe yoyo should only do 17K@110e6 before switching to 42K@260e6.
A full t60 is obviously required, but it will take some analysis to find out how much of the t65 we should do. While yoyo is handling the t55 and t60, I'll run some curves at 850e6 to get a CPU time analysis started. VBCurtis/fivemack, what do we estimate the GNFS time will be? 2^(8/5)*GNFS187 ~= a bit over 300K thread hours? 
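The 2^(8/5) factor is the usual rule of thumb that GNFS effort roughly doubles every 5 digits; a minimal sketch, assuming a ~100K thread-hour baseline for GNFS-187 (my back-reading of the 300K figure, not stated explicitly in the thread):

```python
def gnfs_hours(base_digits, base_hours, target_digits):
    # Rule of thumb: GNFS effort roughly doubles every 5 digits.
    return base_hours * 2 ** ((target_digits - base_digits) / 5)

# ~100K thread-hours for GNFS-187 is an assumed baseline here
est = gnfs_hours(187, 100_000, 195)
print(est)  # a bit over 300K thread hours
```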
[QUOTE=Dubslow;427867]
VBCurtis/fivemack, what do we estimate the GNFS time will be? 2^(8/5)*GNFS187 ~= a bit over 300K thread hours?[/QUOTE] That depends on parameter choice. Will it be 15e at NFS@home? GNFS195 is stretching 15e, so the time estimate should be padded a bit compared to the usual 16e choice for a number this size, say by 8-10%. 15e/33 should work okay, just a bit slower than 16/33. I'll see if I can improve on this rough 325K thread-hr estimate by finding results from similar-sized factorizations on the forum. 
After a fiasco in which I didn't realize how much memory stage 2 would take, causing my computer to lock up due to lack of RAM (the post-mortem swap-cleaning took ten minutes!), it seems that each curve takes around 1.77 thread hours at 850e6. Given that a t65 requires 69.4K curves, we're looking at roughly 123K thread hours for a full t65, and I estimate there's roughly an 8% chance of a factor existing in the 61-65 digit range, far less than the 30+% chance needed to make the whole t65 worth it by fivemack's rule.
Abusing the rule the same way as before would suggest ~30K thread hours at the t65 level (~17.5K@850e6), but I'm still not convinced how useful this "technique" is. 
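The break-even odds implied by fivemack's rule can be sketched as ECM cost divided by GNFS cost (using the per-curve timing above and the rough 325K thread-hour GNFS estimate from this thread; the variable names are mine):

```python
curves_t65 = 69408                         # curves at B1=850e6 for a full t65
hours_per_curve = 1.77                     # measured above
ecm_cost = curves_t65 * hours_per_curve    # ~123K thread hours
gnfs_cost = 325_000                        # rough GNFS-195 estimate
breakeven = ecm_cost / gnfs_cost           # odds needed to justify a full t65
print(ecm_cost, breakeven)                 # ~123K hours, ~0.38
```

The ~0.38 break-even is consistent with the "30+% chance" figure quoted above.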
Repeat your analysis for a full t60 to first demonstrate that's worth it. If it's barely worth it to complete a t60, we can run just a couple thousand curves at 850M and call it good. If it's easily worth it to complete t60, then we know it's best to go on.
I'm not sure more than a full t60 is justified. 
[QUOTE=VBCurtis;427901]Repeat your analysis for a full t60 to first demonstrate that's worth it. If it's barely worth it to complete a t60, we can run just a couple thousand curves at 850M and call it good. If it's easily worth it to complete t60, then we know it's best to go on.
I'm not sure more than a full t60 is justified.[/QUOTE] Actually, I was just doing that. I'm working up a somewhat more generalized formula that thinks along similar lines, and the numbers I got suggested that even 1000 curves at 850e6 weren't worth it... so I started some test curves at 260e6 around half an hour ago.
[code]
In [2]: for n in range(5, 75, 5): print("{}>{} simple odds: {} better odds: {}".format(n, n+5, 1/(n/5), 1/(n+1)+1/(n+2)+1/(n+3)+1/(n+4)+1/(n+5)))
   ...:
5>10 simple odds: 1.0 better odds: 0.6456349206349207
10>15 simple odds: 0.5 better odds: 0.38926073926073923
15>20 simple odds: 0.3333333333333333 better odds: 0.27951066391468865
20>25 simple odds: 0.25 better odds: 0.21821852060982494
25>30 simple odds: 0.2 better odds: 0.1790289531668842
30>35 simple odds: 0.16666666666666666 better odds: 0.15179428809647028
35>40 simple odds: 0.14285714285714285 better odds: 0.13176161991951466
40>45 simple odds: 0.125 better odds: 0.11640507661494617
45>50 simple odds: 0.1111111111111111 better odds: 0.10425722277810291
50>55 simple odds: 0.1 better odds: 0.09440687359666272
55>60 simple odds: 0.09090909090909091 better odds: 0.08625820102565003
60>65 simple odds: 0.08333333333333333 better odds: 0.0794051061386466
65>70 simple odds: 0.07692307692307693 better odds: 0.07356123854768738
70>75 simple odds: 0.07142857142857142 better odds: 0.06851887291497556

In [5]: scale=1/69408

In [10]: from math import expm1

In [11]: cdf=lambda x: expm1(x*scale)

In [12]: odds=lambda x: 0.08*cdf(x)

In [18]: for i in range(1000, 70000, 1000): print(i, i*1.77, odds(i), i*1.77/odds(i))
   ....:
1000 1770.0 0.0011443415070364752 1546741.0638488552
2000 3540.0 0.0022723140455138693 1557883.2542926308
3000 5310.0 0.0033841517615590512 1569078.5680231215
4000 7080.0 0.004480085452009743 1580326.9995271962
5000 8850.0 0.0055603426123236556 1591628.5410876153
6000 10620.0 0.006625147483802312 1602983.1827841753
7000 12390.0 0.007674721100139371 1614390.9124951784
8000 14160.0 0.008709281333303119 1625851.7158992288
9000 15930.0 0.009729042938762636 1637365.5764773528
10000 17700.0 0.010734217600067033 1648932.4755154457
11000 19470.0 0.011725013972787031 1660552.3921070423
12000 21240.0 0.012701637727827973 1672225.3031564078
13000 23010.0 0.013664291594123273 1683951.183381956
14000 24780.0 0.014613175400717188 1695730.005319983
15000 26550.0 0.015548486118245598 1707561.7393287257
16000 28320.0 0.016470417899823463 1719446.353592737
17000 30090.0 0.01737916212134738 1731383.8141275805
18000 31860.0 0.018274907421221682 1743374.0847848384
19000 33630.0 0.019157839739516246 1755417.1272574384
20000 35400.0 0.020028142356564204 1767512.9010852913
21000 37170.0 0.020885995931007532 1779661.3636612413
22000 38940.0 0.021731578537298422 1791862.4702373256
23000 40710.0 0.022565065702664228 1804116.1739313449
24000 42480.0 0.02338663044354366 1816422.4257337356
25000 44250.0 0.02419644330150176 1828781.1745147523
26000 46020.0 0.024994672378631195 1841192.3670319472
27000 47790.0 0.025781483372447102 1853655.9479379528
28000 49560.0 0.0265570396102828 1866171.8597885633
29000 51330.0 0.027321502083193547 1878740.0430511087
30000 53100.0 0.028075029479375246 1891360.4361131247
31000 54870.0 0.028817778217105197 1904032.9752913131
32000 56640.0 0.02954990247721161 1916757.594840789
33000 58410.0 0.03027155423507867 1929534.226964617
34000 60180.0 0.030982883292193817 1942362.8018236263
35000 61950.0 0.03168403730724374 1955243.247546509
36000 63720.0 0.03237516182676557 1968175.4902401958
37000 65490.0 0.03305640031535967 1981159.4540005026
38000 67260.0 0.03372789418547015 1994195.0609230553
39000 69030.0 0.034389782826739525 2007282.2311144758
40000 70800.0 0.0350422036349434 2020420.8827038386
41000 72570.0 0.03568529204051125 2033610.9318543866
42000 74340.0 0.036319181536639274 2046852.2927755082
43000 76110.0 0.03694400370700114 2060144.877734966
44000 77880.0 0.03755988825306223 2073488.597071385
45000 79650.0 0.03816696302100332 2086883.35920698
46000 81420.0 0.038765354028259036 2100329.0706605366
47000 83190.0 0.039355185489676765 2113825.6360606286
48000 84960.0 0.039936579843301276 2127372.958159077
49000 86730.0 0.04050965777579068 2140970.9378446406
50000 88500.0 0.04107453824746865 2154619.4741569394
51000 90270.0 0.041631338517018425 2168318.4643006045
52000 92040.0 0.04218017416582352 2182067.803659649
53000 93810.0 0.042721159121960256 2195867.3858120623
54000 95580.0 0.04325440568384712 2209717.1025446155
55000 97350.0 0.04378002454355584 2223616.8438678808
56000 99120.0 0.04429812480978898 2237566.4980314584
57000 100890.0 0.04480881403052891 2251565.9515394038
58000 102660.0 0.04531219821536273 2265615.0891658566
59000 104430.0 0.04580838185748791 2279713.793970867
60000 106200.0 0.04629746795540313 2293861.9473164077
61000 107970.0 0.04677955803428887 2308059.428882574
62000 109740.0 0.04725475216708211 2322306.116683973
63000 111510.0 0.04772314899524966 2336601.8870862788
64000 113280.0 0.048184845749264266 2350946.614822974
65000 115050.0 0.04863993826878782 2365340.173012255
66000 116820.0 0.04908852102256597 2379782.4331741
67000 118590.0 0.04953068712803802 2394273.265247502
68000 120360.0 0.04996652837066635 2408812.537607861
69000 122130.0 0.05039613522298947 2423400.1170845204
[/code]
Edit: At 260e6 I got around 2040 seconds per curve, resulting in the following analysis:
[code]
In [27]: scale=1/42017

In [28]: cdf=lambda x: expm1(x*scale)

In [29]: odds=lambda x: 0.08625820102565003*cdf(x)

In [30]: workpercurve=2040/3600

In [32]: for i in range(1000, 45000, 1000): print(i, i*workpercurve, odds(i), i*workpercurve/odds(i))
   ....:
1000 566.6666666666666 0.0020286985793124416 279325.2149162144
2000 1133.3333333333333 0.004009684386079575 282649.0127921109
3000 1700.0 0.005944079572545787 285998.8631127809
4000 2266.6666666666665 0.007832979899161562 289374.7585014599
5000 2833.3333333333335 0.009677455355289617 292776.6886348545
6000 3400.0 0.01147855076531271 296204.6402473156
7000 3966.6666666666665 0.013237286380486461 299658.5971361975
8000 4533.333333333333 0.014954658456872423 303138.5401684006
9000 5100.0 0.01663163981967883 306644.4472880898
10000 5666.666666666667 0.018269180414328623 310176.2935255852
11000 6233.333333333333 0.01986820784456702 313734.05100741604
12000 6800.0 0.021429627897913313 317317.68896753184
13000 7366.666666666666 0.02295432505875468 320927.17375965934
14000 7933.333333333333 0.024443163009372527 324562.4688707989
15000 8500.0 0.025896985119185326 328223.53493584564
16000 9066.666666666666 0.027316614922484873 331910.32975332916
17000 9633.333333333334 0.028702856584936844 335622.80830225354
18000 10200.0 0.030056495359109654 339360.9227600296
19000 10766.666666666666 0.031378298029289875 343124.62252148247
20000 11333.333333333334 0.032669013345835995 346913.85421892104
21000 11900.0 0.033929372449316694 350728.5617432531
22000 12466.666666666666 0.03516008928467388 354568.6862661304
23000 13033.333333333332 0.03636186100564498 358434.1662631068
24000 13600.0 0.037535368369673756 362324.93753779045
25000 14166.666666666666 0.03868127612353318 366240.93324697355
26000 14733.333333333332 0.0398002333798789 370182.083926719
27000 15300.0 0.04089287398494662 374148.31751938484
28000 15866.666666666666 0.04195981687760157 378139.55940156634
29000 16433.333333333332 0.04300166643994358 382155.7324129342
30000 17000.0 0.044019012839666284 386196.7568859475
31000 17566.666666666668 0.045012432364364405 390262.55067641946
32000 18133.333333333332 0.04598248774797851 394353.02919491375
33000 18700.0 0.04692972848956215 398468.10543894686
34000 19266.666666666668 0.04785469116455192 402607.69002597465
35000 19833.333333333332 0.04875789972871677 406771.691227138
36000 20400.0 0.04963986581495888 410960.01500174275
37000 20966.666666666668 0.0505010890231339 415172.56503245124
38000 21533.333333333332 0.051342057203055125 419409.2427611565
39000 22100.0 0.05216324673084163 423669.94742551807
40000 22666.666666666668 0.052965122778767 427954.5760961292
41000 23233.333333333332 0.05374813957876154 432263.023714293
42000 23800.0 0.05451274067971714 436595.18313037965
43000 24366.666666666664 0.05525935919874078 440950.94514273555
44000 24933.333333333332 0.05598841806649855 445330.1985371246
[/code]
If you ignore the fact that I confused the labels "scale" and "rate", this suggests we should be doing only 8K-16K curves at the t60 level. Of course, this is all assuming that my thread hours and fivemack's GNFS thread hours are comparable. There's obviously a fair bit of wiggle room that depends on precisely how accurate our GNFS work estimate is.

Edit2: Of course, this doesn't include the odds that the prior ECM, presumably only run to one t-level, still left a factor smaller than the considered size. That would change the crossover here a bit, I think. (Unless that's counterbalanced by the odds that the same prior ECM finds a factor larger than it's "meant" to?)

Edit3: These were all run with -maxmem 1500, which should only affect the 850e6 estimate, I think. 