[QUOTE=frmky;186806]I've done 4000 curves at B1=11M on the current C132. I'm moving on to other projects, so all y'all can take over from here.[/QUOTE]
That's enough ECM; does someone want to do it, or shall it be a team sieve?
We are now less than 100 away from [URL="http://factorization.ath.cx/search.php?se=1&aq=314718&action=last20"]314718[/URL] reaching 9000 indices.
[QUOTE=10metreh;186827]That's enough ECM; does someone want to do it, or shall it be a team sieve?[/QUOTE]
Looks like it'll be a team sieve then. I started a poly search with msieve.
[QUOTE=jrk;186963]Looks like it'll be a team sieve then.
I started a poly search with msieve.[/QUOTE]
Here's one:
[code]
n: 128884548745268111272865256286343647297362141639349354777257427180583981433019147873573181228554594271870135015462926183892184639779
# norm 8.620730e-13 alpha -6.151233 e 5.416e-11
skew: 221034.75
c0: -85848422053291577310541182972672
c1: -182784784632530169810380096
c2: 9264140201154197887028
c3: 4616332621091608
c4: -130712859177
c5: 288420
Y0: -13490760014661274202566223
Y1: 452441599659677
rlim: 10000000
alim: 10000000
lpbr: 27
lpba: 27
mfbr: 54
mfba: 54
rlambda: 2.5
alambda: 2.5
[/code]
def-nm-params.txt suggests that a good score is 4.51e-11
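As a quick consistency check (my addition, not part of the thread): in msieve/GGNFS poly files the rational side is Y1·x + Y0, so the algebraic and rational polynomials must share the common root m = -Y0/Y1 mod n. A few lines of Python can verify the file above:

```python
# Check that the algebraic polynomial c5*x^5 + ... + c0 and the rational
# polynomial Y1*x + Y0 share a common root m modulo n, as any valid
# msieve/GGNFS polynomial pair must. Values copied from the poly file above.
n = 128884548745268111272865256286343647297362141639349354777257427180583981433019147873573181228554594271870135015462926183892184639779
c = [-85848422053291577310541182972672,   # c0
     -182784784632530169810380096,        # c1
     9264140201154197887028,              # c2
     4616332621091608,                    # c3
     -130712859177,                       # c4
     288420]                              # c5
Y0, Y1 = -13490760014661274202566223, 452441599659677

m = -Y0 * pow(Y1, -1, n) % n          # common root mod n (Python 3.8+ pow)
f_m = sum(ci * pow(m, i, n) for i, ci in enumerate(c)) % n
print(f_m)  # 0 for a consistent polynomial pair
```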
A bit better:
[code]
n: 128884548745268111272865256286343647297362141639349354777257427180583981433019147873573181228554594271870135015462926183892184639779
# norm 9.962231e-13 alpha -5.919298 e 5.909e-11
skew: 94066.60
c0: -2859035759779303395827765570880
c1: 35371317901777264650071124
c2: 3858856143107184029879
c3: 5211231669097662
c4: -481062970300
c5: 581640
Y0: -11724987796918835499253147
Y1: 502923181106423
rlim: 10000000
alim: 10000000
lpbr: 27
lpba: 27
mfbr: 54
mfba: 54
rlambda: 2.5
alambda: 2.5
[/code]
[QUOTE=jrk;186976]def-nm-params.txt suggests that a good score is 4.51e-11[/QUOTE]
That isn't actually a "good score". It's just a (bad) name for "below this bound, polynomials will not be saved". Is the poly search finished yet?
Another slight improvement:
[code]
n: 128884548745268111272865256286343647297362141639349354777257427180583981433019147873573181228554594271870135015462926183892184639779
# norm 9.990093e-13 alpha -6.530119 e 5.917e-11
skew: 125304.96
c0: -3154210280231927135107173799541
c1: 2852258150863533245214823
c2: 9742539035261862224357
c3: 31979892546453095
c4: -390636665178
c5: 743640
Y0: -11162726022381847086623940
Y1: 646588654726421
rlim: 10000000
alim: 10000000
lpbr: 27
lpba: 27
mfbr: 54
mfba: 54
rlambda: 2.5
alambda: 2.5
[/code]
[quote="10metreh"]Is the poly search finished yet?[/quote]
Yes. I ran msieve for the suggested 19 cpu hours.
I'd suggest using siever 13e and starting at Q=5M up to 15M.
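For splitting a suggested Q range like 5M to 15M among contributors, here's a small dry-run helper. This is entirely my sketch, not from the thread: the poly filename `c132.poly` and the 500k chunk size are placeholders, and it only echoes the commands; the `-a`/`-f`/`-c` flags mirror the usual 13e siever invocation.

```shell
#!/bin/sh
# Hypothetical helper: split special-q range 5M..15M into 500k chunks and
# print one siever command per chunk. Remove "echo" to actually sieve.
# "c132.poly" is a placeholder; -f is the first special-q of the chunk and
# -c is the number of special-q values to sieve.
START=5000000
END=15000000
STEP=500000
for q in $(seq $START $STEP $((END - STEP))); do
    echo "gnfs-lasieve4I13e -a c132.poly -f $q -c $STEP"
done
```

Each chunk then covers [q, q+500000), with the last one starting at 14.5M.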
[QUOTE=jrk;187081]Another slight improvement:
[code]
n: 128884548745268111272865256286343647297362141639349354777257427180583981433019147873573181228554594271870135015462926183892184639779
# norm 9.990093e-13 alpha -6.530119 e 5.917e-11
skew: 125304.96
c0: -3154210280231927135107173799541
c1: 2852258150863533245214823
c2: 9742539035261862224357
c3: 31979892546453095
c4: -390636665178
c5: 743640
Y0: -11162726022381847086623940
Y1: 646588654726421
rlim: 10000000
alim: 10000000
lpbr: 27
lpba: 27
mfbr: 54
mfba: 54
rlambda: 2.5
alambda: 2.5
[/code]
Yes. I ran msieve for the suggested 19 cpu hours. I'd suggest using siever 13e and starting at Q=5M up to 15M.[/QUOTE]
Have you test-sieved for those parameters or do you think it won't matter? (I notice that the very first team sieve from 4788, also a c132, had alim and rlim of 5.4M. def-par.txt suggests 11M or 12M.)
[QUOTE=10metreh;187101]Have you test-sieved for those parameters or do you think it won't matter? (I notice that the very first team sieve from 4788, also a c132, had alim and rlim of 5.4M. def-par.txt suggests 11M or 12M.)[/QUOTE]
The alim & rlim were mostly a guess, but I have just made some tests.

alim & rlim = 10M:
[code]
$ for i in `seq 2000000 2000000 18000000`; do ~/ggnfs/trunk/bin/gnfs-lasieve4I13e -a test.poly -f $i -c 500; done
Warning: lowering FB_bound to 1999999.
total yield: 999, q=2000503 (0.02309 sec/rel)
Warning: lowering FB_bound to 3999999.
total yield: 462, q=4000511 (0.02297 sec/rel)
Warning: lowering FB_bound to 5999999.
total yield: 1231, q=6000503 (0.02399 sec/rel)
Warning: lowering FB_bound to 7999999.
total yield: 995, q=8000507 (0.02666 sec/rel)
total yield: 584, q=10000511 (0.02642 sec/rel)
total yield: 751, q=12000509 (0.02696 sec/rel)
total yield: 879, q=14000507 (0.02965 sec/rel)
total yield: 714, q=16000507 (0.03031 sec/rel)
total yield: 402, q=18000527 (0.03308 sec/rel)
[/code]
5M:
[code]
$ for i in `seq 2000000 2000000 14000000`; do ~/ggnfs/trunk/bin/gnfs-lasieve4I13e -a test.poly -f $i -c 500; done
Warning: lowering FB_bound to 1999999.
total yield: 950, q=2000503 (0.02040 sec/rel)
Warning: lowering FB_bound to 3999999.
total yield: 439, q=4000511 (0.02055 sec/rel)
total yield: 1076, q=6000503 (0.02257 sec/rel)
total yield: 804, q=8000507 (0.02567 sec/rel)
total yield: 449, q=10000511 (0.02523 sec/rel)
total yield: 569, q=12000509 (0.02624 sec/rel)
total yield: 658, q=14000507 (0.02894 sec/rel)
[/code]
6M:
[code]
$ for i in `seq 2000000 2000000 14000000`; do ~/ggnfs/trunk/bin/gnfs-lasieve4I13e -a test.poly -f $i -c 500; done
Warning: lowering FB_bound to 1999999.
total yield: 964, q=2000503 (0.02082 sec/rel)
Warning: lowering FB_bound to 3999999.
total yield: 445, q=4000511 (0.02112 sec/rel)
total yield: 1173, q=6000503 (0.02233 sec/rel)
total yield: 876, q=8000507 (0.02527 sec/rel)
total yield: 490, q=10000511 (0.02496 sec/rel)
total yield: 623, q=12000509 (0.02597 sec/rel)
total yield: 713, q=14000507 (0.02872 sec/rel)
[/code]
8M:
[code]
$ for i in `seq 2000000 2000000 14000000`; do ~/ggnfs/trunk/bin/gnfs-lasieve4I13e -a test.poly -f $i -c 500; done
Warning: lowering FB_bound to 1999999.
total yield: 986, q=2000503 (0.02177 sec/rel)
Warning: lowering FB_bound to 3999999.
total yield: 457, q=4000511 (0.02182 sec/rel)
Warning: lowering FB_bound to 5999999.
total yield: 1211, q=6000503 (0.02306 sec/rel)
total yield: 972, q=8000507 (0.02584 sec/rel)
total yield: 548, q=10000511 (0.02526 sec/rel)
total yield: 708, q=12000509 (0.02564 sec/rel)
total yield: 800, q=14000507 (0.02899 sec/rel)
[/code]
So it looks like the fastest is alim & rlim = 6M, sieving from q=2M to about q=13M to reach a target of 17M relations.

Here's 27-bit vs 28-bit:

27-bit:
[code]
$ ~/ggnfs/trunk/bin/gnfs-lasieve4I13e -a test.poly -f 6000000 -c 2000
total yield: 3406, q=6002033 (0.02153 sec/rel)
[/code]
28-bit:
[code]
$ ~/ggnfs/trunk/bin/gnfs-lasieve4I13e -a test.poly -f 6000000 -c 2000
total yield: 6420, q=6002033 (0.01175 sec/rel)
[/code]
Not much difference, and 27-bit makes smaller files, so 27-bit is better.

So with the changed alim & rlim, here's a new file:
[code]
n: 128884548745268111272865256286343647297362141639349354777257427180583981433019147873573181228554594271870135015462926183892184639779
# norm 9.990093e-13 alpha -6.530119 e 5.917e-11
skew: 125304.96
c0: -3154210280231927135107173799541
c1: 2852258150863533245214823
c2: 9742539035261862224357
c3: 31979892546453095
c4: -390636665178
c5: 743640
Y0: -11162726022381847086623940
Y1: 646588654726421
rlim: 6000000
alim: 6000000
lpbr: 27
lpba: 27
mfbr: 54
mfba: 54
rlambda: 2.5
alambda: 2.5
[/code]
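As a sanity check on that 17M-relation target (my arithmetic, not from the thread): each test sieved 500 special-q values, so yield/500 approximates relations per unit of q, and scaling the average 6M-test yield over q = 2M to 13M gives the expected total.

```python
# Back-of-envelope estimate (my numbers): average the 6M test yields
# (relations per 500 special-q) and scale to the proposed q = 2M..13M range.
yields_6m = [964, 445, 1173, 876, 490, 623, 713]  # 6M tests at q = 2M..14M
rels_per_q = sum(yields_6m) / len(yields_6m) / 500
estimate = rels_per_q * (13_000_000 - 2_000_000)
print(f"{estimate / 1e6:.1f}M relations expected")  # about 16.6M
```

About 16.6M relations, which is in line with the 17M target, though yield varies across the q range so this is only a rough average.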
If a mod would please start a thread for a team sieve, I will go ahead and reserve:
reserving 12M to 13M
reserving line sieving from b=1 to 500
[quote=jrk;187191]If a mod would please start a thread for a team sieve, I will go ahead and reserve:
reserving 12M to 13M
reserving line sieving from b=1 to 500[/quote]
The FTP server is open for business again, this time in directory c132-relations. :smile:

Note: while we should be OK for this team sieve, sometime within the next few weeks we'll be starting some major restructuring of the NPLB server setup. The FTP server may be inaccessible during much of that time. I'll let you guys know when everything is all finished being set up and the server is back for good, but meanwhile I'd suggest that for the next team sieve after this, we stay off the FTP server entirely. If someone else is interested in running a temporary FTP server in the meantime, that would be great, or we could use file-sharing websites like Rapidshare.

(Actually, over at the Twin Prime Search project they've been using [URL="http://www.sendspace.com"]SendSpace[/URL] for transferring large files, and I've found that it's much faster than Rapidshare, and doesn't have all the restrictions on free users that Rapidshare does. Plus, it allows uploading of files up to 300MB, as opposed to Rapidshare's 100MB. If you guys end up needing to use file-sharing web sites, I'd recommend SendSpace over Rapidshare.)