[QUOTE=VBCurtis;474728]Max, can you explain why CADO "optimized" my 9.71 poly into one with a lower score? Is there still a difference between msieve's murphy score and CADO's?[/QUOTE]
Sure I can. Your polys with c5: 3491640 are seen by CADO as:
[code]
# lognorm: 61.92, alpha: -7.37 (proj: -2.14), E: 54.54, nr: 5
# MurphyE(Bf=10000000,Bg=5000000,area=1.000e+16)=9.50e-15
[/code]
and
[code]
# lognorm: 62.94, alpha: -8.42 (proj: -2.34), E: 54.53, nr: 5
# MurphyE(Bf=10000000,Bg=5000000,area=1.000e+16)=9.70e-15
[/code]
After optimization it becomes:
[code]
# lognorm: 62.95, alpha: -8.48 (proj: -2.34), E: 54.47, nr: 5
# MurphyE(Bf=10000000,Bg=5000000,area=1.000e+16)=9.69e-15
[/code]
During optimization CADO looks only at E = lognorm + alpha. E = 54.47 is the smallest value, so optimization stops there. MurphyE is calculated only once, at the very end. And we all know that the highest MurphyE doesn't guarantee the best poly; it only predicts a pretty good one. Meaning we should test-sieve the last CADO poly just in case.
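To make the distinction concrete, here is a toy Python sketch (not CADO code, just the three quoted polys as data) showing that the optimizer's criterion E = lognorm + alpha and MurphyE can disagree about which poly is "best":

```python
# Toy illustration of CADO-NFS's size-optimization criterion: during
# optimization only E = lognorm + alpha is compared; MurphyE is computed
# once at the very end.  Values below are the three polys quoted above.
polys = {
    "first":     {"lognorm": 61.92, "alpha": -7.37, "murphy_e": 9.50e-15},
    "second":    {"lognorm": 62.94, "alpha": -8.42, "murphy_e": 9.70e-15},
    "optimized": {"lognorm": 62.95, "alpha": -8.48, "murphy_e": 9.69e-15},
}

def cado_e(p):
    """The quantity CADO minimizes during optimization."""
    return p["lognorm"] + p["alpha"]

# Optimization keeps the poly with the smallest E ...
best_by_e = min(polys, key=lambda k: cado_e(polys[k]))
# ... even though a different poly has the higher MurphyE.
best_by_murphy = max(polys, key=lambda k: polys[k]["murphy_e"])
print(best_by_e, best_by_murphy)
```

Which is exactly why the post above recommends test-sieving rather than trusting either number alone.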
[QUOTE=VBCurtis;474704]I don't expect to find anything higher, but I do want to finish the search from 0-9M. Looks like we've got 5 polys to test-sieve before we ask NFS@home for 15e sieving.
I'm using 2.5e26 for stage 2 norm, and still getting more hits per day than I really want to root-opt. I've got stage 1 at 3-3.5e28. My 750ti seems to have failed this week; I'm doing 6-9M on a Quadro 2000. I think that'll be at least another week.[/QUOTE] How is the new card working? Do you have an ETA? I'm still plowing through, though progress is painfully slow: I'm only getting ~2k/hr on my old rig, working past 300K now. But speed is slowly increasing; I should pass 1M by mid-January. So far no more flares.
No new polys to report through 8.8M; running to 10M by the end of the week.
Should I fill in 2-3M also?
[QUOTE=VBCurtis;477196]Should I fill in 2-3M also?[/QUOTE]
Yes please! I'm running towards 1M, ETA next week, but 3M is a long way out. No further good polys found so far.
Another one
[code]
R0: -46886185235604065408950091235920630911
R1: 1130636763778943531
A0: 102882784270194671350530481922498893122622184256
A1: 7687571179628508631737633340729505212360
A2: 60571928669067489747660553874934
A3: -1235743336477827983811701
A4: -645531889287122
A5: 1192464
skew 253176492.23, size 4.341e-019, alpha -8.048, combined = 1.013e-014 rroots = 5
[/code]
Excellent! 4% better score. If it test-sieves 3-5% better than the previous best, I think we're done.
I'm around 2.1M on my last GPU run; I'll let it search until test-sieving shows us which poly is best.
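The "4% better" figure checks out against the scores quoted earlier in the thread; a one-line arithmetic check (scores taken from the posts above):

```python
# Comparing msieve combined (Murphy E) scores: the previous best "9.71"
# poly from earlier in the thread vs. the new find just posted.
old_score = 9.71e-15
new_score = 1.013e-14

improvement = new_score / old_score - 1.0
print(f"{improvement:.1%}")
```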
It was a surprising find. Currently finishing the -np1 -nps run to 2M, should finish tonight. The -npr run will take a few more days, as my current “recipe” returns a LOT of hits and takes longer to sort out than a tighter set of filters would. Lesson for the future.
That said, I do not expect another flare. Suggest proceeding with test sieving after you finish searching; meanwhile I’ll finish the -npr on the tail end of 1.5-2M. If something does pop, it’s easy enough to run a quick test sieve.
Here I thought you'd be test-sieving! :)
If you do, I suggest 15e/33-65 or 33-66, with lims of 400M or maybe 268/400M. If we mess with 3LP or asymmetric lims, I think rlim is the side to alter for GNFS? I suggest you determine which poly is best, and then I'll mess with lims and LP bounds to make best use of the best poly. Sound good?
[QUOTE=VBCurtis;480327]Here I thought you'd be test-sieving! :)
If you do, I suggest 15e/33-65 or 33-66, with lims of 400M or maybe 268/400M. If we mess with 3LP or asymmetric lims, I think rlim is the side to alter for GNFS? I suggest you determine which poly is best, and then I'll mess with lims and LP bounds to make best use of the best poly. Sound good?[/QUOTE] I assumed you would do all the work and I’d cheerlead from the sidelines! ;-) Of course I can determine the best poly; you are better at tweaking the parameters. And yes, rlim is the larger value for asymmetric lims with GNFS. I’ll start test sieving this evening.
The algebraic norms are usually larger for GNFS. So try test sieving 3LP on the algebraic side.
Running my test script to calculate the norms I got:
[code]
~/bin> tester.pl t.poly
->  __________________________________________________________
-> | This is the factMsieve.tester.pl script for GGNFS.       |
-> | This program is copyright 2004, Chris Monico, and subject|
-> | to the terms of the GNU General Public License version 2.|
-> |__________________________________________________________|
This is the tester script, it just checks the poly without sieving any relations
-> Starting Sun Feb 18 18:05:43 2018
-> Working with NAME=t...
-> Selected default factorization parameters for degree 5 gnfs 195 digit level.
-> Selected lattice siever: /home/chris/lasieve4_64/gnfs-lasieve4I16e
-> Using rlim=228600000, alim=228600000, lpbr=31, lpba=31, mfbr=62, mfba=62, rlambda=2.6, alambda=2.6, qintsize 100000
-> Using calculated skew 153879046.448084
aa is 65536, bb is 32768, degree is 5, c5 is 1192464, c0 is 102882784270194671350530481922498893122622184256, Y1 is 1130636763778943531, Y0 is -46886185235604065408950091235920630911
sqr_lim is 15119.523802025, sqrt(skew) is 12404.79933123, aa is now 12291582115700.8, bb is now 39939.1028194593
c0 is 102882784270194671350530481922498893122622184256, adding 1.04552454466648e+70
c1 is 7687571179628508631737633340729505212360, adding 2.40430830141162e+71
c2 is 60571928669067489747660553874934, adding 5.83017902006348e+71
c3 is -1235743336477827983811701, adding -3.660566049276e+72
c4 is -645531889287122, adding -5.88500930048408e+71
c5 is 1192464, adding 3.34567854293277e+71
-> Algebraic norm is 3.08059514743696e+72. Rational norm is 1.8725921729509e+42.
-> Algebraic difficulty is about 72.4886. Rational difficulty is about 42.2724. It's probably OK
[/code]
So the algebraic norm is certainly larger for this .poly.

Chris
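The script's arithmetic can be reproduced in a few lines of Python. This is a reconstruction inferred from the output above, not the actual tester.pl code: it appears to evaluate the homogenized polynomials at a skew-scaled corner of the sieve region, with a ≈ 2^16·sqrt(skew)·sqrt(alim) and b ≈ 2^15·sqrt(alim)/sqrt(skew) (matching the printed "aa is now" / "bb is now" values):

```python
import math

# Rough reconstruction of the norm estimate printed by the tester script
# above (inferred from its output, not the actual factMsieve.tester.pl
# source).  Coefficients and parameters are copied from the run above.
skew = 153879046.448084
alim = 228600000
c = [102882784270194671350530481922498893122622184256,
     7687571179628508631737633340729505212360,
     60571928669067489747660553874934,
     -1235743336477827983811701,
     -645531889287122,
     1192464]
y0 = -46886185235604065408950091235920630911
y1 = 1130636763778943531

# Skew-scaled corner of the sieve region ("aa is now" / "bb is now").
a = 65536 * math.sqrt(skew) * math.sqrt(alim)
b = 32768 * math.sqrt(alim) / math.sqrt(skew)

# Homogenized evaluation: sum c_i * a^i * b^(5-i), then |.|
alg_norm = abs(sum(ci * a**i * b**(5 - i) for i, ci in enumerate(c)))
rat_norm = abs(y1 * a + y0 * b)

print(f"algebraic difficulty ~ {math.log10(alg_norm):.4f}")
print(f"rational  difficulty ~ {math.log10(rat_norm):.4f}")
```

The two log10 values land on the script's "difficulty" numbers (~72.49 and ~42.27), confirming the algebraic side dominates for this poly.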
[QUOTE=VBCurtis;480327]Here I thought you'd be test-sieving! :)
If you do, I suggest 15e/33-65 or 33-66, with lims of 400M or maybe 268/400M. If we mess with 3LP or asymmetric lims, I think rlim is the side to alter for GNFS? I suggest you determine which poly is best, and then I'll mess with lims and LP bounds to make best use of the best poly. Sound good?[/QUOTE] Reached the 2M level with no more flares. In the meantime I have run test sieving on all candidates identified to date, with one clear winner emerging (post 258):
[code]
n: 270190311360611008737597207914785628626394817649241238889983182977892667778914595601256771211961749590172206975699662088722691166582226058852847451219554336976835348606797148840109621091833378801
skew: 253176492.23
c0: 102882784270194671350530481922498893122622184256
c1: 7687571179628508631737633340729505212360
c2: 60571928669067489747660553874934
c3: -1235743336477827983811701
c4: -645531889287122
c5: 1192464
Y0: -46886185235604065408950091235920630911
Y1: 1130636763778943531
# size 4.341e-019, alpha -8.048, combined = 1.013e-014 rroots = 5
rlim: 536000000
alim: 536000000
lpbr: 33
lpba: 33
mfbr: 66
mfba: 66
rlambda: 3.0
alambda: 3.0
[/code]
I used these same parameters for test sieving of all polynomials over 10k Q with Q0=1e8. Yield for the above poly proved highest at 1.22. It was also the fastest, over 5% faster than the next best. There’s certainly some trade space available in the parameters to improve performance. I’ve also PM’d Max hoping he can tweak the above polynomial but have not yet heard back from him. Hope he sees this thread.
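Before committing NFS@home sieve time it's cheap to sanity-check the poly file itself. A standard consistency check for an msieve/GGNFS-style poly (sketched below with the numbers copied from the block above) is that the rational root m = -Y0/Y1 (mod n) is also a root of the algebraic polynomial mod n:

```python
# Consistency check for the msieve/GGNFS poly above: the rational poly
# Y1*x + Y0 and the algebraic poly c5*x^5 + ... + c0 must share a root
# mod n.  Numbers are copied verbatim from the poly file above.
n = 270190311360611008737597207914785628626394817649241238889983182977892667778914595601256771211961749590172206975699662088722691166582226058852847451219554336976835348606797148840109621091833378801
c = [102882784270194671350530481922498893122622184256,
     7687571179628508631737633340729505212360,
     60571928669067489747660553874934,
     -1235743336477827983811701,
     -645531889287122,
     1192464]
y0 = -46886185235604065408950091235920630911
y1 = 1130636763778943531

# Shared root: m = -Y0 * Y1^-1 mod n (3-arg pow gives the modular inverse).
m = (-y0 * pow(y1, -1, n)) % n
f_of_m = sum(ci * pow(m, i, n) for i, ci in enumerate(c)) % n
print("poly is consistent:", f_of_m == 0)
```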