[QUOTE=VBCurtis;472419]The 2.6 for the other lambda makes no sense, since you have 96 for mfbr. 3LP require lambda above 3.0.[/QUOTE]
Parameters:[CODE]n: 8095101662371927421703337019465587498085337648622133688278589711654019359923503887978141510461468343349838217540569173400647791769725685803537804186347867144149599002247585690859122186539724272741806859085719 skew: 771127364.56 Y0: -17068243492239505219994785346910834818341 Y1: 1873940548553722757 c0: 165792391853474935561243616954647727516748946250496 c1: 2160239644350504494844955872920952825447896 c2: -21514458180493538566295548810659238 c3: -5887571126475837688637761 c4: 35919796435243602 c5: 5588280 type: gnfs rlim: 800000000 alim: 800000000 lpbr: 33 lpba: 33 mfbr: 96 mfba: 96 rlambda: 2.6 alambda: 4.6[/CODE] So I did some testing using the same parameters as before, but changing rlambda. Results are below, always sieving a-side: [CODE]2k q-blocks, 16e, 33A, rlambda=4.6---total yield: 3675, q=300002029 (7.76544 sec/rel) 2k q-blocks, 16e, 33A, rlambda=3.6---total yield: 3675, q=300002029 (5.01857 sec/rel) 2k q-blocks, 16e, 33A, rlambda=3.0---total yield: 3485, q=300002029 (3.74347 sec/rel) 2k q-blocks, 16e, 33A, rlambda=2.6---total yield: 3485, q=300002029 (2.75728 sec/rel) 2k q-blocks, 16e, 33A, rlambda=2.2---total yield: 3440, q=300002029 (2.54916 sec/rel)[/CODE] Increasing rlambda increase yield a bit, but the sec/rel goes up far more. More interestingly, I could drop it down to 2.2 and still get good yield and better speed. To check that this wasn't some anomaly at q=300M, I re-ran at q=500M and got: [CODE]500M: (rlambda=2.6) total yield: 2619, q=500002003 (3.04544 sec/rel) 500M: (rlambda=2.2) total yield: 2692, q=500002003 (2.98365 sec/rel)[/CODE] So again, slightly faster and actually a little higher yield. I'm going to run a few more sieving tests to confirm, but it looks like rlambda=2.6 should work fine and may actually work better than increasing it. |
[b]QUEUED[/b] C231_133_73 is ready for SNFS on the 14e siever.
[code]n: 724944184282146882229240663426590018526898008474680939544589033560019135346408745090706239982737192362639422940806860188203492279776297847688236932095959449250288392364539580917225652478824098917284281898899070075175763450990745189
# 133^73+73^133, difficulty: 249.69, anorm: 1.97e+038, rnorm: 9.36e+046
# scaled difficulty: 251.13, suggest sieving rational side
# size = 2.182e-012, alpha = 0.000, combined = 2.131e-013, rroots = 0
type: snfs
size: 249
skew: 1.1052
c6: 73
c0: 133
Y1: -30635127461052805121505361
Y0: 98424433237708439716398638596388483974129
rlim: 134000000
alim: 134000000
lpbr: 31
lpba: 31
mfbr: 62
mfba: 62
rlambda: 2.7
alambda: 2.7
[/code]
Test sieving on the -r side with Q in blocks of 2K:
[code]20M  3469
80M  2269
150M 2061
250M 1748
[/code]
Suggesting a range of 20M-240M for Q with a target # rels = 250M.
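The suggested range and relation target can be sanity-checked by interpolating the per-2K-block yields above and integrating over Q. A rough sketch, under the assumption (mine, for illustration) that yield per q varies linearly between the four sample points:

```python
# Rough estimate of total relations over a Q range from test-sieve samples.
# Each sample is (q, relations per 2000-q block) from the table above.
samples = [(20e6, 3469), (80e6, 2269), (150e6, 2061), (250e6, 1748)]

def yield_per_q(q):
    """Interpolated relations per single q, assuming linear decay between samples."""
    for (q0, y0), (q1, y1) in zip(samples, samples[1:]):
        if q0 <= q <= q1:
            y = y0 + (y1 - y0) * (q - q0) / (q1 - q0)
            return y / 2000.0  # yields were measured per 2000-q block
    raise ValueError("q outside sampled range")

def total_rels(q_lo, q_hi, step=1e6):
    """Trapezoidal integration of yield_per_q over [q_lo, q_hi]."""
    n = int((q_hi - q_lo) / step)
    total = 0.0
    for i in range(n):
        a, b = q_lo + i * step, q_lo + (i + 1) * step
        total += 0.5 * (yield_per_q(a) + yield_per_q(b)) * step
    return total

print(f"{total_rels(20e6, 240e6) / 1e6:.0f}M relations")
```

With these samples it estimates roughly 248M relations for Q in 20M-240M, consistent with the 250M target.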
[QUOTE=wombatman;472583]
Increasing rlambda increases yield a bit, but the sec/rel goes up far more. More interestingly, I could drop it down to 2.2 and still get good yield and better speed. To check that this wasn't some anomaly at q=300M, I re-ran at q=500M and got:
[CODE]500M: (rlambda=2.6) total yield: 2619, q=500002003 (3.04544 sec/rel)
500M: (rlambda=2.2) total yield: 2692, q=500002003 (2.98365 sec/rel)[/CODE]
So again, slightly faster and actually a little higher yield. I'm going to run a few more sieving tests to confirm, but it looks like rlambda=2.6 should work fine and may actually work better than increasing it.[/QUOTE]
What this tells you is that 3 large primes should not be used on the r side. That is, mfbr should be 66 rather than 96. Using lambda below 3 means that you're actually only searching for 2-large-prime relations. If you leave rlambda at 2.6 and test mfbr of 65 and 66, you should see almost exactly the same yield.
[QUOTE=VBCurtis;472620]What this tells you is that 3 large primes should not be used on the r side. That is, mfbr should be 66 rather than 96. Using lambda below 3 means that you're actually only searching for 2-large-prime relations.
If you leave rlambda at 2.6 and test mfbr of 65 and 66, you should see almost exactly the same yield.[/QUOTE] Mk. I'll check that as well.
[QUOTE=wombatman;472624]Mk. I'll check that as well.[/QUOTE]
Set rlambda=2.6 and mfbr=64 and got this: [CODE]total yield: 2002, q=200002007 (3.54419 sec/rel)[/CODE] With rlambda=2.2 and mfbr=96, I get: [CODE]total yield: 2739, q=200002007 (2.45979 sec/rel)[/CODE] All other parameters are the same and all sieving was done on the algebraic side.
I think 64 is too small to pair with 33-bit large primes. That's why I suggested 65 and 66.
[QUOTE=VBCurtis;472648]I think 64 is too small to pair with 33-bit large primes. That's why I suggested 65 and 66.[/QUOTE]
Ok. I've made the change and will test with 66. Thanks for all your advice thus far. :smile:
Tested with mfbr=64-66 and lpbr=32:
[CODE]32A, rlambda=2.6, mfbr=64--total yield: 2002, q=200002007 (3.54419 sec/rel)
32A, rlambda=2.6, mfbr=65--total yield: 2002, q=200002007 (3.24379 sec/rel)
32A, rlambda=2.6, mfbr=66--total yield: 2002, q=200002007 (3.25064 sec/rel)[/CODE]
Lastly, with lpbr=33 (33A), rlambda=2.6, and mfbr=65:
[CODE]total yield: 2737, q=200002007 (2.90922 sec/rel)[/CODE]
So I dunno, best yield by far is with lpba/r set to 33. Any idea why this might be?
[QUOTE=wombatman;472725]
Lastly, with lpbr=33 (33A), rlambda=2.6, and mfbr=65: [CODE]total yield: 2737, q=200002007 (2.90922 sec/rel)[/CODE] So I dunno, best yield by far is with lpba/r set to 33. Any idea why this might be?[/QUOTE]
This! This is the result I was expecting, for yield anyway. When you set mfbr to 96, you get 2739 relations. When you set it to 65, you get 2737 relations. So, whatever factorizations lasieve is trying to do for cofactors between 65 and 96 bits, it found only two relations. However, I don't understand why the sec/rel would be worse for 65 than 96; it's finding 99.9% of the relations while testing fewer cofactors. That *should* result in a faster time.

33-bit large primes are clearly superior to 32 for an input this size. 34-bit is almost certainly superior to 33, but the standard tools don't allow 34LP. Any increase in lpba/r will result in more relations, on any input; however, more relations will be needed to build a matrix (generally, 65-70% more relations are needed for each 1-bit increase in both lpba/r). So, when comparing 32 vs 33, you want yield to be at least 70% greater for 33.

mfbr denotes the cofactor size lasieve tries to split. lpbr denotes the size of the largest prime acceptable in a relation. So, using 64 and 32 means that 64-bit cofactors are split, and any that result in 32+32 bit primes are retained; however, a split that produces 31+33 is rejected because one prime is too large. Using 65 and 32 means you're trying to split some 65-bit cofactors, but you only keep the ones that split as 32-32 or smaller; that's not possible for a 65-bit input, so no extra relations are found. Time is sometimes gained by using mfbr = 2*lpbr - 1, say 33 and 65, because more of the 65-bit splits will have both factors 33 bits or smaller, while lots of 66-bit cofactors will split as 34 and 32 (or 35 and 31...). Hope this helps!
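The acceptance rule described here can be sketched as a toy bit-length check (only bit lengths matter for this rule, so primality of the example factors is incidental; the function name split_accepted is my own, not siever code):

```python
# Toy model of the lpbr/mfbr acceptance rule described above: a cofactor
# of at most mfbr bits is split, and the relation is kept only if every
# resulting factor fits within lpbr bits.
def split_accepted(factors, lpbr, mfbr):
    cofactor = 1
    for p in factors:
        cofactor *= p
    if cofactor.bit_length() > mfbr:
        return False  # cofactor too large: the siever never attempts it
    return all(p.bit_length() <= lpbr for p in factors)

p32 = (1 << 32) - 5  # a 32-bit value (happens to be prime)
p33 = (1 << 33) - 9  # a 33-bit value

print(split_accepted([p32, p32], 32, 64))  # True: 64-bit cofactor, both primes fit
print(split_accepted([p32, p33], 32, 65))  # False: one factor needs 33 bits
print(split_accepted([p32, p33], 33, 65))  # True once lpbr=33
```

Two factors of at most 32 bits multiply to at most 64 bits, so with lpbr=32 raising mfbr from 64 to 65 finds nothing new, matching the identical yields in the 32A tests upthread.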
[QUOTE=VBCurtis;472754]This! This is the result I was expecting, for yield anyway. When you set mfbr to 96, you get 2739 relations. When you set it to 65, you get 2737 relations. So, whatever factorizations lasieve is trying to do for cofactors between 65 and 96 bits, it found only two relations.
However, I don't understand why the sec/rel would be worse for 65 than 96; it's finding 99.9% of the relations while testing fewer cofactors. That *should* result in a faster time. 33-bit large primes are clearly superior to 32 for an input this size. 34-bit is almost certainly superior to 33, but the standard tools don't allow 34LP. Any increase in lpba/r will result in more relations, on any input; however, more relations will be needed to build a matrix (generally, 65-70% more relations are needed for each 1-bit increase in both lpba/r). So, when comparing 32 vs 33, you want yield to be at least 70% greater for 33. mfbr denotes the cofactor size lasieve tries to split. lpbr denotes the size of the largest prime acceptable in a relation. So, using 64 and 32 means that 64-bit cofactors are split, and any that result in 32+32 bit primes are retained; however, a split that produces 31+33 is rejected because one prime is too large. Using 65 and 32 means you're trying to split some 65-bit cofactors, but you only keep the ones that split as 32-32 or smaller; that's not possible for a 65-bit input, so no extra relations are found. Time is sometimes gained by using mfbr = 2*lpbr - 1, say 33 and 65, because more of the 65-bit splits will have both factors 33 bits or smaller, while lots of 66-bit cofactors will split as 34 and 32 (or 35 and 31...). Hope this helps![/QUOTE]
This is very helpful and gives me a better understanding of how the lpba/r and mfbr/a parameters work together. I wouldn't put too much stock into the reported time. The computer the sieving is being done on has other tasks running as well. If I wanted to get more precise timings, I would need to average 3 or so runs. I'll review all the yields I have and see whether I hit that 70% threshold you recommend. Then I should be able to finally submit it to frmky for the 16e queue. Thanks! :smile:
There's no doubt in my mind that you want 33LP over 32; I'm pretty confident 34LP would be faster, and I would test 35 if I were running this factorization myself. LP bounds above 33 require non-standard sievers, either 16f, or the special 16e compilation floating around the forum that has the 33-bit LP bound removed.
16e is limited to 96 for mfbr/a in any case, so 3 large primes are limited to 33/96 on any 16e siever. For the 2LP side, 34/67 and 34/68 would be interesting to test; maybe I'll try that on your composite tonight, as I have some free time.