[QUOTE=jyb;499281]However, my test sieving suggests that these jobs should complete quickly and without any problems. Is that a fair statement?
And just for my edification, if the limits had been set to more common (i.e. larger) values for 30-bit jobs, would the following statements be correct?
1) For a given range of spec-Q, sieving would give a higher yield.
2) For a given range of spec-Q, sieving would take a little bit more time.
3) Post-processing would take longer.
I assume that the gains from #1 would outweigh the losses from #2, so overall sieving would be faster. OTOH, with a small job like this perhaps it's worth slightly longer sieving if it means shorter post-processing. Thoughts?[/QUOTE]
The jobs should indeed complete without any problems. My experience with your statements:

1) Yep. Higher lims = higher yield.

2) Likely not; I choose lims to minimize total sieving time. If we assume the "usual" lim choices are fastest, then running a job at half those usual lims is likely a fair bit slower (say, 5-15% of total job time). However, I'm not convinced the "usual" lim choices are fastest, so your choices might not be slower. EDIT: you said "for a given spQ range." Even if two sets of lims have the same sec/rel time, trivially the set that produces more relations per Q will take more time over the same spQ range. However, we wouldn't need to sieve the same range....

3) I thought it was true that for a given job, higher lims -> larger matrix, but it seems that effect is quite small compared to the amount of oversieving. Jobs vary so much in the fraction of raw relations that are unique that matrices also vary quite a lot, even for two jobs of the same difficulty and the same number of raw relations gathered, obfuscating the effect of lim choice.

I've advocated for larger LP than the old standards, and your test sieve showed high yield throughout the necessary sieve range, so the parameter choices are fine. That is, there is no harm in leaving everything at the 29LP standards yet using 30LP.

I think the main danger with very small lims is that yield tends to fall off sharply when sieving Q higher than 2 * lim. Test-sieving will show how bad that effect is, but the idea is that sometimes things happen (such as bad data or a cheater fouling a range of spQ) that force us to sieve 20-25% higher Q than expected; using looser lims means yield will usually stay useful up there.
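To make the point in (2) concrete, here is a toy model in Python. All yields, timings, and targets below are made-up illustrative numbers, not measurements from any job: assuming equal sec/rel for both parameter sets, the higher-yield lim set takes longer over the *same* spQ range, but needs a *shorter* range to hit a fixed relation target.

```python
# Toy model of the spQ-range trade-off. All numbers are hypothetical
# illustrations, not measurements from any real job.

def total_relations(yield_per_q, q_range):
    """Relations gathered over a special-Q range at a flat yield."""
    return yield_per_q * q_range

def sieve_time(yield_per_q, q_range, sec_per_rel):
    """Wall time, assuming both parameter sets cost the same sec/rel."""
    return total_relations(yield_per_q, q_range) * sec_per_rel

Q_RANGE = 30_000_000      # same spQ range for both lim choices
SEC_PER_REL = 0.5         # assumed identical sec/rel for both sets

small_lims = sieve_time(yield_per_q=1.2, q_range=Q_RANGE, sec_per_rel=SEC_PER_REL)
large_lims = sieve_time(yield_per_q=1.6, q_range=Q_RANGE, sec_per_rel=SEC_PER_REL)

# Over the SAME spQ range, the higher-yield set takes more time...
assert large_lims > small_lims
# ...but it needs a SHORTER range to reach the same relation target:
target = 40_000_000
range_small = target / 1.2
range_large = target / 1.6
assert range_large < range_small
```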
[QUOTE=VBCurtis;499285]The jobs should indeed complete without any problems.

[...]

EDIT: you said "for a given spQ range." Even if two sets of lims have the same sec/rel time, trivially the set that produces more relations per Q will take more time over the same spQ range. However, we wouldn't need to sieve the same range....[/QUOTE]
Thank you for the comprehensive response. With regard to #2, as you said in your edit, I was talking about the time for a given range of special-Q.

And quite apart from the increased number of relations, my understanding of sieving (which could be wrong) is that a larger factor base means more primes/prime powers that have to be processed, so it will be slower than a smaller FB. As I said, I would expect that effect to be more than offset by the gains made by having a higher yield.
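For a rough sense of scale on the factor-base-size point, the prime number theorem (pi(x) ~ x/ln x) estimates how many factor-base primes each lim choice implies. A small sketch using the two rlim/alim values discussed in this thread (an approximation, not an exact prime count):

```python
import math

def prime_count_estimate(x):
    """Prime-counting approximation pi(x) ~ x / ln(x)."""
    return x / math.log(x)

# The two rlim/alim choices discussed above:
fb_small = prime_count_estimate(33_500_000)
fb_large = prime_count_estimate(67_000_000)

print(f"~{fb_small:,.0f} primes below 33.5M")   # roughly 1.9 million
print(f"~{fb_large:,.0f} primes below 67M")     # roughly 3.7 million
print(f"ratio: {fb_large / fb_small:.2f}")      # a bit under 2x
```

So doubling the lims slightly less than doubles the number of factor-base primes the siever must handle per sieve region.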
[B]QUEUED AS C157_159978_10275[/B]
C157 from alq sequence 159978:10275 [CODE]n: 8081285760208310204291706443628266591168613003567359363504960559087346241954627417585003114058953502037632067209601867091762546813378282164264065807923255339
skew: 3070217.79
c0: 175947466117145166193092360466457185125
c1: -508125547799747739159066433748748
c2: 230984168919653186908140577
c3: 202428535974405957022
c4: -3060871737188
c5: 762960
Y0: -1603230175384897239642052052968
Y1: 641890352689791023
rlim: 33500000
alim: 33500000
lpbr: 29
lpba: 29
mfbr: 58
mfba: 58
rlambda: 2.6
alambda: 2.6
type: gnfs[/CODE] Suggested sieving range 10M-45M.
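A quick sanity check anyone can run on a job file of this form (my own sketch, not an NFS@Home tool): the algebraic polynomial and the rational polynomial Y1*x + Y0 must share a root m = -Y0/Y1 mod n. Using the C157 poly above (Python 3.8+ for the modular-inverse form of pow):

```python
# Sanity-check an NFS job file: the algebraic polynomial f and the
# rational polynomial Y1*x + Y0 must share a common root mod n.
n = 8081285760208310204291706443628266591168613003567359363504960559087346241954627417585003114058953502037632067209601867091762546813378282164264065807923255339
c = {0: 175947466117145166193092360466457185125,
     1: -508125547799747739159066433748748,
     2: 230984168919653186908140577,
     3: 202428535974405957022,
     4: -3060871737188,
     5: 762960}
Y0 = -1603230175384897239642052052968
Y1 = 641890352689791023

# Root of the rational side: m = -Y0 / Y1 (mod n).
m = (-Y0 * pow(Y1, -1, n)) % n
f_of_m = sum(ci * pow(m, i, n) for i, ci in c.items()) % n
assert f_of_m == 0, "polynomial pair does not match n"
print("poly file checks out")
```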
So after all that talk about the low factor base limits for 7_355m2_355, and how they would be okay, something definitely seems to be wrong with its sieving. The yield appears to be way less than the test sieving suggested it should be. But I don't think it has anything to do with those limits. The server is showing 0% of the work units have been pushed, which makes no sense to me. What exactly is going on with this number on the server?
Is this something that can be looked at by any of the queue managers, or does Greg have to kick something on the server?
[QUOTE=jyb;499417]So after all that talk about the low factor base limits for 7_355m2_355, and how they would be okay, something definitely seems to be wrong with its sieving. The yield appears to be way less than the test sieving suggested it should be. But I don't think it has anything to do with those limits. The server is showing 0% of the work units have been pushed, which makes no sense to me. What exactly is going on with this number on the server?
Is this something that can be looked at by any of the queue managers, or does Greg have to kick something on the server?[/QUOTE] I think it's a Greg issue. I noticed the 0% pushed issue right away, but it seems to have generated relations. I don't see anything wrong with the poly, and I did remember to add the line "lss: 0" for sieving on the -a side. The siever is now calling for max Q = 126M, but that may be erroneous. I suggest we wait for the job to finish and have someone try to post-process it.
[B]QUEUED AS 6_323plus5_323[/B]
SNFS C189 HCN (6+5,323), ECM to t55. For 14e. [code]n: 334841807936993419815166147191106280656257579861470657177324718740176325691199596325696738131059961418662863319905098185654890948048260438026322102300540217401434384222408072466971654420059
# 6^323+5^323, difficulty: 252.82, skewness: 1.03, alpha: 0.00
skew: 1.031
c6: 5
c0: 6
Y1: -55511151231257827021181583404541015625
Y0: 1047532535594334222593508922191671036215296
rlim: 134000000
alim: 134000000
lpbr: 31
lpba: 31
mfbr: 62
mfba: 62
rlambda: 2.6
alambda: 2.6[/code] Trial sieving 5K blocks: [code]  Q    Yield
 ---    -----
 20M     7931
 60M     6621
100M     6142
140M     5271
180M     4692
220M     4519
260M     4629[/code] Recommend sieving special Q on rational side, 20M-250M.
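As a back-of-envelope cross-check of the suggested range (my own trapezoid-rule sketch, not anything the siever or server computes), one can integrate the trial-sieve yields over 20M-250M to estimate the raw-relation haul:

```python
# Estimate total raw relations over the suggested 20M-250M range by
# trapezoidal integration of the trial-sieve yields (rels per 5K block).
trial = [(20e6, 7931), (60e6, 6621), (100e6, 6142), (140e6, 5271),
         (180e6, 4692), (220e6, 4519), (260e6, 4629)]

def estimate_relations(points, q_lo, q_hi, block=5000):
    """Trapezoidal integration of per-Q yield between q_lo and q_hi."""
    total = 0.0
    for (q0, y0), (q1, y1) in zip(points, points[1:]):
        lo, hi = max(q0, q_lo), min(q1, q_hi)
        if lo >= hi:
            continue
        # Linearly interpolate the yield at the clipped endpoints.
        y_lo = y0 + (y1 - y0) * (lo - q0) / (q1 - q0)
        y_hi = y0 + (y1 - y0) * (hi - q0) / (q1 - q0)
        total += (y_lo + y_hi) / 2 / block * (hi - lo)
    return total

est = estimate_relations(trial, 20e6, 250e6)
print(f"~{est / 1e6:.0f}M raw relations expected")  # roughly 260M
```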
[B]QUEUED AS C162_11040_10096[/B]
C162 from alq sequence 11040:10096 is ready for nfs on 14e. [CODE]n: 840262227876293579886574557266492685831186567489001894166659103037154601214059790428540602347015436123839432261708377101647050408251354435017240871972779082675319
skew: 2872635.03
c0: 54825813603411271742102226828295462179
c1: -84373934389489842750879873636257
c2: -14627911453786129965178136
c3: 37526809733805697447
c4: 3912899003247
c5: 287820
Y0: -19636259397913087451700545099780
Y1: 231426362666241001
rlim: 67000000
alim: 67000000
lpbr: 30
lpba: 30
mfbr: 60
mfba: 60
rlambda: 2.6
alambda: 2.6
type: gnfs[/CODE] Suggesting sieve range 20M-90M. I'll take LA.
[B]QUEUED AS 3109_61m1[/B]
C207 from the MWRB file with OPN weight 2578. [CODE]n: 197103660669395653858912094640744101201235700706589229524929736924897933443162855407929724202977179654206774116494201399945859219091696296204383126635688421371232492950162836173550272589626776986795773240771
# 3109^61-1, difficulty: 213.05, skewness: 0.20, alpha: 0.00
# cost: 4.3141e+17, est. time: 205.43 GHz days (not accurate yet!)
skew: 0.200
c5: 3109
c0: -1
Y1: -1
Y0: 815546380333893232173936367759057305251281
type: snfs
rlim: 16500000
alim: 33500000
lpbr: 29
lpba: 29
mfbr: 58
mfba: 58
rlambda: 2.5
alambda: 2.5[/CODE] Trial sieving 5K blocks. [CODE]  Q   Yield
12M    9539
20M    9212
40M    7536
50M    6717[/CODE]
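This poly can be verified mechanically: with Y1 = -1, the algebraic side encodes 3109^61 - 1 = 3109*(3109^12)^5 - 1, so Y0 should be exactly 3109^12 and n should divide c5*Y0^5 + c0. A quick check:

```python
# Verify the SNFS poly above against its stated number 3109^61 - 1.
n = 197103660669395653858912094640744101201235700706589229524929736924897933443162855407929724202977179654206774116494201399945859219091696296204383126635688421371232492950162836173550272589626776986795773240771
Y0 = 815546380333893232173936367759057305251281

assert Y0 == 3109**12                 # algebraic-side root is 3109^12
assert (3109 * Y0**5 - 1) % n == 0    # n divides 3109^61 - 1
print("SNFS poly consistent with 3109^61 - 1")
```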
[B]QUEUED AS 12m7_777L[/B]
SNFS-242.1 C210 HCN (12-7,777L), ECM to t55. For 14e. [code]n: 116732847208888762646963427290110267287854941721333595770790829133614026002568170638886140213765182020681359615445968146346601070424615914653950041009308053488824075764211539201865011370223962513279217531457507
skew: 1.3093
c6: 343
c5: -2058
c4: 2352
c3: 7056
c2: -18144
c1: 12096
c0: -1728
Y1: 303476585554067369725285564806070272
Y0: -8505622518383218065593705986567993605799
rlim: 134000000
alim: 134000000
lpbr: 31
lpba: 31
mfbr: 62
mfba: 62
rlambda: 2.6
alambda: 2.6[/code] Trial sieving 5K blocks: [code]  Q    Yield
 ---    -----
 20M    10613
 60M     8269
100M     7741
140M     6315
180M     5701
220M     5600
260M     5444[/code] Recommend sieving special Q on rational side, 20M-180M.

It's probably worth normalizing the naming of HCNs in the queue, since it will get a little more complicated with Aurifeuillian factors like this one. If you're trying to avoid characters that don't go well in filenames, it could be something like these examples:
7m2_355
6p5_329
12m7_777L
12m7_777M
[B]QUEUED AS 12m7_777M[/B]
SNFS-242.1 C216 HCN (12-7,777M), ECM to t55. For 14e. [code]n: 187356213175485563572087491162748918302681858467960231508450671807836347082558073395119158974499741707074447491578279478766847980388353841950999327822563484845429930836444946763187092370205109735543634093382332759171
skew: 1.3093
c6: 343
c5: 2058
c4: 2352
c3: -7056
c2: -18144
c1: -12096
c0: -1728
Y1: 303476585554067369725285564806070272
Y0: -8505622518383218065593705986567993605799
rlim: 134000000
alim: 134000000
lpbr: 31
lpba: 31
mfbr: 62
mfba: 62
rlambda: 2.6
alambda: 2.6[/code] Trial sieving 5K blocks: [code]  Q    Yield
 ---    -----
 20M    10742
 60M     8293
100M     7699
140M     6396
180M     5720
220M     5524
260M     5382[/code] Recommend sieving special Q on rational side, 20M-180M.
[B]QUEUED AS 462426329856656881_13m1[/B]
C211 from the MWRB file with OPN weight 2485. [CODE]n: 7354739551459221255300095574985904918730650549192434057436557077055622486341541854304749204844237794843430598329362031754639333388124614048687377547005946482339053511535406218115999410941986198485937869504498081
# 462426329856656881^13-1, difficulty: 211.98, skewness: 1.00, alpha: 3.10
# cost: 3.94278e+17, est. time: 187.75 GHz days (not accurate yet!)
skew: 1.000
c6: 1
c5: 1
c4: -5
c3: -4
c2: 6
c1: 3
c0: -1
Y1: -462426329856656881
Y0: 213838110544697635120700709764648162
type: snfs
rlim: 16500000
alim: 33500000
lpbr: 29
lpba: 29
mfbr: 58
mfba: 58
rlambda: 2.5
alambda: 2.5[/CODE] Trial sieving 5K blocks. Note: Request starting Q0 @ 12M. [CODE]  Q   Yield
12M   14166
20M   12472
40M    9124[/CODE]
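This sextic is the degree-12 cyclotomic Phi_13(x) folded with y = x + 1/x, which is why Y1 = -p and Y0 = p^2 + 1: the rational side vanishes at y = p + 1/p. A quick consistency check (Python 3.8+ for the modular inverse):

```python
# Verify the p^13-1 SNFS poly above: Y0 = p^2 + 1, Y1 = -p, and the
# sextic must vanish at the shared root y = p + 1/p (mod n).
p = 462426329856656881
n = 7354739551459221255300095574985904918730650549192434057436557077055622486341541854304749204844237794843430598329362031754639333388124614048687377547005946482339053511535406218115999410941986198485937869504498081
Y0 = 213838110544697635120700709764648162
coeffs = [-1, 3, 6, -4, -5, 1, 1]   # c0..c6 from the job file

assert Y0 == p * p + 1               # rational side is -p*y + (p^2 + 1)

m = (Y0 * pow(p, -1, n)) % n         # -Y0/Y1 mod n, since Y1 = -p
f_of_m = sum(c * pow(m, i, n) for i, c in enumerate(coeffs)) % n
assert f_of_m == 0, "poly pair does not match n"
print("SNFS poly consistent with p^13 - 1")
```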