[QUOTE=fivemack;418819]I presume he's talking about the stage-2 score (the thing printed at the end of each line in msieve.dat.ms), which is a number of the order 3e+19.
You seem to have confused it with the Murphy scores, which are of the order 3e-12 (so thirty-one orders of magnitude different, which I would have expected to make them hard to confuse). The Murphy scores are roughly a probability that a value of the polynomial is sufficiently smooth; I'm not exactly sure what the stage-2 score measures.[/QUOTE]
Ah. It's been many years since I've manually run a poly search -- :blush: There's still the matter of 24 CPU hours doing better than 30 GPU hours, though.
Best one I got for Alfred's C153
[CODE]R0: -176915602067131140978692131601
R1: 109404085563163
A0: -7302868581807259050515796513087069240
A1: 170933500834141158930455569890766
A2: 10928102934671700769536555
A3: -28459862608574595168
A4: -873208908832
A5: 1155960
skew 3350352.16, size 7.670e-015, alpha -7.874, combined = 3.635e-012 rroots = 5
elapsed time 04:00:06[/CODE]
Murphy score is similar, but since A5 is smaller, the skew is higher. Probably still worth a test sieve.

Edit: As an experiment, I'm going to re-run this without any change in the chosen stage1 and stage2 norms to see how that affects the "best" polynomial. I'll post once it finishes.
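To see why a smaller A5 pushes the skew up, a common back-of-envelope estimate is skew ≈ (|A0|/|A5|)^(1/5), which balances the constant and leading terms of the homogenized polynomial. This is only a rule of thumb, not what msieve's size optimizer actually computes, but applied to the coefficients above it lands within the same order of magnitude as the reported 3350352:

```python
# Rule-of-thumb skew estimate from the coefficients above; this is a
# back-of-envelope heuristic, not msieve's actual skew optimization.
c0 = -7302868581807259050515796513087069240  # A0 from the poly above
c5 = 1155960                                 # A5 from the poly above

# Balance |c0| * y^5 against |c5| * x^5 over a skewed region:
est = (abs(c0) / c5) ** (1 / 5)
print(f"estimated skew ~ {est:.3g}")  # same order as the reported 3350352.16
```

Shrinking A5 (the leading coefficient) grows the ratio |A0|/|A5|, and hence the estimated skew, which matches the observation in the post.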
@ Gimarel
Thank you very much for your polynomial. Using the 14e siever and the parameters
[QUOTE]rlim: 25000000
alim: 25000000
lpbr: 30
lpba: 30
mfbr: 60
mfba: 60
rlambda: 2.6
alambda: 2.6[/QUOTE]
I get
[QUOTE]total yield ~ 37000 (q-range 10k with starting values from 8e6 up to 18e6)
sec/rel ~ 0.0325[/QUOTE]
If my choice of parameters is bad, let me know please.
[QUOTE=fivemack;418819]The Murphy scores are roughly a probability that a value of the polynomial is sufficiently smooth; I'm not exactly sure what the stage-2 score measures.[/QUOTE]
It's the integral of the square of the algebraic polynomial over a rectangular sieve region; lower is better. This is basically the RMS size of one sieve value.
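To make that concrete, here is a crude numerical sketch of the idea: integrate F(x,y)^2, where F is the homogenized algebraic polynomial, over a rectangle stretched by sqrt(skew). This is only an illustration of what the score measures, not msieve's actual stage-2 code (which evaluates the integral analytically); the toy coefficients are made up.

```python
import numpy as np

def size_score(coeffs, skew, half_extent=1e6, n=200):
    """Riemann-sum estimate of the integral of F(x,y)^2 over a skewed
    rectangle; lower is better.  coeffs[i] is the coefficient of
    x^i * y^(d-i).  Illustration only, not msieve's actual scoring."""
    d = len(coeffs) - 1
    # Rectangle stretched by sqrt(skew) in x, squeezed by sqrt(skew) in y,
    # so its area is independent of the skew.
    xs = np.linspace(-half_extent * np.sqrt(skew), half_extent * np.sqrt(skew), n)
    ys = np.linspace(-half_extent / np.sqrt(skew), half_extent / np.sqrt(skew), n)
    X, Y = np.meshgrid(xs, ys)
    F = sum(c * X**i * Y**(d - i) for i, c in enumerate(coeffs))
    dx, dy = xs[1] - xs[0], ys[1] - ys[0]
    return float(np.sum(F**2) * dx * dy)

# Toy quintic with coefficients shrinking by ~100x per degree: the
# rule-of-thumb skew (c0/c5)^(1/5) = 100 balances all six terms.
toy = [1e6, 1e4, 100, 1, 0.01, 1e-4]
print(size_score(toy, 100.0) < size_score(toy, 1.0))  # skew 100 scores far better
```

This also shows why skew matters at all: the same polynomial can have a wildly different RMS sieve value depending on the shape of the rectangle you average over.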
[QUOTE=Alfred;418827]@ Gimarel
Thank you very much for your polynomial. Using the 14e siever and the parameters I get If my choice of parameters is bad, let me know please.[/QUOTE]
(Use [code] tags instead of [quote] tags.)

I let Yafu choose parameters, and it gave these:
[code]n: 200342680339792084251385520743186501215219346823549163194073828377349727332880101003701005574557437974308702796632658776489876756467616443528718745464543
# norm 9.635214e-15 alpha -5.731471 e 3.859e-12 rroots 5
skew: 9910102.29
c0: 32499064344333747693836052278767798335
c1: 3126905685798081746601052320840
c2: -5075345682477287373503043
c3: 90439175992181516
c4: 59077950604
c5: 420
Y0: -862373527665808095417346348354
Y1: 665033345874870149
rlim: 29200000
alim: 29200000
lpbr: 29
lpba: 29
mfbr: 58
mfba: 58
rlambda: 2.6
alambda: 2.6[/code]
Over a 2K range of q right at the factor base bound:
[code]nfs: commencing algebraic side lattice sieving over range: 29200000 - 29202000
gnfs-lasieve4I14e (with asm64): L1_BITS=15, SVN $Revision: 430 $
FBsize 1811797+0 (deg 5), 1811271+0 (deg 1)
total yield: 3809, q=29202001 (0.04913 sec/rel) ETA 0h00m)
105 Special q, 199 reduction iterations
...
nfs: found 3809 relations, need at least 44869960 (filtering ETA: 666h 10m)[/code]
When I tried your parameters for 2K at the factor base bound, I get this:
[code]nfs: commencing algebraic side lattice sieving over range: 25000000 - 25002000
gnfs-lasieve4I14e (with asm64): L1_BITS=15, SVN $Revision: 430 $
FBsize 1566762+0 (deg 5), 1565926+0 (deg 1)
total yield: 8015, q=25002001 (0.02488 sec/rel) ETA 0h00m)
117 Special q, 221 reduction iterations
...
nfs: found 8015 relations, need at least 91912198 (filtering ETA: 679h 6m)[/code]
So it doesn't matter too much, but Yafu's param choice seems slightly better.
@ wombatman
Thank you very much.
[QUOTE]Murphy score is similar, but since A5 is smaller, the skew is higher. Probably still worth a test sieve.[/QUOTE]
A test (with identical parameters and three starting values for q) shows
[QUOTE]total yield ~ 34k, compared with Gimarel's ~ 37k.[/QUOTE]
Yield doesn't matter, as long as it's not terribly small (say, under 2.0 relations per q). Speed matters, as measured by sec/rel.
Yafu is pessimistic about how many relations will be needed: 91M for 30LP is closer to a 31LP-sized requirement than a 29LP one. I'd expect a matrix to build with 80-82M relations with the parameters Alfred chose.

I'm also curious about Gimarel's find using CPU search; my experience was that it's worth using a GPU for any search of 140 digits or higher, and I didn't think it was close. But I don't start from A1 = 1...

wombatman- We should do some experiments with settings for the C198 requested this morning; I'm interested in trying to figure out whether altering stage1norm has merit in general.
Yeah, I think it would be worthwhile. I want to make sure we're not cutting potentially good candidates by making it too small/tight. Stage 2 norm could have an effect as well. I'll run from the lowest A1 with stage 1 norm at msieve's chosen value / 10, and the same for stage 2 norm. Then I can come back and rerun it with, say, stage 1 at default and stage 2 / 10. Is that the sort of thing you had in mind?
|
Conversely, don't the polys that CADO produces typically have substantially higher A1s (and lower skew)?
|
Dubslow-
Both CADO and msieve have A1 as a user-controlled parameter/search range. If you mean that CADO's best poly in a range often has higher A1 than msieve's best, I'm not sure- I've only used CADO for one poly search, and I ran a really wide range (like 0-20M).
[QUOTE=wombatman;418841]Yeah, I think it would be worthwhile. Want to make sure we're not cutting potentially good candidates by making it too small/tight. Stage 2 norm could have an effect as well. I'll run from the lowest A1 with Stage 1 norm at msieve's chosen value / 10. Same for stage 2 norm. Then I can come back and rerun it with, say, Stage 1 at default and Stage 2 / 10. Something like that what you had in mind?[/QUOTE]
Yes, though what counts as "better" is the issue. My measure of value is the greatest number of -nps hits per day with a norm better (sorry, lower) than {pick a number}. Which stage1norm produces the highest rate of quality hits? Note I'm not talking about actual poly E-score; that's a crapshoot only lightly correlated with the stage1/stage2 settings.

Setting stage2norm merely alters which output is kept for root-opt. It's not valuable to run root-opt on thousands of candidates, so some cutoff is useful for brevity; the default stage2norm produces *thousands* of candidates per hour, which isn't helpful. A few days of searching for the C198 should illuminate the worst stage-2 size that produced a possibly-useful poly; we can then set the cutoff for worthy candidates from -nps just above that number.
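To make that hit-rate metric concrete, the counting step might look like the sketch below. It assumes, per fivemack's description earlier in the thread, that the stage-2 score is the last number on each candidate line of msieve.dat.ms; the exact line format varies between msieve versions, so treat the parsing as an assumption. The sample lines and scores are made up.

```python
def count_hits(lines, cutoff):
    """Count stage-1 (-nps) candidates whose stage-2 size score is below
    the cutoff.  Assumes the score is the last field on each line of
    msieve.dat.ms (format may differ across msieve versions); lower is
    better, so hits below the cutoff are the quality candidates."""
    hits = 0
    for line in lines:
        fields = line.split()
        if not fields:
            continue  # skip blank lines
        try:
            score = float(fields[-1])
        except ValueError:
            continue  # skip lines whose last field isn't a number
        if score < cutoff:
            hits += 1
    return hits

# Toy example with made-up scores in the ~3e19 range mentioned upthread;
# real lines would carry the full coefficient data before the score.
sample = [
    "... 2.9e19",
    "... 3.5e19",
    "",
    "... 1.8e19",
]
print(count_hits(sample, 3.0e19))  # counts the 2.9e19 and 1.8e19 lines
```

Running this over a day's worth of -nps output for each stage1norm setting would give exactly the hits-per-day comparison described above.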