I think the anticipated-E in that range is a little high; the C187 we did on the forum four years ago (cofactor of 2^956+1) used an E=2.991e-14 polynomial successfully. Are you doing the polynomial selection on CPU or GPU, and do you happen to have the timings and the raw relation counts for the stage-1 pass on those ranges: which stage1_norm did you use?
[QUOTE=fivemack;393695]I think the anticipated-E in that range is a little high; the C187 we did on the forum four years ago (cofactor of 2^956+1) used an E=2.991e-14 polynomial successfully. Are you doing the polynomial selection on CPU or GPU, and do you happen to have the timings and the raw relation counts for the stage-1 pass on those ranges: which stage1_norm did you use?[/QUOTE]
GPU. I did this a while back and recently retrieved the log file to my laptop. It appears I used the default (?) of 1.2e28 for stage1_norm on the early run, but the later run was changed to 2.0e28.
That seems to me an extremely high stage1_norm value: I wonder whether you're ending up with an unreasonable number of things to filter down at stage two. I'm using stage1_norm=1e27 for my 114!+1 C187 at the moment, which gets a few million hits per range of a million in c5, at about a day of GTX 580 time per range.
OK, that makes sense. I remembered that when using a GPU the stage1_norm should be changed by an order of magnitude, but I couldn't remember which way.
I think the first range took nearly a day (on a GTX 460); the second range was quicker, which is why I doubled the size of the last range, bringing it back to almost a day. When I get a chance I'll run future ranges with 1e27. Thanks for your help.
[QUOTE=RichD;393693]It seems to get worse as the lead coefficient increases. I wonder if it would be better to search below 1M?[/QUOTE]
Finally getting something I can work with. I searched in the 700-800K range and found this one. [CODE]R0: -833190005691277922377915047229543598
R1: 178575398638879069
A0: -3818155834164528069112483611011796776579442825
A1: -46509162420627906168286164471544245931
A2: 259512104283233128379348283014
A3: -20240316266268900809140
A4: 436787942678052
A5: 789480
skew 111747742.36, size 3.373e-18, alpha -8.072, combined = 3.638e-14 rroots = 3[/CODE] Next will be the 800s when I have another free time slot.
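As a sanity check on a block like the one above, the reported rroots value (the number of real roots of the algebraic polynomial) can be reproduced exactly from the A5..A0 coefficients. This is just an illustrative sketch using a Sturm chain in exact rational arithmetic, not anything msieve itself runs:

```python
from fractions import Fraction

def _rem(a, b):
    """Polynomial remainder; coefficient lists are highest-degree first."""
    a = list(a)
    while len(a) >= len(b):
        q = a[0] / b[0]
        for i in range(1, len(b)):
            a[i] -= q * b[i]
        a.pop(0)                      # leading term cancels exactly
    while a and a[0] == 0:
        a.pop(0)
    return a

def _sign_at_inf(p, direction):
    """Sign of p at +infinity (direction=+1) or -infinity (direction=-1)."""
    lead = 1 if p[0] > 0 else -1
    return lead * direction ** (len(p) - 1)

def _variations(signs):
    signs = [s for s in signs if s != 0]
    return sum(1 for u, v in zip(signs, signs[1:]) if u * v < 0)

def real_root_count(coeffs):
    """Count distinct real roots exactly via Sturm's theorem."""
    p0 = [Fraction(c) for c in coeffs]
    p1 = [e * c for e, c in zip(range(len(p0) - 1, 0, -1), p0)]  # derivative
    chain = [p0, p1]
    while len(chain[-1]) > 1:
        r = _rem(chain[-2], chain[-1])
        if not r:
            break
        chain.append([-c for c in r])
    return (_variations([_sign_at_inf(p, -1) for p in chain])
            - _variations([_sign_at_inf(p, +1) for p in chain]))

# A5..A0 of the algebraic polynomial from the block above.
A = [789480,
     436787942678052,
     -20240316266268900809140,
     259512104283233128379348283014,
     -46509162420627906168286164471544245931,
     -3818155834164528069112483611011796776579442825]
print(real_root_count(A))  # msieve printed "rroots = 3" for this poly
```

An odd-degree polynomial always has an odd number of real roots, so for a quintic this prints 1, 3, or 5.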
Nothing better to report. The best in each range are listed.
[CODE]800-900K
skew 106635723.26, size 3.035e-18, alpha -7.789, combined = 3.424e-14 rroots = 3
600-700K
skew 261595249.51, size 2.554e-18, alpha -8.178, combined = 3.038e-14 rroots = 3[/CODE]
Is it worth posting this in the polynomial request thread? It would be nice to get this sieving at some point soon as the M991 job is tailing off.
I'll do some poly searching on it, so of the regulars it's really just wombatman who'd be likely to see it there; still, we may as well post there anyway. I'll start at 3.6 million.
I'll be interested to explore sieve timings for 15e/33-bit vs 16e/32-bit vs 16e/33-bit for this one. I wonder if 15e/34-bit might be usable: how difficult would it be to apply the 16e patches that opened up 34/35-bit sieving to the 15e siever? I suppose that's folly for a shared project, since it doubles the data uploads, but I'm curious.
Nothing better was found.
[CODE]900-1000K
skew 75780745.63, size 2.645e-18, alpha -6.832, combined = 3.103e-14 rroots = 3
500-600K
skew 337815414.30, size 2.791e-18, alpha -8.832, combined = 3.216e-14 rroots = 3
400-500K
skew 126028353.96, size 3.167e-18, alpha -7.133, combined = 3.516e-14 rroots = 5[/CODE]
Rich-
You should not discard that 3.51 poly from the 400-500K range. Score is only accurate as a predictor of sieve speed to within 5-7%, so any poly within 10% of your best score could actually sieve fastest. I'm doing some test-sieving now with the best-scoring poly you've found so far, but you should post (or test) the 3.51 as well.
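To make the 10% rule concrete, here is a quick sketch in plain Python, using the combined (Murphy E) scores posted earlier in this thread, of which ranges would still merit test-sieving:

```python
# Combined (Murphy E) scores posted in this thread, by search range.
scores = {
    "700-800K": 3.638e-14,
    "800-900K": 3.424e-14,
    "600-700K": 3.038e-14,
    "900-1000K": 3.103e-14,
    "500-600K": 3.216e-14,
    "400-500K": 3.516e-14,
}
best = max(scores.values())
# Anything within 10% of the best score is still a test-sieving candidate.
candidates = sorted(r for r, s in scores.items() if s >= 0.9 * best)
print(candidates)  # ['400-500K', '700-800K', '800-900K']
```

So the 3.424 and 3.516 polys both fall inside the 10% band around the 3.638 leader, even though only the leader "won" on score.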
[QUOTE=VBCurtis;396971]I'll do some poly searching on it, so really it's just wombatman of the regulars who'd be likely to see it there; of course, may as well post there anyway. I'll start at 3.6 million.
I'll be interested to explore sieve timings for 15e/33 bit vs 16e/32 bit vs 16e/33 bit for this one. I wonder if 15e/34 bit might be usable- how difficult would it be to apply the 16e patches that opened up 34/35 bit sieving to the 15e siever? I suppose that's folly for a shared project since it doubles the data uploads, but I'm curious.[/QUOTE] Assuming the binaries were compiled from the same source, the 15e/16e distinction doesn't matter for >33-bit sieving; the only modification needed is commenting out the restriction in the source. Another thing I hope to try is some sieving at very low special-q with the "f" variant of the siever, to test the duplication level. Relations can be found very quickly at small q, as long as you can sieve below the factor-base bound.
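For measuring the duplication level, something along these lines could work. It assumes relation lines in the usual "a,b:..." text form, where the a,b prefix uniquely identifies a relation; the sample data below is purely illustrative:

```python
def duplicate_rate(lines):
    """Fraction of relations that are duplicates, keyed on the a,b prefix."""
    seen, dups = set(), 0
    for line in lines:
        if not line or line.startswith('#'):
            continue
        ab = line.split(':', 1)[0]   # "a,b" uniquely identifies a relation
        if ab in seen:
            dups += 1
        else:
            seen.add(ab)
    return dups / (dups + len(seen)) if seen else 0.0

# Toy example: four relations, one of them duplicated.
sample = ["1,5:2,3,b:7,d", "2,7:3:5", "1,5:2,3,b:7,d", "3,11:2:b"]
print(duplicate_rate(sample))  # 0.25
```

Running this over the union of relation files from two overlapping special-q ranges would give a rough duplication figure for the low-q experiment.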