#23
Account Deleted
"Tim Sorbera"
Aug 2006
San Antonio, TX USA
17·251 Posts
For the curious, like myself, here is the breakdown of relations per million: (where "x: y" means y relations where x <= q < x + 1000000)
Code:
0:        3136
1000000:  7
6000000:  1662129
7000000:  1649699
9000000:  3336830
10000000: 1680169
11000000: 1664133
12000000: 1667145
13000000: 1648949
14000000: 1777330
15000000: 1606183
16000000: 1566866
17000000: 1530895
18000000: 1494802
19000000: 1477421
20000000: 1448979
21000000: 1432483
22000000: 1387126
23000000: 1368016
24000000: 1337697
25000000: 1323433
Unknown:  121731
total:    31185159

I graphed these results (attached, with 8-9 excluded and 9-10 cut in half) to get an idea of the relation yield per q. Besides the outliers of 8-9 having none and 9-10 having twice the normal amount (methinks EdH, or possibly I, made a mistake in running jobs or reporting results), and 14-15 having slightly more than normal, it follows a consistent, nearly-linear pattern of the yield dropping as q increases.

Could someone remind me why it was recommended that we start at 7M instead of somewhere lower? IIRC (and if I didn't have other factors confusing the issue, such as CPU sharing), when I had nearly finished up to 26M, I started sieving from 6M to 7M and saw greatly improved rels/second reported (roughly 0.2 sec/rel down to 0.12 sec/rel), so it would seem to me that sieving the lower end more would be better.
#24
"Ed Hall"
Dec 2009
Adirondack Mtns
110111001100₂ Posts
It is quite possible I introduced anomalies. I had about a dozen cores doing small chunks and ended up with 68 intermediate files. However, wouldn't a range being done twice (or, reported twice) provide a huge ratio of duplicates?
#25
Account Deleted
"Tim Sorbera"
Aug 2006
San Antonio, TX USA
10AB₁₆ Posts
Yes. I don't know much about lattice sieving, but I'd guess that ~1.6M of the ~6.6M relations eliminated as hash collisions in the first filtering were from this range being duplicated (it might help to compare this to a similar test and see if it had ~5.0M hash collisions instead of ~6.6M). FYI: in the 9M-10M range, 1007666 rels were in rels1, 666901 were in rels2, and 1662263 were in rels3 (none were in the remaining files).
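A duplicated range can be spotted directly from the raw files. Here is a minimal sketch, assuming GGNFS/lasieve-style relation files where each line begins with the `a,b` pair before the first colon (the function name is my own):

```python
# Count duplicate relations across sieve output files by keying on the
# "a,b" pair that starts each relation line (GGNFS/lasieve format:
# "a,b:rational primes:algebraic primes").  A range sieved or reported
# twice shows up as a large block of duplicates, as discussed above.
from collections import Counter

def count_duplicates(filenames):
    seen = Counter()
    for name in filenames:
        with open(name) as f:
            for line in f:
                if line.startswith("#") or ":" not in line:
                    continue                 # skip comments / junk lines
                ab = line.split(":", 1)[0]   # the "a,b" key
                seen[ab] += 1
    dups = sum(n - 1 for n in seen.values() if n > 1)
    return dups, len(seen)
```

Running this over rels1, rels2, rels3, etc. would show whether the 9M-10M overcount is matched by a corresponding spike in duplicates.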
Last fiddled with by Mini-Geek on 2012-02-07 at 02:58 |
#26
May 2008
2107₈ Posts
Code:
total yield: 1475, q=7001003 (0.06635 sec/rel)
total yield: 1448, q=9001001 (0.06850 sec/rel)
total yield: 1431, q=11001007 (0.07258 sec/rel)
total yield: 1949, q=13001029 (0.06811 sec/rel)
total yield: 1498, q=15001001 (0.07250 sec/rel)
total yield: 1253, q=17001007 (0.07443 sec/rel)
total yield: 1148, q=19001011 (0.07987 sec/rel)
total yield: 1490, q=21001021 (0.08335 sec/rel)
total yield: 1281, q=23001007 (0.07738 sec/rel)
total yield: 1617, q=25001029 (0.08253 sec/rel)
total yield: 987, q=27001003 (0.08689 sec/rel)

Re: the efficiency of sieving smaller Q... You must also consider that you will encounter a greater rate of duplication overall when you start sieving at smaller Q, and this will reduce the speed gain. I can provide real data to show this, but you can test it for yourself with the data you have now.

Last fiddled with by jrk on 2012-02-07 at 12:19
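The point about duplication can be made concrete: what matters is the cost per *unique* relation, which is the raw sec/rel divided by the surviving fraction after duplicate removal. A minimal sketch, using the 0.12 and 0.20 sec/rel figures mentioned earlier in the thread; the duplicate fractions are hypothetical placeholders, not measured values:

```python
# Effective cost per unique relation: raw sec/rel divided by the
# fraction of relations that survive duplicate removal.
def effective_sec_per_rel(sec_per_rel, dup_fraction):
    return sec_per_rel / (1.0 - dup_fraction)

# Hypothetical example: a small-q range sieves faster but duplicates
# more, so its apparent advantage shrinks once duplicates are removed.
low_q  = effective_sec_per_rel(0.12, 0.35)   # fast, heavy duplication (assumed)
high_q = effective_sec_per_rel(0.20, 0.05)   # slower, few duplicates (assumed)
print(f"low q:  {low_q:.3f} sec/unique rel")
print(f"high q: {high_q:.3f} sec/unique rel")
```

With these placeholder fractions, the raw 0.08 sec/rel gap narrows to roughly 0.026 sec per unique relation; the real fractions would have to come from a duplicate count on the actual data.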
#27
Account Deleted
"Tim Sorbera"
Aug 2006
San Antonio, TX USA
17×251 Posts
#28
"Ed Hall"
Dec 2009
Adirondack Mtns
2²×883 Posts
Thanks for post-processing.
#29
May 2008
3·5·73 Posts
But, if you want to spend some time on it, you may be able to find a better range of specialQ that will produce factors quicker. I just did not spend much time optimizing the parameters or specialQ range apart from checking that the relation yield was likely to be sufficient.
Last fiddled with by jrk on 2012-02-07 at 20:20 |
Thread | Thread Starter | Forum | Replies | Last Post
--- | --- | --- | --- | ---
Team sieve #24: c155 from 4788:2618 | schickel | Aliquot Sequences | 26 | 2011-02-24 23:19
Team sieve #23: c172 from 4788:i2617 | schickel | Aliquot Sequences | 64 | 2011-02-19 02:28
Team sieve #21: c162 from 4788:2602 | jrk | Aliquot Sequences | 31 | 2010-12-30 21:33
Team sieve #20: c170 from 4788:2549 | schickel | Aliquot Sequences | 153 | 2010-11-09 07:39
Team sieve #5: c140 from 4788:2407 | 10metreh | Aliquot Sequences | 77 | 2009-05-27 20:39