2012-02-07, 00:20   #23
Mini-Geek ("Tim Sorbera", Aug 2006, San Antonio, TX USA)
For the curious, like myself, here is the breakdown of relations per million (where "x: y" means y relations with x <= q < x + 1000000):
Code:
       0: 3136
 1000000: 7
 6000000: 1662129
 7000000: 1649699
 9000000: 3336830
10000000: 1680169
11000000: 1664133
12000000: 1667145
13000000: 1648949
14000000: 1777330
15000000: 1606183
16000000: 1566866
17000000: 1530895
18000000: 1494802
19000000: 1477421
20000000: 1448979
21000000: 1432483
22000000: 1387126
23000000: 1368016
24000000: 1337697
25000000: 1323433
 Unknown: 121731
   total: 31185159
(I think the ones from 0-2M are probably "free relations", and "Unknown" are probably incomplete lines)
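For anyone wanting to reproduce this kind of tally, here is a minimal Python sketch of binning counts into 1M-wide buckets. It assumes a hypothetical input file with one special-q value per line; the actual relation-file format and however the q values were extracted for the table above are not shown in this post, so treat the parsing as illustrative only.
Code:
from collections import Counter

BIN = 1_000_000

counts = Counter()
unknown = 0
with open("q_values.txt") as f:      # hypothetical input: one special-q per line
    for line in f:
        line = line.strip()
        try:
            q = int(line)
        except ValueError:
            unknown += 1             # incomplete or garbled lines go to "Unknown"
            continue
        counts[(q // BIN) * BIN] += 1   # bucket start: x such that x <= q < x + 1M

for lo in sorted(counts):
    print(f"{lo:>9}: {counts[lo]}")
print(f"  Unknown: {unknown}")
print(f"    total: {sum(counts.values()) + unknown}")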
I graphed these results (attached, with 8-9M excluded and the 9-10M count cut in half) to get an idea of the relation yield per q. Aside from the outliers (8-9M having no relations and 9-10M having twice the normal amount; methinks EdH, or possibly I, made a mistake in running jobs or reporting results) and 14-15M being slightly above normal, the yield follows a consistent, nearly linear decline as q increases.
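A plot like the attached one can be produced with a sketch like the following (Python/matplotlib; the per-million counts are copied from the breakdown above, with 8-9M dropped and the 9-10M count halved as described). The exact plotting setup used for the attachment is an assumption, not the original script.
Code:
import matplotlib.pyplot as plt

# Per-million counts from the breakdown above (q-range start in millions -> relations).
# The 8M bucket is excluded (no relations) and the 9M bucket is halved, as in the graph.
counts = {
    6: 1662129, 7: 1649699, 9: 3336830 // 2, 10: 1680169, 11: 1664133,
    12: 1667145, 13: 1648949, 14: 1777330, 15: 1606183, 16: 1566866,
    17: 1530895, 18: 1494802, 19: 1477421, 20: 1448979, 21: 1432483,
    22: 1387126, 23: 1368016, 24: 1337697, 25: 1323433,
}

plt.plot(list(counts.keys()), list(counts.values()), marker="o")
plt.xlabel("q range start (millions)")
plt.ylabel("relations per 1M of q")
plt.title("rels per q")
plt.savefig("rels_per_q.png")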
Could someone remind me why it was recommended that we start at 7M instead of somewhere lower? IIRC (and assuming I didn't have other factors confusing the issue, such as CPU sharing), when I had nearly finished up to 26M I started sieving from 6M to 7M and saw a much better reported rate (roughly 0.12 sec/rel versus 0.2 sec/rel, i.e. about 8.3 rels/sec instead of 5), so it would seem to me that sieving the lower end more would be better.
Attached: rels per q.png (graph of relations per q range)