You are making one logical mistake: you forget about the lower ranges and the growth of GIMPS.
Anyone completing an assignment in a range below 50M needs a new assignment from the 60-69M range, so you need to look at the whole project, not only the 60-69M range. New members also get assignments without clearing any old ones. The same goes for people switching from DC and other work to LL. There were 2127 LL assignments cleared between 30M and 100M; only 1214 were in the 60M-70M range. This is still lacking growth and switched assignments but it hints about a decent lead again with 2359 factors LLTF to 74 in the same period, about a 10% lead. /Göran
[QUOTE=Axelsson;362079]This is still lacking growth and switched assignments but it hints about a decent lead again with 2359 factors LLTF to 74 in the same period, about a 10% lead.[/QUOTE]
Göran has it (mostly) correct. I prefer to look at the monthly deltas, rather than the weekly ones, because some of our participants submit their results infrequently. So, for the last month (which, as most know, was rather unusual because of Jerry's "blow-out")... 9,816 LL candidates were either LLed once, or factored. 10,067 LL candidates were TFed to at least 74 "bits". Before the "incident" we were TFing to 74 or above at about a 10% faster rate than LL completion. It's too early to tell what we're at now, but it's probably back to at least 8%. Lastly, keep in mind that LLing gets more "expensive" the higher the candidate, while TFing gets "cheaper".
As much as I like to disagree with Davieddy whenever possible, I feel like at the minimum I should pose this question:
Would it be beneficial to (at least temporarily) take the 64M-70M exponents only to 73, and re-purpose half the power to taking 50M exponents to 71 or 72? It seems like instead of funneling all of our energy to push the wave forward, perhaps we should ensure that the factor-filled land behind us has been thoroughly scoured of low factors... Eventually, we could get rid of "DCTF" if we do everything in one fell swoop from now on, but maybe that would require stopping the wave to catch our breath? Edit: Oh dear, he'll be the first one to see this post. What hath I wrought? |
I don't know that doing DCTF and TF all in one go is exactly reasonable. That would require us to eventually reach the point where TF in 65M would be done up to 75, which is just not reasonable. The point Davieddy tries to make is that even 74 isn't reasonable.
To be honest I've never given any thought to DCTF because we're so far ahead of that wave. It DOES need to get done eventually, but because LL and TF are so neck and neck, we can't really afford to move anything over. The work I can contribute to the TF front is needed in a week, but the work I can contribute to DCTF is only needed in maybe six months or a year or whatever (I truly do not know).

On the other hand, my feeling is that there is a serious lack of participation in the DC department. I switched half of my CPU to DC a while back, and at something like 15 DC for 500 GHz-Days and 50 LL for 6500 GHz-Days, my DC and LL ranks were the same. For every 3 LL that get done, a single DC gets done, apparently. [I went and checked mersenne.info and in the last month, there were 5,300 new "One LL" exponents and 3,000 new "Two LL" exponents, so 1 in 3 is pretty close (3 in 8.3 is more exact)]

If Primenet[SUP]R[/SUP] Makes Sense[SUP]TM[/SUP] were to emphasize DCs a bit more (say 15% of WMS assignments are DC) to try to bring them up to speed, we could re-purpose some of our TF to DCTF (say 10%) while increasing the lead on the TF front (the remaining 5%). I think re-balancing the LL/DC emphasis is a better option than changing the factoring limits.
[QUOTE=c10ck3r;362135]Would it be beneficial to (at least temporarily) take the 64M-70M exponents only to 73, and re-purpose half the power to taking 50M exponents to 71 or 72?[/QUOTE]
IMO (and many others), no. Because it takes longer to LL the higher the candidate, it Makes (more) Sense [SUP](TM)[/SUP] to eliminate the higher candidates by TFing first. And because we're only slightly ahead of the LLing "wave", we should focus on what we're currently doing, rather than "dropping back down" into the 50's. It is only because we have a slight lead that we're continuing to bring in 62M's TFed to "only" 73 to bring up to 74. Edit: Please don't forget [URL="https://www.gpu72.com/reports/current_level/"]this report[/URL]. No "50's" are at less than 72. |
[QUOTE=chalsall;362139]IMO (and many others), no.[/QUOTE]
This doesn't pass a simple common sense test. Pose the question this way: Would GIMPS be better off if your GPU took two 58M exponents from 72 to 73 or one 64M exponent from 73 to 74? 1) The 2 58M TFs take only a little more time than the 1 64M TF. 2) The 2 58M TFs have a little more than twice the chance of finding a factor. I claim the extra iterations and higher FFT length of a saved 64M LL test does not compensate for the nearly double chance of saving one 58M LL test. |
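George's comparison can be sanity-checked with a few lines of arithmetic. This is a sketch using the standard rough model, not anything from the project's code: trial-factoring one bit level costs roughly 2^bits / exponent, and the chance of a factor lying in that level is roughly 1/bits.

```python
# Rough model (an assumption, not project code): trial-factoring exponent p
# from 2^(b-1) to 2^b costs ~2^b / p, and finds a factor with chance ~1/b.

def tf_cost(p, b):
    """Relative cost of taking exponent p from bit level b-1 to b."""
    return 2.0 ** b / p

def factor_chance(b):
    """Rough chance of a factor between 2^(b-1) and 2^b."""
    return 1.0 / b

# Two 58M exponents from 72 to 73 vs one 64M exponent from 73 to 74:
time_ratio = 2 * tf_cost(58e6, 73) / tf_cost(64e6, 74)
chance_ratio = 2 * factor_chance(73) / factor_chance(74)

print(round(time_ratio, 3))    # ~1.103: "only a little more time"
print(round(chance_ratio, 3))  # ~2.027: "a little more than twice the chance"
```

The time ratio collapses to 64/58 (the 2^73 and 2^74 cancel against the doubling), which is the same ~10% figure LaurV arrives at a few posts later.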
[QUOTE=Prime95;362143]I claim the extra iterations and higher FFT length of a saved 64M LL test does not compensate for the nearly double chance of saving one 58M LL test.[/QUOTE]
What are your orders, Admiral? I'm serious. As you know, I'm more than able to bring in lower candidates to take to 73. Happy to facilitate if you think this is advisable. |
George has a very good point. Whatever your threshold is for taking everything to 74 (at the moment it seems that anything over 60M is taken to 74), the threshold for 73 should be ~14M behind. The way things are currently, there is a jump of two bits at 60M.
BTW, how many exponents from 50-60M are made available every day anyway? I'd vote for capturing those and taking them to 73 assuming we have sufficient TF cycles available. And if we don't I would rather we throw back 60-64Ms at 73.
[QUOTE=garo;362150]I'd vote for capturing those and taking them to 73 assuming we have sufficient TF cycles available. And if we don't I would rather we throw back 60-64Ms at 73.[/QUOTE]
[In the voice of Data]I copy you commander.[/Data] I don't actually think that makes the most sense, but let's see how that works out; we might actually have the margin to pull it off (but only just).... |
[QUOTE=Prime95;362143]This doesn't pass a simple common sense test. Pose the question this way: Would GIMPS be better off if your GPU took two 58M exponents from 72 to 73 or one 64M exponent from 73 to 74?
1) The 2 58M TFs take only a little more time than the 1 64M TF. 2) The 2 58M TFs have a little more than twice the chance of finding a factor.[/QUOTE] Ah. By 50M I thought Clocker was referring to DC territory, but the upper range of 50M is actually still LL. I'm working on two of those as we speak... |
1 Attachment(s)
I have a point for George, and one against him. I mean, not really; in fact I have a point for doing TF to 73 only, but a point against doing TF to higher bits in the 50M range.

The point "for him" would go like "the TF to 74 is not really productive". This bit level is at the limit: you get about the same "clearance rate" doing TF to 74 in the current LL range as doing LL**. This I can say for sure, after TFing from 71/72 to 74 over the last week (10 days), with about 900GHzDays/day. I only found 9 factors, and Chris and Kracker both found 10 factors each (edit: both together, not each), doing P-1 at about the same rate I could manage if I used the cards to do P-1. A little more, in fact, but also two times more factors. And most of my factors are from below 74 bits; in fact I think there are only 2 or 3 which are "pure" 74-bit factors. I could clear 3 exponents by LL/DC in this time (6 tests on 2 cards take ~12 days in fact, with "normal", factory overclock, without tricks). So, depending on your luck, you can do a little bit better doing TF to 74, or a little bit worse.

The best way for the current range (and this I have been advocating for a long time, and I can prove it by calculus and by empirical data from my work and from Chris' tables) is to [U][B]TF to 73[/B][/U], then [B][U]do P-1, lots of P-1[/U][/B]. Here I share the sentiment of RDS, who also recommended P-1 over TF (but he is irrationally doing this at every bit level :P). Practically, the fastest way one can eliminate exponents now, at the current LL front, and since cudaPM1 is available, is to do P-1.

The point "against" stands on the fact that George's calculus does not take into consideration the fact that the DC range has had a lot of P-1 done already, so the chance of finding factors is much lower there.
Assuming you spend x time to do one 64M to 74 bits, and you have y chance to find a factor, then: to TF one 58M to 73, you will need to pass through half the quantity of factor candidates, but you crawl through them with a 2*58M step instead of a 2*64M step. Therefore to TF two of them, you will need 64/58 x time, about 10% more time, but your chance to find a factor is not 2y, it is only about 1.3y, due to the P-1 done in the range. So, you spend about 10% more time, and have about 30% more chance to clear one exponent. Whatever path is chosen, we will follow with our GPUs.

By the way, something is wrong with this Oliver guy... We put together almost all our P-1 factoring power (Chris, Kracker and me), so together we are now faster than any other participant taken alone, by far (we have only been doing this for 10 days, and look at the numbers in the rightmost column), and Oliver is still not in sight... Not even a ">999"! (which would mean that we are still faster, but the deadline is far away). Nothing! :shock: Something is wrong with this guy, someone stop him! :razz:

[ATTACH]10575[/ATTACH]

-------------------
** with fast DP cards like GTX580's and Titans (lower cards like the 560, 660, etc. can still do better by doing TF)
[URL="https://www.gpu72.com/reports/workers/p-1/month/"]He has a lot of power...[/URL]
Hmm... that chart doesn't look exactly right for some reason. |
[QUOTE=chalsall;362152][In the voice of Data]I copy you commander.[/Data]
I don't actually think that makes the most sense, but let's see how that works out; we might actually have the margin to pull it off (but only just)....[/QUOTE] I added 500GHzDays/day to my battery last night (see [URL="http://www.mersenneforum.org/showthread.php?p=362101"]this post[/URL]). After I was able to install .Net 4, and run CCC and MISFIT with it, I could set the proper clocks for the card; I ran "-st2" and everything was fine, so I am crunching right now at 1400G/day - the "top difference" was 3900G, now it is going down :razz:. Technically, I am in the 100%-ile, so you can remove your CPUs now. I will continue to TF, and if you want me to "save" the result files and send them to you instead of reporting them, please PM.
[QUOTE=chalsall;362144]What are your orders, Admiral?[/QUOTE]
I don't give orders! I'm happy to help work out our best strategy. It seems that at any point in time we want to TF the exponent that gives the biggest payoff. So let's define the payoff function:

profit = chance-of-finding-a-factor * LL-cost / TF-cost

TF-cost is proportional to 2^TF-bit-level / exponent
chance-of-factor is 1/TF-bit-level
LL-cost is exponent * per-iteration-time

I think we can (mostly) ignore P-1 since we will end up doing P-1 on all LL candidates. Putting it together, we want to maximize exponent^2 * per-iteration-time / (2^bit-level * TF-bit-level).

Back to our 58M / 64M example. We need a table from James detailing the per-iteration times for the most common GPUs in play. I guess that would be a 580, but the actual choice probably makes little difference. Say the per-iteration times for 58M and 64M are 10ms and 12ms respectively. Then GPU72 should hand out 58M since 58M^2*10/(2^73*73) > 64M^2*12/(2^74*74).

One wrinkle: You don't want the GPU72 server to get too far ahead of the LL wavefront. If you added the 70M to 71M range to the server today, we would immediately start factoring that range to 2^72, as those exponents would have the maximum payoff. I'm guessing the server should be adding the same number of exponents to the pool each day as the Primenet server hands out for LL testing.
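For what it's worth, the payoff function above is easy to put into code. A minimal sketch; the 10ms/12ms per-iteration times are the same guesses as in the post, and the real values would come from James' timing tables:

```python
def payoff(exponent, bit_level, per_iter_ms):
    """George's profit metric: exponent^2 * per-iteration-time
    divided by (2^bit-level * bit-level)."""
    return exponent ** 2 * per_iter_ms / (2.0 ** bit_level * bit_level)

p58 = payoff(58e6, 73, 10)  # a 58M exponent going 72 -> 73
p64 = payoff(64e6, 74, 12)  # a 64M exponent going 73 -> 74

print(p58 > p64)  # True: GPU72 should hand out the 58M first
```

A server could rank every available candidate by this number each night and hand out from the top of the list.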
Two small observations again:
Subjective one: The name "profit" is already biased :P Your formula already suggests that it is more profitable to do LL (numerator) than TF (denominator), without any calculus, hehe.

Objective one: for LL, the time is "1x" when DC, and "2x" when LL. Also, ignoring P-1 is not right, because in the DC range, P-1 is already done, and it does not have to be done again. The 10/12 timing is "off" from the real values, but it is proportionally (about) right, and we are only interested in proportion here. I only talk for the GTX580 (the 7970/7990 are much better, doing TF for another 2 bits more :razz:; this does not mean that they have to do it - it might not be optimal in context: other people would be able to clear the exponents faster).

Generally, we all agree here that going to 74 is a bit "costly", especially since P-1 became a cuda-enabled thing. One will be much better off clearing a range by doing P-1 in it, in any situation. My argument is only "74 against 73 bits in the actual LL front range". For the high DC range (55M+), we both agree that it is more profitable to raise them to 73, doing two of them in about the same time it would take to do a 64M+ to 74. What we don't agree on is the "amount of profit": you say two times (i.e. ~100% more), I say 30% more, due to the P-1 (already) done in the range. Of course, the "profit" gets higher as "55M" gets higher (to 58M) and 64M gets lower; it may get almost doubled in the middle, but it will never be double, even if you have the exponents in the same range, because those counted as "DC" have already been "sieved" to some extent by the P-1, and those counted as "LL" are fresh, unsieved.

Edit: the "wrinkle" is common sense; we all agree with that. Exhausting one type of work "because we can" would bankrupt many other participants who can [U]only[/U] do that type of work, and that is neither nice nor wanted.
The wrinkle can be accounted for easily enough by multiplying the result by a sigmoid function, with a percentage based on the distance from the wavefront, where 100% would be the next available candidate in the wavefront or less (because numbers can appear behind the wavefront as they are returned incomplete) and 0% would be some distance out in front of the wavefront, say 2 months' worth of work.
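To make the suggestion concrete, here is one possible shape for such a weighting. This is entirely my own sketch: the wavefront position, the 2-month horizon (taken here as ~4M of exponent range), and the steepness are all made-up parameters.

```python
import math

def wavefront_weight(exponent, wavefront, horizon=4e6):
    """Sigmoid taper: ~1 at or behind the wavefront, ~0 at wavefront + horizon.
    The horizon (assumed ~2 months of work, 4M of exponent range) and the
    steepness divisor are arbitrary tuning choices, not project values."""
    x = (exponent - wavefront - horizon / 2) / (horizon / 10)
    return 1.0 / (1.0 + math.exp(x))

# The payoff of each candidate would be multiplied by this weight before ranking.
print(wavefront_weight(60e6, 60e6) > 0.99)   # True: full weight at the front
print(wavefront_weight(66e6, 60e6) < 0.01)   # True: negligible past the horizon
```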
[QUOTE=Prime95;362178]I don't give orders! I'm happy to help work out our best strategy.[/QUOTE]
That was a joke, George. :smile: |
And one final note. Never give out a TF assignment for a bit depth greater than recommended by James' TF vs. LL break-even chart.
Thus, with unlimited GPU firepower we'd factor everything up to James' limits. With less than unlimited firepower, we'd always do the TF with the biggest payoff. Heck, even davieddy might agree with such a strategy. In practice, will implementing such a strategy make much of an impact? Are there significant numbers of exponents in the 55M-64M range factored to only 2^72 being released by PrimeNet each night? |
[QUOTE=chalsall;362144]What are your orders, Admiral?
I'm serious.[/QUOTE] [QUOTE=chalsall;362196]That was a joke, George. :smile:[/QUOTE] :ermm: |
[QUOTE=TheMawn;362221]:ermm:[/QUOTE]
:jokedrum: |
[QUOTE=chalsall;362223]:jokedrum:[/QUOTE]
I'll see your :jokedrum: and raise you one :weirdo: |
[QUOTE=TheMawn;362226]I'll see your :jokedrum: and raise you one :weirdo:[/QUOTE]
I'm comfortable with that. "Normal is not something to aspire to, it is something to get away from." -- Jodie Foster. |
[QUOTE=Prime95;362208]And one final note. Never give out a TF assignment for a bit depth greater than recommended by James' TF vs. LL break-even chart.[/QUOTE]
The [URL="https://www.gpu72.com/account/getassignments/lltf/"]web-interfaces[/URL] for TFing assignments will pop up a warning if the user "pledges" to anything higher than 75. But, if the user "says" they are sure they want to go higher, the system will give the assignment. As always, the participant should be allowed to do anything they want to, so long as it doesn't harm the project or others, even if it Doesn't Make Sense [SUP](TM)[/SUP].

[QUOTE=Prime95;362208]In practice, will implementing such a strategy make much of an impact? Are there significant numbers of exponents in the 55M-64M range factored to only 2^72 being released by PrimeNet each night?[/QUOTE]

A [URL="https://www.gpu72.com/reports/current_level/"]bit less than 11,000 LL candidates[/URL] below 57M are only TFed to 72 (read: most of them). With an average "abandonment rate" of ~80%, we could process most of them over time. Although, I imagine this rate will be lower because of the "preferred" status of the very lowest range.

I've started the configuration changes needed for this. However, unfortunately, my main monitor died on me this morning, so I'm a bit disabled at the moment (how do people work with only one monitor these days?!?!?). I should have a replacement tonight.
[QUOTE=chalsall;362289]I've started the configuration changes needed for this. However, unfortunately, my main monitor died on me this morning, so I'm a bit disabled at the moment (how do people work with only one monitor these days?!?!?). I should have a replacement tonight.[/QUOTE]
Well I personally only use the one monitor, but it has 27 juicy inches of 1080p goodness. Still I find myself badly wanting a second. |
[QUOTE=TheMawn;362323]Well I personally only use the one monitor, but it has 27 juicy inches of 1080p goodness. Still I find myself badly wanting a second.[/QUOTE]
My secondary monitor is 1920 by 1200 pixels. As any girl will tell you, it's not the inches that count, but how you use them.... :wink: (And, yes. Try working with a second (or more) monitor. You'll find yourself much more productive.) |
[QUOTE=TheMawn;362074]
[IMG]http://mersenneforum.org/attachment.php?attachmentid=10572&d=1387089113[/IMG] I never know if I'm reading these things correctly and if I am reading enough of them to get the full picture. Let's take a crack at it. [/QUOTE]I read these tables daily AFTER NOON and even I find it tricky. You'll see why. The pages you cite were before the noon update[QUOTE] During the period of December 8 to December 15,[/QUOTE]And so the period was actually Dec 7th to Dec 14th [QUOTE] in the 60M range, 2359 exponents were taken from something to 74.[/QUOTE]In particular, 322 of which were <63M[QUOTE]1017 LL's were completed in the 60M range.[/QUOTE]between[B] Dec 8th and Dec 14th: 6 days. [/B] Now throw in the 50M range and any DCs (which are subtracted from first time LLs total) and you get ~1800 LL completions in 6 days, as we always do.[QUOTE]From this it would seem factoring is massively ahead of LL. [/QUOTE]Well I reckon each day we complete 300 LLs. We want to at least replace these with virgin assignments. (2359 - 322)/7 = 291 I told you it was tricky. ____________ @ Mods: I am grateful to be allowed back, but if you do not allow me to edit my posts, I won't post a painstaking reply like this again. I think you have learned what to expect instead: :davieddy:[SUP]MT[/SUP] |
[QUOTE=davieddy;362343]I read these tables daily AFTER NOON and even I find it tricky.[/QUOTE]
David... If you are so knowledgeable, why don't you build your own tables? |
[QUOTE=chalsall;362344]David...
If you are so knowledgeable, why don't you build your own tables?[/QUOTE] Just fix yours. I've been telling you incessantly that all you need to do is cut out your midnight fiddling.
[QUOTE=davieddy;362345]Just fix yours.[/QUOTE]
I've been accused of not taking direction well.... :smile: |
[QUOTE=chalsall;362348]I've been accused of not taking direction well.... :smile:[/QUOTE]
I can debug, but I prefer to get the code right the first time. If a problem takes two pages of math to solve, you soon learn that it's better to avoid typos than to search for one when the answer comes out wrong.
This page
[URL]http://www.mersenne.info/exponent_status_tabular_delta/1/0/[/URL] always makes most instructive reading in the morning. |
1 Attachment(s)
Thanks, David. That's actually entirely the answer I was looking for. Very little TF going on in the 50M, as we *should* be finished with that, although there is talk about raising the upper 50's to 73, if I understood that correctly? Not sure how much I like that...
Attempt number 2: See attached. This is for a one-month period. [LIST][*]Negligible TF in 50M[*]10,000 exponents (~> 300 per day) brought to 74 in 60M. [*]Of these, 9,000 were from 71 and 3,000 were from 73. [B]Where are the remaining 2,000 exponents?[/B][*]3,500 exponents have had LL done in the 50M. Negligible DC.[*]5,200 exponents have had LL done in the 60M. Negligible DC.[/LIST][LIST][*]9,000 TF from 71 to 74.[*]8,200 exponents with LL.[/LIST] It seems to me that in the last month, the 71-to-74 effort alone has been enough to keep up with LL. There's an extra few thousand 71-to-73. By the looks of things, we can afford to spend a bit more time in the 50M, but maybe it's in our best interest to increase the TF lead at the wavefront. |
[QUOTE=TheMawn;362362][LIST][*]Of these, 9,000 were from 71 and 3,000 were from 73. [B]Where are the remaining 2,000 exponents?[/B][/LIST][/QUOTE]
Obviously, about a thousand went only to 72 (from 71) and the other thousand were factored, therefore disappeared from the table, see last column. This is written following your rounding system, where 600 is rounded to 1000, but if you sum the columns, there is no exponent disappearing. Everything is fine. And yes, we keep up with the wave... :razz: |
[QUOTE=TheMawn;362362]
Attempt number 2: See attached. This is for a one-month period.[/QUOTE]It is from Nov 17th to Dec 17th for TF (30 days) Nov [B]18th [/B]to Dec 17th for LL [B](29 days)[/B][quote] 10,000 exponents (~> 300 per day) brought to 74 in 60M Of these, 9,000 were from 71 and 3,000 were from 73. [B]Where are the remaining 2,000 exponents?[/B][/quote]!000 stopped off at 72 bits and 1000 were factored[quote]3,500 exponents have had LL done in the 50M. Negligible DC. 5,200 exponents have had LL done in the 60M. Negligible DC.[/quote]Acurately 8746 in 29 days[quote]9,000 TF from 71 to 74.[/quote]7000 from 71. 3000 from 73 of which 1000 were <63M (recycled i.e. not virgins)[quote]8,200 exponents with LL.[/quote]87000 in 29 days (~> 300.day)[quote]It seems to me that in the last month, the 71-to-74 effort alone has been enough to keep up with LL. There's an extra few thousand 71-to-73. By the looks of things, we can afford to spend a bit more time in the 50M, but maybe it's in our best interest to increase the TF lead at the wavefront.[/quote] I disagree:smile: |
[QUOTE=TheMawn;362362]maybe it's in our best interest to increase the TF lead at the wavefront.[/QUOTE]
What lead? What wavefront? But yes, it is certainly in our best interests. Especially for those flying pigs (74 bits for everything >63M). Won't life get interesting when we are discussing 75 bits at 75M? D |
David, you hit shift so you wrote !000, you go off by an order of magnitude writing 87000, and you could spell accurately correctly. Is that from typos you couldn't look over? Or someone else?
Ah! Thanks for the pointers!
First I wrote 8,200 instead of the 8,700 I meant. Oops! Second, I completely did not understand the unfactored column, though it does make sense now. Third, I was taking the completely wrong approach to figuring out the missing factors. I went and summed up all the "totals" and got a huge negative number because you're supposed to subtract -945 (so add 945) to get 0. I started with the big numbers, too, so I was around -2000 with only a few +1 and +30 to go, so I never finished (or I would have noticed having -1890 at the end being -945*2 and seen my mistake). Hence my missing $2000. Same as the $25 hotel room problem :smile:

On this note, would it be difficult to change that column to "factored" from "unfactored" (and switch the sign) or am I the only one who had any confusion there? Is there a way to find which bit level the factors found were actually in? I could estimate that there were 333 in each range (72, 73, 74). If we counted "factors found" in "factored to" we would have:

+1,000 in 72
-2,500 in 73
+10,500 in 74
(still -9000 in 71)

Still looks to me like we're ahead of LL.
[QUOTE=TheMawn;362430]+1,000 in 72
-2,500 in 73
+10,500 in 74
(still -9000 in 71)

Still looks to me like we're ahead of LL.[/QUOTE]

To add a bit to this, the amount of work put in is equal to the work required to bring 1,000 from 71 to 72, minus the work to take 2,500 from 71 to 73, plus the work to bring 10,500 from 71 to 74. This is 9,000 of 71 to 72, 8,000 of 72 to 73, and 10,500 of 73 to 74. With each bit level costing twice the one below it, this is like 2,250 + 4,000 + 10,500 = 16,750 of "73 to 74 worth of work". This amount is also equivalent to ~9,570 of straight-up 71 to 74.

If we wanted to think worst-case, and assume all the factors found were 2[SUP]72.001[/SUP], then we get:

+700 in 72
-2,800 in 73
+10,200 in 74

Following the same procedure, this comes to 13,650 of 73 to 74, or 7,800 of 71 to 74. We must therefore have the firepower to bring AT THE VERY LEAST 7,800 exponents from 71 to 74, ASSUMING no work was put into finding the factors at all. Assuming factors are evenly distributed and that the whole bit range was factored (vs stopping when a factor is found), then there is enough for nearly 9,600.
[QUOTE=TheMawn;362430]
On this note, would it be difficult to change that column to "factored" from "unfactored" (and switch the sign) or am I the only one who had any confusion there? [/QUOTE] Well, I think it kind of makes sense as it goes. It's in line with the other columns, where we have "-" signs indicating that a particular bit level has "lost" a certain amount of exponents. Similarly, the number of "Unfactored" exponents in a certain range has gone down by x, so we have "[COLOR="Red"]-x[/COLOR]" for that range.
Stop Press
Whoopee!
In response to the daily threat of running out of available candidates, Chris has released 300 60M and 61M candidates which (PLEASE GOD) will have been TFed only to 73. His usual strategy is to slow down the assignment rate by bagging everything up to 68M, leaving only unappetizing fare during the peak demand. What a ****. |
[QUOTE=davieddy;362478]What a ****.[/QUOTE]
At your service... :smile: You've never managed a real-time system you're not fully in control of, have you David? |
[QUOTE=chalsall;362479]You've never managed a real-time system you're not fully in control of, have you David?[/QUOTE]
That sounds like a big compliment:smile: |
[QUOTE=davieddy;362481]That sounds like a big compliment:smile:[/QUOTE]
Wasn't meant to be... 1. You haven't managed a business. 2. You haven't managed anything dealing with the Internet. 3. You haven't played in the stock markets. Are you familiar with the term "independent actors"? |
[QUOTE=chalsall;362484]Wasn't meant to be...
1. You haven't managed a business. 2. You haven't managed anything dealing with the Internet. 3. You haven't played in the stock markets. Are you familiar with the term "independent actors"?[/QUOTE] NMFP[SUP]MT[/SUP] |
[QUOTE=davieddy;362487]NMFP[SUP]MT[/SUP][/QUOTE]
No, it's Not [your] [Fscking] Problem. But perhaps you could let those of us who have decades of experience doing this kind of thing do this kind of thing.... |
[QUOTE=chalsall;362488]No, it's Not [your] [Fscking] Problem.
But perhaps you could let those of us who have decades of experience doing this kind of thing do this kind of thing....[/QUOTE] I could actually do without the theatrics, I think, Malcolm. Nichola Murray |
[QUOTE=chalsall;362488]
But perhaps you could let those of us who have decades of experience doing this kind of thing do this kind of thing....[/QUOTE] What kind of thing? Butting into a project about which you are clueless? |
Not clueless, that's for sure. I'm sure chalsall thinks the same of you. Frankly, I think both of you have valid points.
|
Chalsall,
Can I get the current recommendations for what ranges we're doing to what bit level? I don't use the 'What makes most sense' option, so I have to fine-tune my requests a bit. I think currently I have 60M-100M, lowest exponent to 74, but if we're only doing 74 past 63M, I can set multiple configs so it will request... I don't know, 50M-63M to 73, and then 63M-74M to 74 or some such.
[QUOTE=Aramis Wyler;362499]Chalsall, Can I get the current recommendations for what ranges we're doing to what bit level?[/QUOTE]
If you are using MISFIT, please choose option #9 ("Let GPU72 decide."). If you are using the GPU72 manual assignment forms, the defaults are What Makes Sense at that particular moment. Edit: I didn't actually answer you. We're currently taking anything available below 60M to 73, and anything above 62M to 74. |
[QUOTE=davieddy;362491]I could actually do without the theatrics, I think, Malcolm.
Nichola Murray[/QUOTE] Sigh. Please, David. How can you fail to see the irony of the above statement? |
[QUOTE=TheMawn;362323]Well I personally only use the one monitor, but it has 27 juicy inches of 1080p goodness. Still I find myself badly wanting a second.[/QUOTE]
I recently upgraded from dual 1080p to this [url=http://www8.hp.com/ca/en/products/monitors/product-detail.html?oid=5181723#!tab=features]2560x1440 beauty[/url] at home and at work. It's the first time I haven't felt the [i]need[/i] for more pixels, but more would be nice... |
[QUOTE=kladner;362502]Sigh.
Please, David. How can you fail to see the irony of the above statement?[/QUOTE] You of all people, Kieran, know perfectly well that any irony was intended. Perhaps you would skim through this thread, actually started by his holiness abusing his moderation power for the fiftieth time, and see who has done the most to keep it on track, and who has done the most to turn it into the tiresome "Let us GPUto72 mob bully David ... we know he will be capable of returning it in spades". I sense that one of us is resorting to bullying in desperate fear of being exposed as a fraud. David
But irony in what direction, and to what end? I cannot speak to what the [STRIKE]gods and demigods[/STRIKE] supermods and mods do. I'm really not sure, though, that any bullying has been one-sided. I've seen quite a bit of pushing and shoving, and have seen support expressed for you.
I think that, to some extent, many people are not as passionate, nor as knowledgeable about this pastime of ours, as you and some others may be. I cannot argue, nor sometimes even follow the arguments, about appropriate TF levels. It would not really bother me to be taking more exponents to 73 instead of 74. That last bit level takes a lot more time. But as long as the system is based on running to 74 above some cut-off point, I'm not going to run the lower levels and leave some other poor schmuck to run that last, more expensive level. I've done enough such cleanup work, and would not slough it onto someone else.
The :censored: "argument" is not very hard to follow and it goes in circles like this:
[YOUTUBE]RkP_OGDCLY0[/YOUTUBE] It is funny to follow once. Not 24 times, for the Universe's sake, eh? It gets old, really old. |
[QUOTE=Batalov;362518]The :censored: "argument" is not very hard to follow and it goes in circles like this:
[Insert silly vid here.] It is funny to follow once. Not 24 times, for the Universe's sake, eh? It gets old, really old.[/QUOTE] Sheesh! The Sopranos with a laugh track. :cmd: |
The other way around.
Mr. Show ran 1995-1998; The Sopranos (1999-2007) is Mr. Show without the laugh track.
[QUOTE=kladner;362517]But irony in what direction, and to what end?[/QUOTE]
Irony for irony's sake, since Chris hasn't dared to engage in serious discussion. With the notable exception of Aramis, Foetus Boy seems more up to speed on this topic than anyone except me, and it seems that includes George. You will understand this mildly mathy point, though: From the point of view of the LL assignment seeker, a 69M exponent TFed to 74 will take 30% longer to test than a 60M exponent TFed to 73. Furthermore, it will be (60/69)*(74/73) = 0.88 times as likely to be prime. That is what I meant by "unappetizing fare", and this cannot do anything but slow the assignment rate. Remember that maximizing this rate is the [B]sole[/B] purpose of these heady bit levels and of this debate. D
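Both figures check out under the usual rough approximations. This is my own sketch, not David's working: LL time is taken as proportional to the exponent squared (more iterations times a longer FFT), and the chance of a TFed candidate being prime as proportional to bit-level/exponent.

```python
# Assumed scaling laws (rough heuristics, not project code):
#   LL time ~ p^2;  chance of primality after TF to bit level b ~ b / p.

def ll_time_ratio(p_slow, p_fast):
    return (p_slow / p_fast) ** 2

def prime_chance_ratio(p1, b1, p2, b2):
    return (b1 / p1) / (b2 / p2)

print(round(ll_time_ratio(69e6, 60e6), 2))               # 1.32: ~30% longer
print(round(prime_chance_ratio(69e6, 74, 60e6, 73), 2))  # 0.88: as stated
```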
[QUOTE=Batalov;362522]The other way around.
Mr.Show (1995–1998) Sopranos (1999-2007) is Mr.Show without the laugh track.[/QUOTE] Point taken. Wow! I had no idea that The Sopranos ran that long. |
[QUOTE=Prime95;362208]And one final note. Never give out a TF assignment for a bit depth greater than recommended by James' TF vs. LL break-even chart.[/QUOTE]What James' stepped curve shows is that GPUs should be directed entirely to TF, and that the bit level should be determined by available firepower.
However James' steps occur when the exponent increases by 2[SUP]1/3[/SUP], and so do David's firepower-limited steps. Now ain't that as cute as it is unsurprising?[QUOTE]Thus, with unlimited GPU firepower we'd factor everything up to James' limits. With less than unlimited firepower, we'd [STRIKE]always do the TF with the biggest payoff[/STRIKE] factor everything up to David's limits. Heck, even davieddy might agree with such a strategy.[/QUOTE]With the proviso that[QUOTE]In practice, will implementing such a strategy make much of an impact? Are there significant numbers of exponents in the 55M-64M range factored to only 2^72 being released by PrimeNet each night?[/QUOTE]No. But if you and Chris wish to persist [STRIKE]tidying up[/STRIKE] mucking about behind the wavefront, the cost is raising my cut-off points. Heck, even garo might agree with such a strategy. :davieddy: |
Some people might like a formula for James' and my curves.
It is: bitlevel is proportional to (TF firepower/LL firepower) * exponent[SUP]3[/SUP] . END OF STORY. BTW do you think there are any GPUs out there which don't belong to GPUto72? |
[QUOTE=davieddy;362478]Whoopee!
In response to the daily threat of running out of available candidates, Chris has released 300 60M and 61M candidates which (PLEASE GOD) will have been TFed only to 73. His usual strategy is to slow down the assignment rate by bagging everything up to 68M, leaving only unappetizing fare during the peak demand. What a ****.[/QUOTE] Sure enough 1500 available LL assignments between 63M and 69M went AWOL between 1400 and 1500 just now. The crowd outside are clamouring for bread, Marie Antoinette. "Let them eat soixante-neuf M exponents." |
[QUOTE=davieddy;362508]I sense that one of us is resorting to bullying in indesperate[sic] fear of being exposed as a fraud.[/QUOTE]
I concur. |
[QUOTE=davieddy;362536]The crowd outside are clamouring for bread, Marie Antoinette.
"Let them eat soixante-neuf M exponents."[/QUOTE] 1. Everything being assigned for LL'ing has been TFed to at least 74, and P-1ed. 2. This work (read: LLing) needs to be done. 3. Over 50% of LL assignments won't get a single cycle done on it. 3.1. Over 80% of LL assignments will be "recycled" after it is abandoned. 4. Where, approximately (based on some hard math, please), is the next MP expected to be? In the 60s, or 70s? 5. We have the fire-power [URL="https://www.gpu72.com/reports/estimated_completion/primenet/"]to pull this (read: TFing to 74) off[/URL]! |
[QUOTE=chalsall;362541]1. Everything being assigned for LL'ing has been TFed to at least 74, and P-1ed.
2. This work (read: LLing) needs to be done. 3. Over 50% of LL assignments won't get a single cycle done on it. 3.1. Over 80% of LL assignments will be "recycled" after it is abandoned. 4. Where, approximately (based on some hard math, please), is the next MP expected to be? In the 60s, or 70s? 5. We have the fire-power [URL="https://www.gpu72.com/reports/estimated_completion/primenet/"]to pull this (read: TFing to 74) off[/URL]![/QUOTE] 1. Pigs still airborne (just). 2. and LLers like to do it in ascending order. 3. No because they want a LOWER one. 4. Poisson. ~50% chance of it being <80M. 5. Read this thread you created. SEND HER TO THE GUILLOTINE |
Actually, some of us want to do it in descending order, but we can't figure out how to count backwards from [TEX]$\omega$[/TEX].
|
[QUOTE=owftheevil;362556]Actually, some of us want to do it in descending order, but we can't figure out how to count backwards from [TEX]$\omega$[/TEX].[/QUOTE]
:missingteeth: |
[QUOTE=davieddy;362548]SEND HER TO THE GUILLOTINE[/QUOTE]
Please define "HER". |
Just when I thought I was out, they pull me back in. I left this debate weeks ago and here I am again. As soon as this gets the slightest bit stupid, I'm out again.
To be perfectly honest, I don't give two bits what order I do my LL in. Granted, it's kind of fun to be assigned a 57M that gets done in a ~ridiculously short time. I might be wrong, David, but my belief here is you're pretty much the only one who cares if they do a 68M and then a 62M instead of doing a 62M and then a 68M. I completely agree with Chris regarding the notion of "It will get done eventually" I think it's well established that we have the firepower to TF everything to 74 bits before LL. I think it's well established that nobody really cares what order the work gets done in. |
[QUOTE=chalsall;362566]Please define "HER".[/QUOTE]
He's given up on being Malcolm Tucker and he's being Marie Antoinette now. He'll have to cope with us not taking him very seriously. |
[QUOTE=TheMawn;362567]Just when I thought I was out, they pull me back in. I left this debate weeks ago and here I am again. As soon as this gets the slightest bit stupid, I'm out again.[/QUOTE]
That's actually quite funny. Thanks for that. :smile: [QUOTE=TheMawn;362567]I think it's well established that nobody really cares what order the work gets done in.[/QUOTE] Some might. Some might not. If you have a chance to buy a gift for yourself, "The Mighty Boosh" is surrealistically funny.... |
[QUOTE=chalsall;362566]Please define "HER".[/QUOTE]
Whoever is witholding bread from the hungry LLers. |
[QUOTE=TheMawn;362567]Just when I thought I was out, they pull me back in. I left this debate weeks ago and here I am again. As soon as this gets the slightest bit stupid, I'm out again.
To be perfectly honest, I don't give two bits what order I do my LL in. Granted, it's kind of fun to be assigned a 57M that gets done in a ~ridiculously short time. I might be wrong, David, but my belief here is you're pretty much the only one who cares if they do a 68M and then a 62M instead of doing a 62M and then a 68M. I completely agree with Chris regarding the notion of "It will get done eventually" I think it's well established that we have the firepower to TF everything to 74 bits before LL. I think it's well established that nobody really cares what order the work gets done in.[/QUOTE] [QUOTE=TheMawn;362568]He's given up on being Malcolm Tucker and he's being Marie Antoinette now. He'll have to cope with us not taking him very seriously.[/QUOTE] Reply deleted by Prime95 due to rudeness |
[QUOTE=davieddy;362536]Sure enough 1500 available LL assignments between 63M and 69M went AWOL between 1400 and 1500 just now.
The crowd outside are clamouring for bread, Marie Antoinette. "Let them eat soixante-neuf M exponents."[/QUOTE] I am accusing Chalsall of deliberately impeding the LL assignment rate and losing potential LLers. Justify yourself Halsall, or face the guillotine. Aand stop this pathetic abuse of your powers to delay my posts until you have thought up a reply, to make it appear that you are more quick-witted than I, and deny my post the consideration time by all readers due to it. |
[QUOTE=TheMawn;362567]Just when I thought I was out, they pull me back in. I left this debate weeks ago and here I am again. As soon as this gets the slightest bit stupid, I'm out again.[/QUOTE]You consider that sensible, I suppose.[QUOTE]To be perfectly honest, I don't give two bits what order I do my LL in. Granted, it's kind of fun to be assigned a 57M that gets done in a ~ridiculously short time.[/QUOTE]Don't tell me you are using you GPU for LL work.[QUOTE]I might be wrong, David, but my belief here is you're pretty much the only one who cares if they do a 68M and then a 62M instead of doing a 62M and then a 68M.[/QUOTE]Well that's good to hear. I have suggested that it would be good to have an elite squad of quick LLers to tackle the highest exponents being dished out.
I, and many others, will not take on a 68M exponent ATM. What we don't care about is whether it is TFed to 73 or 74 bits.[QUOTE]I completely agree with Chris regarding the notion of "It will get done eventually"[/QUOTE]Reply deleted by davieddy due to rudeness.[QUOTE] I think it's well established that we have the firepower to TF everything to 74 bits before LL.[/QUOTE]I think it is well established that we have the firepower to produce 300 virgin LL assignments per day. This matches the LL completion rate neatly. However the burning question is "Does it meet the demand for LL assignments?". This is being swept under the carpet ATM.[QUOTE]I think it's well established that nobody really cares what order the work gets done in.[/QUOTE]Neither would I if the daily assignments didn't demonstrate an overwhelming preference for the lowest exponent available, resulting a very orderly assignment until all hell breaks loose at 1400 UTC as a result of Chris panicking. David |
[QUOTE=davieddy;362536]The crowd outside are clamouring for bread, Marie Antoinette.
"Let them eat soixante-neuf M exponents."[/QUOTE] [QUOTE=davieddy;362548] SEND HER TO THE GUILLOTINE[/QUOTE] [QUOTE=davieddy;362585]and deny my post the consideration time by all readers due to it.[/QUOTE] Yes, please stop. My day would have been much richer had I had more time to consider these. |
[QUOTE=owftheevil;362556]Actually, some of us want to do it in descending order, but we can't figure out how to count backwards from [TEX]$\omega$[/TEX].[/QUOTE]
:goodposting: hahaha |
[QUOTE=davieddy;362585]I am accusing Chalsall of [B][U]deliberately [/U][U]impeding[/U][/B] the LL assignment rate and losing potential LLers.
Justify yourself Halsall, or face the guillotine. Aand [B][U]stop this pathetic abuse of your powers to delay my posts[/U][/B] until you have thought up a reply, to make it appear that you are more quick-witted than I, and deny my post the consideration time by all readers due to it.[/QUOTE] A) Exactly what benefit would Chris derive from holding up the project? As to loosing workers, David, you have a more distinct record in that regard. B) How do you know that it is chalsall who is moderating your posts? There are others with that much power or more who have taken exception to the tone of some of your remarks. Most recently, and expressly, George did so. |
[QUOTE=kladner;362616]A) Exactly what benefit would Chris derive from holding up the project? As to loosing workers, David, you have a more distinct record in that regard.
B) How do you know that it is chalsall who is moderating your posts? There are others with that much power or more who have taken exception to the tone of some of your remarks. Most recently, and expressly, George did so.[/QUOTE] I shall reply when several posts I wrote with care this morning are actually published. On second thoughts I'll reply now, since it is you. A) He might keep the flying pigs (TF to 74) airborne. Why has he not supplied an explanation for his actions different from mine? Pete was extremely rude and ignorant. He talked himself into quitting the project. Pity about the firepower, but I say good riddance, and that goes for Chris too. OK, keep GPUto72 as a powerful team, but let other GPUs into the project. Primenet could and should manage TF by GPU very easily. B) Reply deleted by Malcolm Tucker. Reason: rudeness. Merry Christmas [SPOILER]YOU MASSIVE GAY SHITE[/SPOILER]. x D |
[QUOTE=davieddy;362619]Reason: rudeness.[/QUOTE]
Merry Christmas. Your post in the moderation queue starting a new thread about whether LL users should have access to the smallest exponents will not be approved. Your demand that you be taken off moderation is a non-starter. As to your recent insulting behavior, this is my notice to you that your next rude and insulting post will result in a lifetime ban. I've never done that before - you can be my first. |
[QUOTE=chalsall;362289]A [URL="https://www.gpu72.com/reports/current_level/"]bit less than 11,000 LL candidates[/URL] below 57M are only TFed to 72 (read: most of them). With the average "abandonment rate" of ~ 80%, we could process most of them over time. Although, I imagine this rate will be lower because of the "preferred" status of the very lowest range.[/QUOTE]
As many of you will have noticed, GPU72 has begun the process of bringing in recycled candidates between 54M and 57M TFed to 72 "bits", and making them available for TFing to 73. I have also updated the [URL="https://www.gpu72.com/reports/estimated_completion/primenet/"]Estimated Completion Report[/URL] to reflect this new effort. At our current 30 day rolling LLTFing rate, we could process the 6,919 candidates in this range in 9.8 days (~700 candidates per day!). Of course, we won't get access to all of these, and those we do only over the next two months or so (a hundred or so a day). Also, while I'm writing, we've almost finished off bringing 62M up to 74 from 73. An extra 165 candidates eliminated by doing this work. Thanks for the suggestion David! :smile: Edit: While I was typing this Oliver dumped his week's work. This now [URL="https://www.gpu72.com/reports/current_level/"]fully completes [/URL] taking 63M to 74! |
[QUOTE=chalsall;362667]As many of you will have noticed, GPU72 has begun the process of bringing in recycled candidates between 54M and 57M TFed to 72 "bits", and making them available for TFing to 73.
[/QUOTE] I would like to chew on those, however the Get Assignments page won't let me! This might be related to the hack you put it to stop me from getting the 50M exponents a month or two ago. |
[QUOTE=chalsall;362667]Also, while I'm writing, we've almost finished off bringing 62M up to 74 from 73. An extra 165 candidates eliminated by doing this work. Thanks for the suggestion David! :smile:[/QUOTE]
No problem. I just thought that it was work which had to be done eventually, and would help meet the demand for LL assignments. :davieddy: |
[QUOTE=chalsall;362667]Also, while I'm writing, we've almost finished off bringing 62M up to 74 from 73. An extra 165 candidates eliminated by doing this work. Thanks for the suggestion David! :smile:[/QUOTE]No problem.
FB told me so many times that that TF to 74 >63M was getting so far ahead of LL assignment that I drew the logical conclusion that it must be true. At first I thought "That's a Good Thing[SUP]TM[/SUP]. Let's keep up the good work until the lead is big enough to contemplate starting on 75 bits, even though initially it won't keep pace". The I thought "No. George and Chris want to tidy up their legacy <63M. Perhaps I should encourage them. Then they will love me even more than they do already". |
Be sure that you set all the fields appropriately. I had a problem until I remembered to reduce the "Factor To" value to 73. You might also try setting an upper range limit, and/or selecting "Lowest Exponent".
|
[QUOTE=kladner;362702]Be sure that you set all the fields appropriately. I had a problem until I remembered to reduce the "Factor To" value to 73. You might also try setting an upper range limit, and/or selecting "Lowest Exponent".[/QUOTE]
Correct. Set the "To" value to be 73, and select "Lowest Exponent", and the 50Ms will appear in the preview panel (if any are available, of course). |
[QUOTE=kladner;362702]Be sure that you set all the fields appropriately. I had a problem until I remembered to reduce the "Factor To" value to 73. You might also try setting an upper range limit, and/or selecting "Lowest Exponent".[/QUOTE]
That did it :) |
[QUOTE=Prime95;362143]This doesn't pass a simple common sense test. Pose the question this way: Would GIMPS be better off if your GPU took two 58M exponents from 72 to 73 or one 64M exponent from 73 to 74?
1) The 2 58M TFs take only a little more time than the 1 64M TF. 2) The 2 58M TFs have a little more than twice the chance of finding a factor. I claim the extra iterations and higher FFT length of a saved 64M LL test does not compensate for the nearly double chance of saving one 58M LL test.[/QUOTE] I would say the overriding decider was that taking the 64M expo to 74 bits created a virgin LL assignment. Furthermore neither are worth the effort. [QUOTE=Prime95;362626]Merry Christmas. Your post in the moderation queue starting a new thread about whether LL users should have access to the smallest exponents will not be approved. Your demand that you be taken off moderation is a non-starter. As to your recent insulting behavior, this is my notice to you that your next rude and insulting post will result in a lifetime ban. I've never done that before - you can be my first.[/QUOTE] Perhaps you and Chris might delegate your moderation to others - Phil, Paul and William spring to mind - since you obviously have a conflict of interests in this thread. BTW a lifetime ban sounds exceptionally lenient in my case:smile: D |
[QUOTE=Mark Rose;362722]That did it :)[/QUOTE]
Cool! :tu: |
[QUOTE=kladner;362763]Cool! :tu:[/QUOTE]:threadhijacked:
|
[QUOTE=davieddy;362753]Perhaps you and Chris might delegate your moderation to others - Phil, Paul and William spring to mind - since you obviously have a conflict of interests in this thread.[/QUOTE]Fine by me. I check this thread most days.
|
[QUOTE=davieddy;362753]I would say the overriding decider was that taking the 64M expo to 74 bits created a virgin LL assignment.[/QUOTE]
What is your issue with "recycled" vs "virgin"? Sincerely, I don't understand the point. [QUOTE=davieddy;362753]Perhaps you and Chris might delegate your moderation to others - Phil, Paul and William spring to mind - since you obviously have a conflict of interests in this thread.[/QUOTE] Speaking for myself, I haven't edited [b]any[/b] of your posts. Here or elsewhere. (Or, at least, not for several months.) And I'm happy to delegate the moderation in this thread to others. For the record, I haven't approved / decided what not to approve in this thread for several days now. |
[QUOTE=chalsall;362819]What is your issue with "recycled" vs "virgin"?
Sincerely, I don't understand the point. Speaking for myself, I haven't edited [B]any[/B] of your posts. Here or elsewhere. (Or, at least, not for several months.) And I'm happy to delegate the moderation in this thread to others. For the record, I haven't approved / decided what not to approve in this thread for several days now.[/QUOTE] @mods. Please don't print my earlier reply. Reasons: 1) I couldn't delete/edit it 2) It was insufficiently insulting. 3) I am starting to get a bit tired of potty-training Chris.- |
[QUOTE=chalsall;362819]Sincerely, I don't understand the point.[/QUOTE]
Are you suggesting that you are occasionally insincere? Are you suggesting that I might have been under the delusion that you ever understood the point? @Paul: THX. I shall aspire to your standards of wit, which I used to do prior to all this aggro. Hope your medical state is on the mend, Mine is as implied in my comment on a life time ban. It would be fun to run into you sometime between 4-6th April. David |
[QUOTE=chalsall;362819]What is your issue with "recycled" vs "virgin"?
Sincerely, I don't understand the point.[/QUOTE]Neither does George. I think Aramis and Garo do, but I hesitate to say it because last time I did, Garo hit me very hard. There is concensus that if we directed [B]all [/B]firepower at virgin exponents, we can just about meet demand for assignments, although I have this strong hunch that this demand is being suppressed by your antics. [B]Why then do you (egged on by George) react by grabbing every recycled expo in sight, when you don't see why this does nothing to satisfy demand for LL assignment?[/B] I'll answer that for you. You wish to conceal the bollocks you have made of TF by GPU to date. [B]THAT IS MY ISSUE.[/B] David |
David, what you are saying does not make any sense at all! You have said before that you like to do LL testing on the smallest available exponent if possible. Why on earth would you prefer to be assigned a 60M with TF done to 72 bits than to 74 bits. The whole purpose of catching recycled exponents is to make sure nothing is assigned for LL testing without sufficient TF done. Catching recycled exponents makes complete sense especially since most exponents end up expiring 2-3 times before they are eventually completed. And it improves project throughput thus bringing the next prime find closer than if we had followed your virgin exponent obsession!
Shouting and insulting other people does not change this simple fact. |
[QUOTE=garo;362986]David, what you are saying does not make any sense at all![/QUOTE]Fighting talk.[QUOTE] You have said before that you like to do LL testing on the smallest available exponent if possible.[/QUOTE]Indeed. And so does almost any LL tester.[QUOTE] Why on earth would you prefer to be assigned a 60M with TF done to 72 bits than to 74 bits. [/QUOTE]Well that is academic since all expos >60M are TFed to 73+ thanks to Chris and George's insane idea of WMS[SUP]TM[/SUP].
If I were offered two 60M exponents, one at 72 and the other at 74 I would say "Why the **** were these not both TFed to 73?" [QUOTE]The whole purpose of catching recycled exponents is to make sure nothing is assigned for LL testing without sufficient TF done.[/QUOTE]Define sufficuent. If the bit level was deemed fit for a virgin, it should be fit for an old hag which has been round the block a few times.[QUOTE] Catching recycled exponents makes complete sense especially since most exponents end up expiring 2-3 times before they are eventually completed[/QUOTE]It is throwing good money after bad. It will get LLed eventually, and TFing a 54M (I ask you) from 72 to 73 does not add to the LL assignment pool. The effort would be MUCH better spent taking a 66M expo to 73[QUOTE]. And it improves project throughput[/QUOTE][B]RUBBISH[/B][QUOTE] thus bringing the next prime find closer than if we had followed your virgin exponent obsession![/QUOTE]Virgins for TFers. Us LLers prefer the more experienced candidate.[QUOTE]Shouting and insulting other people does not change this simple fact.[/QUOTE]You haven't been PMing Chalsall again have you? David |
| All times are UTC. The time now is 09:42. |
Powered by vBulletin® Version 3.8.11
Copyright ©2000 - 2021, Jelsoft Enterprises Ltd.