[QUOTE=chalsall;336800]David... Over the last year you've done a [URL="http://www.mersenne.org/report_top_500_custom/?type=1003&rank_lo=1500&rank_hi=1510"]grand total of 637.049 GHz days of LLing[/URL].
While every cycle is valued, over the same period the project you are so quick to disparage has [URL="https://www.gpu72.com/graphs/ghzdays_saved/year/"]saved almost five times that amount every single day[/URL] (and, also, saved the associated DC -- aggregate of ~10 times). What, exactly, is your agenda in your constant complaining that we (including George) aren't doing things "correctly"? I'm more than happy to listen to reasoned arguments. But mother-in-law like hysteria doesn't go over well.[/QUOTE] Sadly, Chris, facts mean nothing to David... his fantasy world is far superior to our mundane life. I personally think it funny how he removes the doubt 4x more often than I do. I do have a couple of questions that you can probably easily answer which may help this nonsensical arguement. In calculating out the following, use the assumption that there will be no increase or decrease in either CPUs or GPUs during the timeframes being calculated. 1) As of today, what is the approximate exponent handed out for a) LL and b) LL-TF? 2) At our current rate of LL-TF completions to 74 vs current rate of LL completions, on what date will we exhaust our 32 day lead? 3) On the date given in 2), what is the estimated exponent handed out for both a) LL's and b) LL-TF? 4) Using the 2 exponents garnered from 3). what is the approximate increase in LL runtime versus current runtime and the approximate decrease in LL-TF runtime versus current LL-TF runtime? 5) Using the results from 4), on the date found in 2), will the LL-TF completion rate increase from running higher exponents be enough to surpass the LL completion rate decrease from running higher exponents? If the answer to 5) is yes, then simple logic would indicate David's arguement is moot and we should continue LL-TF to 74 bits. Conversely, if the answer is no, then we need to extend the current 32 day grace period. |
[QUOTE=chalsall;336812]And you've milked it for everything it's worth, even though I apologized.[/QUOTE]
Milked what exactly? The numerous responses from a variety of respected forumites were directed at you calling someone "slow and stupid" and mocking his hardware ownership. They all refrained from addressing whether the accusation had any truth in it, that being completely beside the point. I allowed your blanket apology to stand for a couple of days before suggesting it was bogus, in as witty a way as I could muster under the circumstances. You have since proved my assessment of its sincerity to be spot on.

When you are in a hole, [B]STOP DIGGING.[/B]
[QUOTE=chalsall;336800]I'm more than happy to listen to reasoned arguments.[/QUOTE]
On the contrary, you have clearly stated the exact opposite on several occasions.
[QUOTE=bcp19;336815]Sadly, Chris, facts mean nothing to David... his fantasy world is far superior to our mundane life.
I personally think it funny how he removes the doubt 4x more often than I do. I do have a couple of questions that you can probably easily answer, which may help settle this nonsensical argument. In calculating the following, assume that there will be no increase or decrease in either CPUs or GPUs during the timeframes involved.

1) As of today, what is the approximate exponent being handed out for a) LL and b) LL-TF?
2) At our current rate of LL-TF completions to 74 bits vs. our current rate of LL completions, on what date will we exhaust our 32-day lead?
3) On the date given in 2), what is the estimated exponent being handed out for both a) LL and b) LL-TF?
4) Using the two exponents from 3), what is the approximate increase in LL runtime versus current LL runtime, and the approximate decrease in LL-TF runtime versus current LL-TF runtime?
5) Using the results from 4), on the date found in 2), will the LL-TF completion rate increase from running higher exponents be enough to surpass the LL completion rate decrease from running higher exponents?

If the answer to 5) is yes, then simple logic would indicate David's argument is moot and we should continue LL-TF to 74 bits. Conversely, if the answer is no, then we need to extend the current 32-day grace period.[/QUOTE]Cut the BS. Chris's useful tables show clearly that TFing to 74 will not keep pace with LL completion ATM. Furthermore, the gap between TFing and the highest LL allocations should be larger for comfort:

1) Because the LL allocation front is advancing worryingly erratically.
2) At least give P-1 a chance.

David
[QUOTE=davieddy;336823]1) Because the LL allocation front is advancing worryingly erratically.
2) At least give P-1 a chance.[/QUOTE] 1) Is that not due to the LLs being recycled because people gave up on them after 'the big announcement' rush?

2) A 4% to 6% chance of finding a factor is still a chance, no? Why not get rid of as many candidates as practical with GPU-TF, so that there is a greater chance that all exponents will get a decent P-1, even if the LL'er has only 40M set aside for stage 2? The more candidates that GPUs clear, the more meaningful each CPU LL is. :hamster:
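The "4% to 6% chance" point above can be sanity-checked with simple expected-value arithmetic. The sketch below uses illustrative assumptions (the P-1 cost figure and the ~123 GHz-day LL cost at the 58M wavefront are my placeholders, not project data):

```python
# Rough expected-value check for the "4% to 6% chance" point.
# All numbers are illustrative assumptions, not project figures.

p_factor = 0.05   # assumed chance a P-1 run finds a factor (post says 4-6%)
ll_cost = 123.0   # assumed GHz days for one LL at the ~58M wavefront
pm1_cost = 2.5    # assumed GHz days for a P-1 run with decent bounds

# A found factor saves the first-time LL *and* the eventual double check.
expected_saving = p_factor * 2 * ll_cost
print(round(expected_saving, 1))   # GHz days saved per P-1 run, on average
print(expected_saving > pm1_cost)  # worthwhile under these assumptions
```

Under these assumptions each P-1 run saves an expected ~12 GHz days for a few GHz days of work, which is why "give P-1 a chance" is a reasonable ask.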
[QUOTE=davieddy;336823]Cut the BS.
Chris's useful tables show clearly that TFing to 74 will not keep pace with LL completion ATM. Furthermore, the gap between TFing and the highest LL allocations should be larger for comfort: 1) Because the LL allocation front is advancing worryingly erratically. 2) At least give P-1 a chance. David[/QUOTE] I agree with 2); P-1 has always seemed an effective use of resources within recent memory. But the argument against TF to 74 seems to be, in essence, that maybe 73.7 or 73.8 bits would be more efficient. Given that the present arrangement of Primenet biases in favor of integer TF bit levels, could we accept that TF to 74 can, in some but not all cases, still be a positive contribution? I still fail to see that a sufficiently high exponent factored to 74 bits has been wastefully trial factored. (Let's keep this in perspective: TF at this level still only eliminates about 1.3% of the remaining exponents.) Granted, in an ideal world every exponent would be factored to 73 bits before starting on 74 bits, but in a volunteer environment, respecting reservations, we do what makes sense at the moment.

What would you suggest we do differently, David?
Two years ago I was being mocked for suggesting that
1) TF to the agreed feasible/optimal level should be done before exponents are ever allocated for LL.
2) It is sensible to TF to the agreed level in one assignment.

This is now standard GPUto72 practice. There is obviously no reason why a group of like-minded (read "stupid") GPU owners should not form a team and compete among themselves if they so wish. But there is no reason why Primenet shouldn't allocate TF for GPUs in the same way it allocates everything else, notably first-time LLs to Core2s or faster. It is the sprawling "one bit at a time" approach which has resulted in the late TFing of thousands of expos. Having two sources of TF allocation has to be the worst arrangement.

David
[QUOTE=davieddy;336827]Having two sources of TF allocation has to be the worst arrangement. [/QUOTE]
I'm surprised to read you writing that, considering past protests of GPUs taking all of the easy work away from the people with slow hardware. One of my favorite things about having two sources for trial factoring is that one (GPU72) marks work to a bit depth that is impractical for CPUs but fine for GPUs, while Primenet releases TF candidates clearly scaled for CPUs.
[QUOTE=Aramis Wyler;336837]I'm surprised to read you writing that, considering past protests of GPUs taking all of the easy work away from the people with slow hardware. One of my favorite things about having two sources for trial factoring is that one (GPU72) marks work to a bit depth that is impractical for CPUs but fine for GPUs, while Primenet releases TF candidates clearly scaled for CPUs.[/QUOTE]Phew!
A calmly written post at last. Lively debate is often helpful in resolving a soluble problem - a slanging match is not. The point you make is not really the one we are discussing ATM; I shall return to it later.

If 74 bits for exponents >63M were both feasible and worthwhile, I would say make it the default. However, we are unanimous that 74 bits for all expos >63M means that TFing will fall below the asking rate. IMO this alone should mean "end of story - no can do".

Although the "worthwhile/optimal" bit level is not very clear-cut, we all seem content that 57M was a sensible point to raise the level from 72 to 73. I am aware that I lost you here last time, but the bit level should be upped by one every time the exponent increases by a factor of 1.26: 1.26*57M ~ 72M, and 1.26[SUP]3[/SUP] = 2. When the exponent is doubled, the time for an LL increases 4-fold. The same goes for TFing three bits higher with the exponent doubled.

David
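David's 1.26 rule above can be written down as a tiny sketch. The 57M → 73-bit anchor point is taken from the post; the factor 1.26 is 2[SUP]1/3[/SUP], so doubling the exponent raises the suggested level by three bits. This is only an illustration of the scaling argument, not GIMPS's actual TF-depth policy:

```python
import math

# Assumed anchor point, taken from the post: the level was raised
# from 72 to 73 bits at 57M.
ANCHOR_EXPONENT = 57_000_000
ANCHOR_BITS = 73

def suggested_tf_bits(exponent: int) -> int:
    """Up the bit level by one each time the exponent grows by 2**(1/3) ~ 1.26.

    Rationale from the post: doubling the exponent quadruples LL time,
    and also quadruples the time to TF three bits higher.
    """
    steps = math.log(exponent / ANCHOR_EXPONENT, 2 ** (1 / 3))
    return ANCHOR_BITS + math.floor(steps + 1e-9)  # epsilon guards FP rounding

print(suggested_tf_bits(57_000_000))   # 73: the anchor itself
print(suggested_tf_bits(72_000_000))   # 74: 1.26 * 57M ~ 72M, as in the post
print(suggested_tf_bits(114_000_000))  # 76: doubled exponent, three more bits
```

The continuous rule lands the 73 → 74 transition near 72M, which is why davieddy argues 74 bits for everything above 63M runs ahead of the optimum.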
[QUOTE=bcp19;336815]If the answer to 5) is yes, then simple logic would indicate David's arguement is moot and we should continue LL-TF to 74 bits. Conversely, if the answer is no, then we need to extend the current 32 day grace period.[/QUOTE]
Actually, because of the way Primenet assigns work (and the way the participants complete it (or not)), your above analysis is almost impossible to do in the manner you describe. (As an aside, I don't really think the assignment methodology could be greatly altered, given that this is a volunteer effort, with the possible exception of assigning "Anonymous" work at the wavefront -- and expiring candidates which are over a year old.)

However, your questions got me thinking about how best to settle this "debate" once and for all, using quantitative data rather than hysterical ranting. Thank you for that.

Over the last [URL="http://www.mersenne.info/exponent_status_tabular_delta_30/1/0/"]30 days[/URL] Primenet's workers have completed approximately 9,100 LL assignments, or 303.4 a day. (The exact number is hard to determine for various reasons, but this is certainly accurate to within 1%.) The average completion is somewhere in the 58M range. So, going by [URL="http://www.mersenne.ca/credit.php?worktype=LL&exponent=58000000"]James' calculator[/URL], each one took on average 122.93 GHz days, for a Primenet average of ~37,297 LL GHz days / day.

I took this average and extended the [URL="https://www.gpu72.com/reports/estimated_completion/primenet/"]Estimated Completion[/URL] report to show both the number of days GPU72 is reasonably expected to take to TF, vs. how many days Primenet is reasonably expected to take to LL. The numbers speak volumes: ~191 days to TF everything below 66M appropriately; ~706 days to LL everything below 66M.

Now, then, based on this, the only real question is how much of a lead GPU72 has over the LL wavefront... Based on the data available last night at 0010 UTC (just before Primenet started recycling abandoned candidates), there were approximately 16,330 candidates below 65M available for assignment for LLing or P-1'ing, [B][I][U]or[/U][/I][/B] already assigned for P-1'ing (by both Primenet and GPU72).
Taking into account the work available, [B][I][U]and[/U][/I][/B] assigned for P-1'ing (read: already appropriately TFed -- reasonably expected to complete or be recycled), we are actually ~53.8 days ahead.

Or, executive summary: I remain comfortable that we can complete the goal of TFing >63M to 74 bits without hindering LL assignments in any way. And, in fact, we may be able to start going to 75 bits in a few months.

P.S. For clarity, while the Primenet LL average is only an approximation at the moment, the calculated estimate for both TFing and LLing is based on the actual number of GHz days required for each individual candidate.

P.P.S. I will be able to make the calculated 30-day average of Primenet's LL performance more accurate (it will be lower) and updated daily by adding a data-tap to Mersenne.info. As in, it will remain accurate over time.
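The arithmetic in the two posts above can be reproduced from their own stated inputs. Nothing below is measured independently; the small difference from the quoted ~303.4/day and ~37,297 comes only from rounding the "approximately 9,100" figure:

```python
# All inputs are the post's own numbers; nothing is measured independently.
ll_completed_30_days = 9_100  # "approximately 9,100" LL results in 30 days
ghz_days_per_ll = 122.93      # per James' calculator, ~58M exponent

ll_per_day = ll_completed_30_days / 30
print(round(ll_per_day, 1))   # ~303.3/day (the post's 303.4 used the exact count)

ll_ghz_days_per_day = ll_per_day * ghz_days_per_ll
print(round(ll_ghz_days_per_day))  # ~37,289 LL GHz days / day

# Lead time: already-TFed candidates below 65M (available, or out for P-1)
# divided by the daily LL completion rate.
tfed_candidates = 16_330
print(round(tfed_candidates / ll_per_day, 1))  # ~53.8 days
```

The ~53.8-day figure is simply the stock of appropriately-TFed candidates divided by the LL burn rate, which is the lead-time definition the post is using.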
[QUOTE=Aramis Wyler;336783]The case I make is such:[LIST][*]none of us are omniscient nor infallible,[*]there is disagreement among us with regard to our capacity to TF to 73 or 74 at this time,[*]therefore the matter can be in dispute.[/LIST][/QUOTE]
[QUOTE=davieddy;336851] If 74 bits for exponents >63M were both feasible and worthwhile, I would say make it the default. [B]However we are unanimous that 74 bits for all expos >63M means that TFing will be below the asking rate.[/B] IMO this alone should mean "end of story - no can do". [/QUOTE] [I]emphasis mine[/I]

We are not unanimous that 74 bits for all expos >63M means TF'ing below the asking rate. Chalsall's post listed details on why he believes that we can do 74. You may well believe he is wrong, [I]but that only means that the matter is in dispute.[/I]

The matter is in dispute. Chris thinks we can make 74 work, you think we can't, and I think it is possible but have doubts. We've got all the bases covered for opinions; we do not have consensus. I appreciate that you're trying to argue your point that you're right and Chalsall is wrong. I appreciate that Chalsall is trying to argue his point that he is right and you are wrong. In a perfect world that might have resolved the issue, but it has broken down into mudslinging, bitching, misdirection, and redirection.

All is not lost, though, because we have at least 27 days of lead time to play with. (27 days is my number, my opinion. I got it by subtracting 5 from the 'official' 32-day lead time, for reasons I'm not going to disclose right now.) It is prudent, [B]since the matter is clearly in dispute and argument will not resolve this dispute[/B], to let it play out for a few months and then review. We will only lose a small percentage of our lead time doing so. It will not hurt, and the venom can be removed from the forum for the time being.