#34
If I May
"Chris Halsall"
Sep 2002
Barbados
10011000110110₂ Posts
Quote:
Fewer than 300 LL candidates are LLed each day (on average). More than 300 LL candidates are TFed to at least 74 "bits" each day (on average), not counting those eliminated by TFing. QED.
#35
"Lucan"
Dec 2006
England
6474₁₀ Posts
Quote:
Your antics are a disgrace, and TF is a shambles. The correct remedy is simple, but it won't be as quick as you imply, and the situation gets worse the longer you delay. You are almost certainly impeding LL assignment. To put it in a nutshell, here is the situation as it could and should have been: a clear-cut maximum assignment level just above 64M; all exponents below 67M TFed to 73; TF to 74 well underway above 67M; fingers crossed that assignment will not overtake TF around 75M. Wait till TF has got a decent buffer at around 84M, then consult me about whether to TF to 75 yet. Since you have delayed so long, it will be around 70M before TF to 74 becomes wise. In short, take a long (preferably permanent) break, Chris, and leave that simple decision to someone who thinks about the question. Your arrogance is truly astounding. When Garo says bluntly "TF is clearly not keeping pace. Release all remaining 62/3M exponents immediately", just do it. David (justifiably infuriated).
#36
P90 years forever!
Aug 2002
Yeehaw, FL
19·397 Posts
Quote:
Assuming I understand your worldview and Chalsall's, I'll try to restate them. You can correct me if I've misunderstood.

David's world: 1) TF and LL assignments should not overlap. 2) If we had implemented David's ideas months ago, in today's ranges that would mean roughly that the bulk of LL testers would be testing in the 60-64M range with exponents that have been TF'ed to 73, while TF'ers would be taking the 64-68M range to 2^74. 3) Several months from now, LL'ers would be working on 64-68M and TF'ers would be in the 68-72M range. And so forth for the rest of time.

Chalsall's world: 1) TF and LL assignments are allowed to overlap, in an effort to give LL'ers exponents that have been TF'ed a smidge more. 2) This has resulted in today's status: the bulk of both LL'ers and TF'ers are getting exponents in the 60-68M range. 3) Several months from now, since TF firepower is only a smidge above what's needed, both the LL'ers and TF'ers will be in the 64-72M range. And so forth for the rest of time.

We can all agree that the LL tests in every range must be done eventually. So which worldview is better? The answer, in my opinion, is neither.

In David's world, the LL testers gain psychologically in knowing they are testing the smallest exponents available. Smaller exponents are faster and have a better chance of finding a new prime, thus perhaps keeping interest levels higher and attrition lower. On the other hand, these are rather small gains that are impossible to quantify.

In Chalsall's world, today's LL testers always get an exponent that has been TF'ed to 2^74 (note that in both David's and Chalsall's worlds, users get exponents TF'ed to 74 starting in several months with the 64-68M range). The downside is that LL testers and TF'ers are working in an overlapping range for the foreseeable future. This does reduce the total amount of LL work expended by GIMPS, because of the several dozen factors that were/will be found TFing to 74 in the 60-64M range.
If we wanted to adopt David's worldview, David is correct that any delay costs GIMPS. This is because doing the conversion means one 4M-wide range will be factored one bit less than optimal, and you'd rather that 4M range fall on smaller exponents, because LL tests on larger exponents are more expensive.
Quote:
Chalsall's disagreeing with you is not arrogance. His debate style, on the other hand, can be -- as can yours.
Quote:
I personally have no great preference between the two positions. I have told Chalsall that adopting his way was OK with me -- I am somewhat of an optimization nut. If you can convince me and others that your approach is superior in ways that I have missed, Chris and I will be happy to implement it.
#37
"Mr. Meeseeks"
Jan 2012
California, USA
878₁₆ Posts
Frankly, neither may be better, but chalsall and davieddy certainly don't think so...
#38
May 2013
East. Always East.
11×157 Posts
David: Between my last two posts there is in fact a good, thorough explanation of your position which somehow flew under my radar. You must have published it while I was writing my second one, and I missed it.
I'm sorry if it sounded like I was deliberately ignoring what you're saying. That's the opposite of what I wanted to convey.

Now, the way I understand what Chalsall says, there IS enough firepower to keep the LL tests fed AND take potshots at the lower range. I'm taking this to be true. "Show us the math" refers to your conjecture that there is NOT enough firepower. If we do in fact have enough firepower to handle both tasks, then it is guaranteed that all exponents will eventually get done, rather than having a few dozen thousand just sitting around forever. Given that, it only remains to be determined whether it is economical to continue trial factoring.

The whole "effort to TF vs. effort to LL" optimization has been made a bit trickier since GPUs got involved, is my understanding. I see that TF gets roughly 10 times the GHz-days of an LL test on the GPU, so I would have to conclude that we could increase the optimal amount of TF work to reflect this. I think a "wall-time" approach would be better, given the sheer amount of TF work a GPU can do. TF to 74 bits might not be optimal on a GHz-days basis, but we have so much more firepower because of the GPUs. We could carry on with the GHz-days approach, but we'd probably be a hundred thousand exponents ahead.

TL;DR: If we have the firepower to attack both the wave-front and the low 60's that we want to TF to 74, then all exponents will eventually get looked at. Do you have a problem with the notion that we do in fact have sufficient firepower? If not, do you have a problem with the idea that TF to 74 is inefficient?

I have two GPUs. In the last nine days, they together found 7 factors. Call it one factor every three days for one GPU. Instead, I could set my GPU up to do LL tests. At 20 GHz-days per day, it would take a week to find one conclusive result (I get about 150 GHz-days of credit per LL test I submit).
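That GHz-days trade-off can be put into a rough formula. The sketch below is my own formalisation of the folk rule of thumb, not official PrimeNet logic; the constants (a ~1/b chance of a factor per bit level, two primality tests saved per factor, the ~10x GPU speedup and ~150 GHz-days per LL test quoted above) are heuristics and quoted figures, not measured values.

```python
# Rough TF-vs-LL break-even check. Heuristic assumptions (not PrimeNet math):
# a Mersenne number has a factor in [2^(b-1), 2^b) with probability ~1/b,
# and each factor found saves two primality tests (first test + double-check).

def tf_worthwhile(tf_cost, ll_cost, bit_level, gpu_tf_speedup=1.0):
    """True if taking one exponent from bit_level-1 to bit_level is expected
    to save more GHz-days of LL work than the TF itself effectively costs.
    gpu_tf_speedup discounts TF cost for hardware that does TF faster."""
    expected_saving = (1.0 / bit_level) * 2.0 * ll_cost
    return tf_cost / gpu_tf_speedup < expected_saving

# With ~150 GHz-days per LL test, a 73->74 step costing ~5 GHz-days is not
# worth it in raw GHz-days terms, but clearly is on a GPU doing TF ~10x
# more efficiently:
print(tf_worthwhile(5.0, 150.0, 74))                       # False
print(tf_worthwhile(5.0, 150.0, 74, gpu_tf_speedup=10.0))  # True
```

Under these assumptions, moving TF to GPUs shifts the break-even depth up by roughly log2(10) ≈ 3.3 bits, which is the intuition behind the "wall-time" argument.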
My GPU can find one result per three days (TF to 74 bits) or one result per week (LL). Which one should I pick, David?

Last fiddled with by TheMawn on 2013-09-29 at 03:51
#39
"Lucan"
Dec 2006
England
2×3×13×83 Posts
Quote:
If we had the firepower, it would be worthwhile for a GPU to TF to 75 or maybe even 76 bits before even thinking of using it for LL. What we are getting so exercised about is the matter of what bit level makes most sense for the benefit of GIMPS. Although people are free to do anything they like, many opt for "What Makes Sense".

Now, Chalsall is not sensible, and is anyway biased by his wish to promote his GPUto72 mob's achievements and potential at all costs. George is basically occupied with attempting to keep the peace and, of course, improving Prime95. That really leaves me. There are of course several people who could point out any shortcomings, but as you noted in your first sentence, my determination to get my simple TF strategy across is thwarted by the withholding of my posts until they are out of date, and by Chalsall's near-fanatical determination to do the opposite of what I suggest and humiliate me with all means at his disposal, which are far too many; his conduct is unbecoming of a supermod.

As a very assiduous and objective observer of the project and a first-class physicist of 60 years' experience, none should dispute my conclusions without very careful consideration. This is where Chris and I diverge the most. I can spot how he plays fast and loose with statistics, and as I am sure some of you are aware, this practice is anathema to any physicist worth his salt. Most emphatically, I dispute his "broken record" assertion that TF to 74 is miraculously precisely keeping up with demand for LL assignments. This could be simply tested if (as he should have done months ago) he released all exponents TFed to 73. The blindingly obvious explanation for why the reserve of exponents TFed to 74 bits remains negligible is that assignments would be faster if they were not held up by TF, and it is easy to see why this would happen: folk are not keen to take on 67M exponents ATM. TF should be clear of all assignments.
And for the umpteenth time: since most of the TF is 73 to 74 bits, why on earth does he think, and persistently assert, that TF from 71 to 74 will keep pace? This is a blatant lie, and we are deep in "The Emperor's New Clothes" territory.

A 50,000 buffer is what I estimated before we consider any further TF to 74. The time that takes is really irrelevant, but since we won't go to 75 bits until we reach ~85M, four months is not much. I know Chalsall tries to mock me for this 85M estimate, but that's what 68M×1.26 is. What is ridiculous, and suggests he is clueless, is his suggestion that we could TF 65M exponents to 75. What is "sub-optimal" is having to TF exponents >65M to 73 while lower ones were misguidedly TFed to 74. It's just like the 15,000 exponents >54M TFed to less than 72 while a similar number <53M were TFed to 72. Since 72 (as in GPUto72) was chosen as the level to which all exponents >54M could comfortably be taken (I know, because I originated the idea), I have not forgiven Chris for this sub-optimal state of affairs, and he still hasn't learned the lesson.

Delegate the theory to me.

David
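For what it's worth, the 1.26 here is 2^(1/3). A back-of-envelope model (my reading of the standard argument, not anything stated explicitly in the post): TF to bit b costs on the order of 2^b/p, since candidate factors have the form 2kp+1, while an LL test costs on the order of p², so holding the TF/LL cost ratio fixed gives 2^b ∝ p³, and each extra bit of TF pays for itself when the exponent grows by a factor of 2^(1/3) ≈ 1.26.

```python
# Bit-level thresholds scaling by 2^(1/3): if TF to 74 becomes sensible
# around 68M (David's figure), TF to 75 follows at ~68M * 2^(1/3).
CUBE_ROOT_2 = 2 ** (1 / 3)          # ~1.2599, the "1.26" in the post

threshold_74 = 68_000_000
threshold_75 = threshold_74 * CUBE_ROOT_2
print(f"{threshold_75 / 1e6:.1f}M")  # ~85.7M, matching the ~85M estimate
```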
#40
"Lucan"
Dec 2006
England
2·3·13·83 Posts
Quote:
Phew! A hint of levity in the proceedings. Me too, and that goes back to my assembly programming days. Although I was regarded as something of a wizard, I would not dream of competing with you in that sphere now. However, I've got a novel way of viewing TF strategy which may convince you that my way is the best, although if it does, you may have to work hard to convince Chris.
Quote:
There is an exponent x where the TF-to-73 and LL assignment rates are equal. At exponent x·2^(1/3) we expect the same to apply to TF to 74. Now, Philmoore came up with the simple suggestion that we could keep pace with LL between these exponents by TFing a mixture of 73 and 74 bits, in which the proportion of 74s increases from 0 to 1 as the exponent goes from x to 1.26x. (He was trying to suggest that this was optimal, and Chris was on the right track.)

However, sensible and plausible as the idea sounds, it is very easy to refute. We have a mix of 73s and 74s throughout the range. Take all the 73s and pack them into the bottom of the range, leaving all the 74s at the top. I'll leave it to you to see why this wins on all fronts. Notably, a sizeable gap will have opened up at the midway point. CONVINCED?

Now I just have to hope that this actually gets posted before I pop my clogs.
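The rearrangement argument can be checked with a toy calculation. The model below is my own formalisation with made-up ranges, not GIMPS production data: stopping an exponent at 73 instead of 74 bits forfeits an expected LL saving of about (1/74)·2·cost(LL), and cost(LL) grows roughly as p², so the forfeited saving is smallest when the 73-bit exponents are the small ones.

```python
import random

def forfeited_saving(exponents_stopped_at_73):
    """Expected extra LL work (arbitrary units) from TFing these exponents
    to only 73 bits: factor probability ~1/74 in the skipped bit level,
    times two tests saved, times an LL cost growing like p^2."""
    return sum((2.0 / 74.0) * (p / 1e6) ** 2 for p in exponents_stopped_at_73)

random.seed(1)  # deterministic toy data
expos = sorted(random.sample(range(68_000_000, 85_000_000), 1000))
half = len(expos) // 2

mixed = random.sample(expos, half)   # 73s scattered across the whole range
packed = expos[:half]                # 73s packed into the bottom of the range

assert forfeited_saving(packed) < forfeited_saving(mixed)
print("packing the 73s at the bottom forfeits the least LL work")
```

This is an instance of the rearrangement inequality: pairing the shallower TF depth with the cheapest LL tests minimises the expected waste.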
#41
Romulan Interpreter
Jun 2011
Thailand
2⁶×151 Posts
You all missed the most important part: the firepower of the GPU72 team.

Most of the "big guns" there contribute only some percentage of their firepower to GIMPS (for various reasons: they want to avoid high electricity bills, their hardware is busy with real-life work, etc.), and of that, only part goes to TF. We proved in the past what a "sustained marathon" can do. When the well-TFed exponents become scarce, people will "buy" them, like in forex... hehe... I personally haven't done any TF for months... And the winter is coming... (i.e. cool outside, we need some firepower to heat our bedrooms...)... All this discussion and fear is just a hurricane in a glass of water...

edit: Don't feed the trolls! (They usually pick on newcomers; be careful and don't fall into the trap!)

Last fiddled with by LaurV on 2013-09-30 at 02:53
#42
May 2013
East. Always East.
3277₈ Posts
I was wondering who would break the silence. For a minute there, I had hoped... but no. It would have been wonderful to hear from him again.
#43
If I May
"Chris Halsall"
Sep 2002
Barbados
2×67×73 Posts
Quote:
David, you come across a bit like a theoretical physicist who never considers the empirical evidence (or, even worse, rejects it because it doesn't fit the theory)...

To say it again: this report shows that we can take everything between 60M and 67M to 74 bits in ~220 days, while it's going to take PrimeNet ~573 days to LL everything in that range. Please note that this table INCLUDES the work needed for every bit level, including 71->72, 72->73 AND 73->74.

This is a real-time report, taking into account the number of candidates to process, the cost of the work needed for each individual candidate, and the amount of firepower both GPU72 and PrimeNet have had available over the last 30 days. It is updated five times an hour (every 10 minutes except at the top of the hour).

It is clear FROM THE EMPIRICAL EVIDENCE that the GPU72 sub-project can do what we've proposed. If the situation ever changes (e.g. PrimeNet suddenly more than doubles its available firepower, or GPU72 suddenly loses a great deal of its own), we can quickly adapt; in addition to the automatic safety valves, I personally review the situation at least once a day.

Last fiddled with by chalsall on 2013-09-30 at 14:11 Reason: s/this graph/this report/
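For concreteness, here is the headroom the quoted figures imply (the report's internals are described, not shown, so this only restates the arithmetic):

```python
# Quoted figures: ~220 days for GPU72 to take 60M-67M to 74 bits, versus
# ~573 days for PrimeNet to LL the same range, at the 30-day average rates.
tf_days = 220.0
ll_days = 573.0

headroom = ll_days / tf_days
print(f"TF finishes ~{headroom:.1f}x faster than LL consumes the range")  # ~2.6x
```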
#44
May 2013
East. Always East.
11×157 Posts
Quote:
Quote:
And that comment I made... it rather disgusts me. It was unnecessarily smug of me. I apologize.
Quote:
Quote: