[QUOTE=kladner;408532]I imagine Davieddy rubbing his hands and muttering, "Fools! I told you so!" This, and related issues were always favorite hobby horses of his. :davieddy:[/QUOTE]
I don't really know much about other distributed computing projects. Do any other ones require that new systems "prove themselves worthy" in some way before they can get cooking on important stuff? I'm just thinking to myself that 3-4% (or even if it's just the known 1-2%) is a pretty high error rate for any other endeavor... it wouldn't be tolerated in many situations, and it's just a good thing that we double check our work (and hopefully not double checking our own work) :smile:
I think rather than punish the vast majority who have good systems, it would be best to prioritize double checks of work done by systems with no prior matching result. That could be done by giving those assignments in priority to the Cat 4 DC workers who have completed at least one assignment with a matching residue. Let the new Cat 4 DC'ers get assignments from machines with at least one good result, to give them the best chance of turning into proven workers. Errors will still happen, but this strategy would probably cut down on the quadruple checks.
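To make the routing rule above concrete, here's a minimal sketch in Python. Everything here is hypothetical: the field names (`matched_residues`, `first_tester_proven`) and the queue structure are made up for illustration and don't reflect PrimeNet's actual schema, just the pairing logic described: proven DC'ers check work from unproven machines, and new DC'ers check work from machines with a track record.

```python
def pick_dc_assignment(worker, queue):
    """Return a double-check candidate for this worker.

    Proven workers (>= 1 matching residue) are steered toward exponents
    whose first LL test came from an unproven machine; unproven workers
    get exponents first-tested by a machine with at least one good result.
    Falls back to the whole queue if the preferred pool is empty.
    """
    if worker["matched_residues"] >= 1:
        pool = [e for e in queue if not e["first_tester_proven"]]
    else:
        pool = [e for e in queue if e["first_tester_proven"]]
    pool = pool or queue  # nothing in the preferred pool: take anything
    # Hand out the smallest exponent first (arbitrary tie-break for the sketch).
    return min(pool, key=lambda e: e["exponent"]) if pool else None
```

The point of the pairing is that every completed assignment involves at least one machine with a demonstrated good result, so a mismatch immediately implicates the unproven party.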
[QUOTE=Madpoo;408574]I don't really know much about other distributed computing projects. Do any other ones require that new systems "prove themselves worthy" in some way before they can get cooking on important stuff?[/QUOTE]
In many other DC projects you can verify the solution once you have it. No, I'm not talking about bitcoin, hehe, but for example, think about Foldit: it takes ages to roll that protein around itself, but once you've done it, the solution is plain and clear, and easily verifiable. Nothing like a Lucas-Lehmer test...
[QUOTE=Madpoo;408529]
Some other thread has my gory details about getting GMP-ECM working well with having Prime95 doing stage 1 and feeding that to GMP-ECM. [/QUOTE] [URL="http://mersenneforum.org/showthread.php?t=20092&page=4"]http://mersenneforum.org/showthread.php?t=20092&page=4[/URL]
[QUOTE=airsquirrels;408546]What is the current stat for how many double checks are started and never completed? I'm not sure credit would be as important for DC and the whole project would benefit from all the work of churners who abandon the current low granularity work units.[/QUOTE]
I haven't done a deep (read: highly accurate) query on that in a few months, but approximately 97% (±1%) of assignments to new users are never completed.
[QUOTE=Madpoo;408530]Personally I think I'd force new accounts to do one or two double-checks first before they could do any first-time checks.[/QUOTE]
And that's now the case -- ever since George moved the Churners down to the DCTF Cat 4 range. |
[QUOTE=chalsall;408621]I haven't done a deep (read: highly accurate) query on that in a few months, but approximately 97% (+- 1%) of assignments to new users are never completed.[/QUOTE]
Is there data available to query how much work that is, and how far those users get into it before abandoning? I'm curious whether 97% of new users abandon their assignments but the lost work is only 1% or less of our throughput, or whether it is more significant.
[QUOTE=airsquirrels;408635]Is there data available to query how much work that is and how far those users get into it before abandoning?[/QUOTE]
Not easily available over a large temporal domain, but doing a query against [URL="http://www.mersenne.org/assignments/?exp_lo=37700000&exp_hi=40000000&execm=1&exfirst=1&exp1=1&extf=1"]Primenet like this[/URL] might give you a reasonable idea as to what we face. [QUOTE=airsquirrels;408635]I'm curious if it's 97% of new users abandon their assignments/gimps but the lost work is only 1% or less of our throughput or if it is more significant.[/QUOTE] Please note that I said 97% of assigned work, not 97% of new users. I've learnt (the hard way, over many years) that language is important.... :smile:
A [URL="http://www.mersenne.org/report_exponent/?exp_lo=37769773&full=1"]random example[/URL] from Chris's query: this was dropped many times and is not yet completed, and well... not exactly randomly picked, I cheated a bit. I picked it because it says over 23000 (!) days till completion, so there is actually no chance that it will be completed this time either. So, expect another drop... :smile:
[QUOTE=Prime95;408406]There is a downside to implementing this. The quality of the result is only as good as the worst computer to work on the LL test. If an overclocker puts a few million low quality iterations in, then a highly reliably machine may waste tens of millions iterations finishing the LL test.[/QUOTE]
Maybe use a points system? The more points an exponent file has, the less trusted it is. And when you start an exponent fresh, that's zero points, which is considered best. And then apply a simple math problem along with the predicted time it would take a new machine to complete from scratch, and voila, I've solved a problem in my mind that would probably take 100s of man hours to implement. (Sorry, realized at the end how cheeky I sounded. But a trustworthiness algorithm, even a bad one, would be cool.)
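A toy version of that points idea might look like this. To be clear, the weights, thresholds, and field names below are all invented for illustration; this is just one way to make "the more points an exponent file has, the less trusted it is" computable: each machine that touched the exponent contributes distrust points based on its own track record, and a fresh exponent (or one touched only by trusted machines) scores zero.

```python
def distrust_points(machine):
    """More points = less trusted.

    A machine with no matching double-check residues is treated as high
    risk; a proven machine's points scale with its observed error rate.
    The constant 10 is an arbitrary weight for this sketch.
    """
    if machine["matched"] == 0:
        return 10  # unproven: assume the worst
    error_rate = machine["bad"] / (machine["matched"] + machine["bad"])
    return round(10 * error_rate)

def exponent_score(contributors):
    """Trust score for an exponent file: sum over every machine that
    worked on it. Zero (fresh start, or only clean machines) is best."""
    return sum(distrust_points(m) for m in contributors)
```

A scheduler could then prefer finishing low-score files and send high-score files straight to an early double check, which is roughly the "quality is only as good as the worst contributor" concern from the quoted post, turned into a number.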
I think madpoo and chalsall, among others, have put a lot of effort into defining reliability, and detecting it, or the lack thereof.