I think that's a brilliant idea.
|
[QUOTE=chalsall;396322]Thanks George! Very helpful for the OCDs amongst us who don't have access to the raw DB. :smile:
May I suggest we start a discussion (either here, or on the original "new assignment rules" thread) about increasing the deltas on the different ranges for DC? I'd initially suggest something like Cat 1: 5000, Cat 2: 15000, Cat 3: 50000. The whole point of the new rules was to lessen the motivation for "poaching". Clearly this and other examples show that Cat 3s may slide into the Cat 1 range well before the candidates are due for recycling, thus unintentionally creating a bottleneck. Actually, taken to the extreme, there could be an interesting self-reinforcing situation wherein almost the entire Cat 1 range is actually old Cat 2s and 3s, while the fast / dedicated machines mostly clear out Cat 2s and 3s. We should really aim for a balance such that when a candidate moves into Cat 1 it's just about to be recycled by the system. Thoughts?[/QUOTE] A useful step could be to recycle category 3 (and category 4) assignments that moved into category 1 and have not been started. (Of course after communicating with the server to make sure that no work has been done, and not immediately after they move into category 1.) |
[QUOTE=chalsall;396322]May I suggest we start a discussion (either here, or on the original "new assignment rules" thread) about increasing the deltas on the different ranges for DC? I'd initially suggest something like Cat 1: 5000, Cat 2: 15000, Cat 3: 50000.
The whole point of the new rules was to lessen the motivation for "poaching". Clearly this and other examples show that Cat 3s may slide into the Cat 1 range well before the candidates are due for recycling, thus unintentionally creating a bottleneck. Actually, taken to the extreme, there could be an interesting self-reinforcing situation wherein almost the entire Cat 1 range is actually old Cat 2s and 3s, while the fast / dedicated machines mostly clear out Cat 2s and 3s. We should really aim for a balance such that when a candidate moves into Cat 1 it's just about to be recycled by the system. Thoughts?[/QUOTE] Good idea. Since cat 2 users have 100 days and cat 1 users have 60 days, I don't think the extra 40 days will cause much poaching concern. Cat 2 is for users with slower but still relevant computers. We don't want to make them feel unwanted by denying them access to too many of the smallest exponents (or maybe I worry too much?). Recommend the first number be moved to 2000 or 2500.

Cat 3 with the 240 day expiration will unfortunately present a poaching target when it slips into cat 1 territory. Change the second crossover to 15000 or 20000 (or more?). Cat 4? Not sure if that needs changing or not. I thought we chose 40000 because we were completing 40000 DCs a year.

LL testing. We have the opposite problem, big swaths of unclaimed exponents. Change from 5000/15000 to 2500/7500??? If curtisc ever activates the trusted user bit, then we have some big adjustments to make! |
If DC categories are changed, it should be done gradually so there is enough DCTF capacity to keep up with the shifting cutoff.
It might make sense to put category 4 work two or three or more years out, and to allow as many years to complete, so slow computers have a good chance of finishing work before the exponents are recycled after falling into the first category. Fast computers could quickly move into lower category work, so doing so won't delay verifying primes much if at all, but it will better utilize the slower computers than having their work recycled. |
[QUOTE=Prime95;396337]Cat 3 with the 240 day expiration will unfortunately present a poaching target when it slips into cat 1 territory. Change the second crossover to 15000 or 20000 (or more?).[/QUOTE]
240 days for Cat 3 seems ok but why: "Must be started within 180 days". Why are they allowed 180 days before they even have to start? Wouldn't 60 or 90 days be more than enough? |
[QUOTE=Prime95;396337]LL testing. We have the opposite problem, big swaths of unclaimed exponents. Change from 5000/15000 to 2500/7500???[/QUOTE]
Makes sense to me. [QUOTE=Prime95;396337]If curtisc ever activates the trusted user bit, then we have some big adjustments to make![/QUOTE] Indeed. |
[QUOTE=Mark Rose;396344]If DC categories are changed, it should be done gradually so there is enough DCTF capacity to keep up with the shifting cutoff.[/QUOTE]
Relax... We're well ahead of the DC'ers. But, if you want to help more, take some of 40M to 72. |
2 Attachment(s)
I used Excel to plot the cat boundaries at 2-week intervals starting March 1, 2014.
|
[QUOTE=ATH;396345]Why are they allowed 180 days before they even have to start? Wouldn't 60 or 90 days be more than enough?[/QUOTE]
Manual assignments. |
[QUOTE=cuBerBruce;396357]I used Excel to plot the cat boundaries at 2-week intervals starting March 1, 2014.[/QUOTE]
Sweet! Thanks for that. Sincerely. |
From what I can tell, Cat1 jobs must finish in 60 days and Cat3 jobs must finish in 240 days.
The reason Cat2 exists is to provide a step between Cat3 and Cat1 because THE smallest Cat3 job would end up in Cat1 within hours and this is not ideal because while it is an assignment we WISH to have finished in 60 days, it must have the promised 240 days instead. This is only an issue if a Cat3 assignment manages to slide all the way into Cat1. As long as this takes no less than 180 days to occur, we're golden. What is the current 180-day throughput of Cat1 + Cat2? That is what the Cat3 bound should be. |
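TheMawn's sizing rule above can be written down directly. This is only a sketch of the idea; the throughput figure used in the example is purely hypothetical (the real number would have to come from the server's stats):

```python
# Sizing rule from the post: set the Cat 3 lower bound to the number
# of exponents Cat 1 + Cat 2 users clear in 180 days, so that no
# Cat 3 assignment can slide all the way into Cat 1 in under 180 days.
# The throughput value passed in below is purely illustrative.

def cat3_bound(daily_throughput, days=180):
    """Exponent count Cat 1 + Cat 2 would clear in `days` days."""
    return daily_throughput * days
```

At a made-up 100 exponents cleared per day, `cat3_bound(100)` comes out at 18,000 — in the same ballpark as the 15,000-20,000 crossover George floated.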
[QUOTE=cuBerBruce;396357]I used Excel to plot the cat boundaries at 2-week intervals starting March 1, 2014.[/QUOTE]
I made the same graphs in a spreadsheet, by copying the data every now and then over the past year. The idea was to see how the new rules would lead to a stable and predictable handing out of assignments over time, after cleaning up the debris left behind by the old rules. As you can see, for DC assignments that happened in September; for first time LL tests it will take until Cat 1 assignments reach about 61M. |
[QUOTE=TheMawn;396371]This is only an issue if a Cat3 assignment manages to slide all the way into Cat1. As long as this takes no less than 180 days to occur, we're golden.[/QUOTE]
Sigh... Read the language... Up to 240 days. |
[QUOTE=tha;396377]I made the same graphs in a spreadsheet, by copying the data every now and then over the past year. The idea was to see how the new rules would lead to a stable and predictable handing out of assignments over time, after cleaning up the debris left behind by the old rules. As you can see, for DC assignments that happened in September; for first time LL tests it will take until Cat 1 assignments reach about 61M.[/QUOTE]
I actually read those graphs differently. To me they clearly show that the DC parameters are short, and the LL parameters are long. Ideally all lines should approximately follow each other with a constant gap. Clearly they don't. |
[QUOTE=chalsall;396378]Sigh... Read the language...
Up to 240 days.[/QUOTE] You misunderstand. I get that the assignment is recycled once "Moved into Cat1" and "240 Days Old" are met. That's what I mean when I say that they get the promised 240 days regardless of everything else. Here's an example of what I meant. Imagine throughput is so impressive that a Cat3 assignment moves into Cat1 in 1 day. That's an issue because the exponent WOULD be assigned to someone who can complete it quickly but it must be given the promised 240 days to complete, so there is a holdup. That's what we're trying to avoid by increasing the Cat3 bound. However, if the throughput is such that a Cat3 assignment moves into Cat1 after 180 days, it's not an issue because it must complete within the remaining 60 days, and that is NOT a holdup because the Cat1 time limit is 60 days anyway. 240 - 180 = 60. |
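The interplay of the two clocks being argued about here can be sketched in a few lines. This is my own illustrative model of the rule as described in the thread, not the server's actual code:

```python
# Illustrative model of the recycle rule under discussion: a Cat 3
# assignment is recycled only once it has BOTH slid into the Cat 1
# range AND used up its 240-day limit.

CAT1_LIMIT_DAYS = 60
CAT3_LIMIT_DAYS = 240

def days_of_holdup(days_until_cat1):
    """Extra days a Cat 3 assignment may occupy the Cat 1 range,
    compared with a fresh Cat 1 assignment's 60-day limit."""
    remaining = CAT3_LIMIT_DAYS - days_until_cat1
    return max(0, remaining - CAT1_LIMIT_DAYS)

# Sliding into Cat 1 after only 1 day leaves 239 days on the clock,
# a 179-day holdup; sliding in after 180 days leaves exactly 60,
# i.e. no holdup at all (240 - 180 = 60).
```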
[QUOTE=TheMawn;396383]You misunderstand. I get that the assignment is recycled once "Moved into Cat1" and "240 Days Old" are met. That's what I mean when I say that they get the promised 240 days regardless of everything else.[/QUOTE]
I don't think I misunderstood. But, as always, I am happy to be proven wrong. :smile: [QUOTE=TheMawn;396383]However, if the throughput is such that a Cat3 assignment moves into Cat1 after 180 days, it's not an issue because it must complete within the remaining 60 days, and that is NOT a holdup because the Cat1 time limit is 60 days anyway.[/QUOTE] What if a Cat 3 moves into Cat 1 after less than 180 (or 240) days? This is my fundamental point. |
[QUOTE=chalsall;396384]What if a Cat 3 moves into Cat 1 after less than 180 (or 240) days?
This is my fundamental point.[/QUOTE] [QUOTE=TheMawn;396371]As long as this takes no less than 180 days to occur, we're golden.[/QUOTE] That is my point, too. Your suggestion is that we increase the Cat3 bound so that Cat3's don't move into Cat1 too quickly, and I agree with it. I was simply suggesting what the Cat3 bound ought to be. |
[QUOTE=TheMawn;396392]That is my point, too.[/QUOTE]
Sorry. I misunderstood. |
[QUOTE=TheMawn;396383]240 - 180 = 60.[/QUOTE]
I understand your math -- and that was my first inclination. You basically are pointing out that a 180-day cat 3 sliding into cat 1 would have the same 60 days to expire as a brand new cat 1 assignment. The difference is that a poacher looking at the active assignments would see a small exponent that is 180+ days old and might be tempted to poach it. |
[QUOTE=ATH;396345]240 days for Cat 3 seems ok but why: "Must be started within 180 days". Why are they allowed 180 days before they even have to start? Wouldn't 60 or 90 days be more than enough?[/QUOTE]
Maybe. I don't remember my rationale for the 180 day number. One reason might be user machines going offline for a while. I recently left 2 machines running at our summer home for 100 days of unattended operation without an internet connection. I had to manually extend the expiration dates, so even this case might be handled some other way. The server knows which assignments were made manually, so we can always give those assignments the full time period. Perhaps we should monitor how frequently this occurs, or perhaps we shouldn't worry about it: there are generally plenty of cat 3 assignments, they aren't holding up milestones, and they are unattractive poaching candidates. |
[QUOTE=chalsall;396384]...
What if a Cat 3 moves into Cat 1 after less than 180 (or 240) days? ...[/QUOTE] This reminds me of the Robert Heinlein book "Time for the Stars". At the risk of spoiler alerts... In a nutshell (heavily summarized), a group travels to a distant planet at sub-light speed. Of course in the time it takes them to complete a long voyage like that, the folks back on Earth develop faster-than-light travel. Taking into account relativistic effects, for the folks on Earth the first ship left decades ago, but the new FTL ship arrives at the same location in under a month.

What I'm getting at is that we have all of these current systems that may bite off a particularly juicy exponent that will take years to complete, but it would just figure that by the time it's almost done, the latest silicon can probably complete the same thing in a few days. :) It's things like that that personally make me hesitant to bother with any 100M exponents currently (although I wonder how long a 20-core machine would take to do one of those...) I'd rather focus on work that today's silicon can complete in a few months rather than a few years. Just for myself anyway, I wouldn't bother setting any of my computers to work on something that took more than 12 months tops, and more like 1-2 months max (if we're talking about a single-threaded test). But that's me. |
FYI: As of the time of this post there are 856 / 2860 / 31008 cat1/2/3 DCs available.
And 4114 / 5571 / 52531 cat1/2/3 first LLs available. These numbers are likely somewhat misleading as some of the recycle rules don't apply until an exponent falls into the cat 1 category. |
[QUOTE=Madpoo;396399]It's things like that that personally make me hesitant to bother with any 100M exponents currently (although I wonder how long a 20-core machine would take to do one of those...)[/QUOTE]
[URL="http://www.mersenneforum.org/showthread.php?t=20006"]http://www.mersenneforum.org/showthread.php?t=20006[/URL] [QUOTE=aurashift;396211]-I *think* the 24 core blades will finish a single 100,000,000 digit exponent in 60 days. Is that good? I can't remember if that was with 12 threads, 23, or heck, maybe I was running it on the 40.[/QUOTE] |
[QUOTE=ATH;396401][URL="http://www.mersenneforum.org/showthread.php?t=20006"]http://www.mersenneforum.org/showthread.php?t=20006[/URL][/QUOTE]
I'm actually installing a couple of new 20-core machines tomorrow (the ones I was burning in the last couple weeks). If possible I may see if I can get a good enough estimate of the timeframe for such an exponent before I put it into production use. I have about 24 hours onsite for a final burn-in which should be enough to get a good estimate. EDIT: I just ran a test on my existing machine for the basic timings... posted in that thread. |
[QUOTE=chalsall;396382]I actually read those graphs differently.
To me they clearly show that the DC parameters are short, and the LL parameters are long. Ideally all lines should approximately follow each other with a constant gap. Clearly they don't.[/QUOTE] The first time LL frontline had more debris from the old rules than DC. Given enough time, things will stabilize. When it has stabilized we can judge what the rolling averages for each Cat will be and whether or not they need to be adjusted. |
[QUOTE=tha;396429]The first time LL frontline had more debris from the old rules than DC. Given enough time, things will stabilize.[/QUOTE]
I hear what you're saying. But I would argue that the LL range had "more debris" (love that term! :smile:) mostly because of the churners. Almost certainly ~99.9% of that has already been filtered out (because the assignments have not been reported on). I think George's newly proposed ranges for the categories make a lot of sense -- trusted (read: Cat 1 and 2) DC'ers are currently facing a famine; trusted LL'ers are being given more than they can eat. |
That sounds like it might be time for me to lay off the DCs and do more 1st time LLs.
That brings up the question of P-1 work. How well is that demand being served? |
[QUOTE=kladner;396497]
That brings up the question of P-1 work. How well is that demand being served?[/QUOTE] I was wondering about that myself. I've got one core of my i7-5930K working on P-1 right now with loads of memory since I'm not using the system for much right now, but it's working on 71M P-1 so I'm guessing we have a comfortable lead. |
[QUOTE=Madpoo;395954]A little more detail on that particular bad computer...
- Of the 109 bad results, 56 of those had a zero for the error code.
- Of the 43 non-bad results:
  - 4 are still unverified, awaiting a double-check
  - 17 are verified okay (double-check matched)
  - 21 are suspect - some error code, but they haven't been double-checked yet
  - 1 had a factor found later, so who the heck knows, but there were 2 mismatched LL tests done... I have my guess which one was bad :smile: [/QUOTE] An update... I've been running checks on the unverified/unassigned things and as expected the original results are indeed turning out to be bad (well, I think one checked out okay). There was also another user with a very bad computer that returned a bunch of results... I'm checking those as well and so far I think I'm at 2 for 2 bad in that original run. |
[QUOTE=TheMawn;396506]I was wondering about that myself. I've got one core of my i7-5930K working on P-1 right now with loads of memory since I'm not using the system for much right now, but it's working on 71M P-1 so I'm guessing we have a comfortable lead.[/QUOTE]
Actually, things are a bit "tight" for P-1'ing. I've got one of my spiders watching the Cat 4 range (for both LL'ing and P-1'ing). If it sees anything about to be handed out for P-1'ing at less than 74 it "pulls its rip-cord" to release some at 74. At the same time, if anything is about to be handed out for LL'ing at less than 75 (usually a completed P-1 at 74) it grabs it for processing. This is why "Let GPU72 Decide" is sometimes assigning 66M (Cat 3) to 75, and at other times >71M to 75. |
[QUOTE=Madpoo;396520]An update... I've been running checks on the unverified/unassigned things and as expected the original results are indeed turning out to be bad (well, I think one checked out okay).
There was also another user with a very bad computer that returned a bunch of results... I'm checking those as well and so far I think I'm at 2 for 2 bad in that original run.[/QUOTE] Good job. Thank you for doing that. |
[QUOTE=TObject;396586]Good job. Thank you for doing that.[/QUOTE]
Yeah, it's kind of fun finding the bad results. Just got another one... I think I'm at 6 out of 7 now (where the first run by someone was bad). [URL="http://www.mersenne.org/report_exponent/?exp_lo=45133499&full=1"]http://www.mersenne.org/report_exponent/?exp_lo=45133499&full=1[/URL]

I was meaning to do some kind of analysis and see how many exponents in an average 1M range of exponents ended up being bad. Someone may have already done it, but I should be able to poke the DB and tease that info out of it. For what I'm doing, I found something like a couple dozen exponents that were highly suspicious over a range of 44M-60M+, so I know that's only a drop in the overall bucket of how many will actually end up being bad, but I wonder by how much.

I'm looking at machines that have done "XX" LL tests where "YY" percent ended up being bad. That kind of limits it to older systems whose work has already been assigned as double-checks, but quite a few of them also did stuff higher up that hasn't been DC'd yet, or even assigned as DC's (I'm only doing unassigned exponents).

It's probably not crazy to think that even if a computer has returned a single bad result, anything else coming out of it has a much higher chance of also being bad. Maybe even to the point where if a machine has one bad entry, it might be worth assigning all of its exponents as double-checks right away. After all, in my analysis, I'm looking for computers with a certain bad percentage out of their total results. But even if a machine has a low percentage like 10% bad, that may only reflect the fact that not many have been DC'd yet... the final percentage of bad results could be near 100% by the time they're all checked. The worst offender I'm just about done working on had something like 70%+ bad results, but of the 5 or so I'm checking, I think all but one was bad, which probably puts that computer closer to a 90% failure rate when all's said and done. |
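The projection reasoned through in the last paragraph can be sketched roughly. All numbers here are illustrative, not taken from the actual database:

```python
# Rough sketch of the projection: a machine's observed bad rate only
# counts results that have already been double-checked, so its
# eventual bad rate can be far higher. Numbers are illustrative.

def projected_bad_rate(bad, checked, unchecked, unchecked_bad_rate):
    """Blend verified results with an assumed failure rate for the
    still-unchecked ones."""
    expected_bad = bad + unchecked * unchecked_bad_rate
    return expected_bad / (checked + unchecked)

# A machine showing "10% bad" (1 bad of 10 checked) with 90 results
# still unchecked projects to 91% bad overall, if the unchecked work
# fails at the rate the spot-checks suggest.
```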
[QUOTE=Madpoo;396642]The worst offender I'm just about done working on had something like 70%+ bad results, but of the 5 or so I'm checking, I think all but one was bad, which probably puts that computer closer to a 90% failure rate when all's said and done.[/QUOTE]Hmm, just thinking now: perhaps I could just not bother with those pesky time- and power-consuming LLs and DCs. Perhaps instead I can just submit random residues for exponents and let Madpoo (and whomever else is doing such things) run the real tests. AFAICT I still get the credit. And the only price to pay is getting flagged as unreliable. Moar credits! :evil:
[size=1][color=grey]Perhaps we should remove the credits that were given for results shown to be incorrect? It might encourage people to fix their machines and stop wasting their electricity and time for no return.[/color][/size] |
[QUOTE=retina;396643][size=1][color=grey]Perhaps we should remove the credits that were given for results shown to be incorrect? It might encourage people to fix their machines and stop wasting their electricity and time for no return.[/color][/size][/QUOTE]
Not a bad idea. |
[QUOTE=retina;396643]Hmm, just thinking now, that perhaps I could just not bother with those pesky time and power consuming LLs and DCs. Perhaps instead I can just submit random residues for exponents and let Madpoo (and whomever else is doing such things) run the real tests. AFICT I still get the credit. And the only price to pay is getting flagged as unreliable. Moar credits! :evil:
[size=1][color=grey]Perhaps we should remove the credits that were given for results shown to be incorrect? It might encourage people to fix their machines and stop wasting their electricity and time for no return.[/color][/size][/QUOTE] Of course if you really want more credits, just type up and submit a lot of trial factoring results, say 1000 exponents taken to say 90 bits, and no need to even make up a pesky residue :w00t: |
[QUOTE=Gordon;396670]Of course if you really want more credits, just type up and submit a lot of trial factoring results, say 1000 exponents taken to say 90 bits, and no need to even make up a pesky residue :w00t:[/QUOTE]Factors are checked by the server but residues aren't (and can't be).
|
[QUOTE=retina;396691]Factors are checked by the server but residues aren't (and can't be).[/QUOTE]
"no factor to xxx" cannot be verified by the server, see. |
[QUOTE=VBCurtis;396693]"no factor to xxx" cannot be verified by the server, see.[/QUOTE]Oh, okay. I understand.
|
[QUOTE=retina;396643]Hmm, just thinking now, that perhaps I could just not bother with those pesky time and power consuming LLs and DCs. Perhaps instead I can just submit random residues for exponents and let Madpoo (and whomever else is doing such things) run the real tests. AFICT I still get the credit. And the only price to pay is getting flagged as unreliable. Moar credits! :evil:[/QUOTE]
Man... you try to do something nice, and then... See, this is why we can't have nice things. :smile: As for your suggestion re: removing credit for results found to be bad... that would remove the incentive to cheat, I guess. The checks I'm doing are still a long way out from where the double-checks are being assigned, so even though I'm pretty confident my residues are correct and the originals are the bad ones, it'll be some time before we know for sure. That just means that some determined cheater (and George has his ways of looking for them) would only get the glory of their stolen valor for a bit longer until we prove their malfeasance.

I have this sense of deja vu, like this idea may have come up before... I kind of remember reading some discussion about credits for bad results. Don't remember what came of it though. |
[QUOTE=Madpoo;396701]As for your suggestion re: removing credit for results found to be bad... that would remove the incentive to cheat I guess.[/QUOTE]Not just cheating, but people with bad machines. We shouldn't be incentivising people to submit useless work, no matter the intent, whether it is deliberate or not. Naturally we don't go calling everyone a cheat who submits a bad result; we just assume it is a cosmic ray, or a poor PSU, or whatever.
|
That's why the Prime95 binaries are shipped with hidden security code. Only manual results can be spoofed, and too many would be caught. (Having said that, you can still easily submit a *lot* of false results through the manual submission form.)
|
[QUOTE=VBCurtis;396693]"no factor to xxx" cannot be verified by the server, see.[/QUOTE]
Indeed :tu: A quick check at mersenne.ca: factoring credit for M997755331 from 72 to 100 bits is 257,339,023 GHz-days - yes, 257 MILLION GHz-days. Want to prove it wrong? Try finding a factor.... :razz: |
[QUOTE=chalsall;396459]I think George's newly proposed ranges for the categories make a lot of sense -- trusted (read: Cat 1 and 2) DC'ers are currently facing a famine; trusted LL'ers are being given more than they can eat.[/QUOTE]
I made changes as discussed. Look for new crossovers within the next 24 hours. Please monitor the situation over the coming months to see if more changes are necessary. |
[QUOTE=Prime95;396838]I made changes as discussed. Look for new crossovers within the next 24 hours. Please monitor the situation over the coming months to see if more changes are necessary.[/QUOTE]
Thanks George! |
[QUOTE=Batalov;396209]Maybe it's a good thing, because at least the error code is clearly visible, which for some reason [URL="http://www.mersenne.org/report_ll/?exp_lo=79299719"]was not recorded[/URL] to the database.[/QUOTE]
Just so you know Gordon, your result did not match mine. 00000000 error code. Not really surprising considering how long you said you took for the run, probably without ECC memory. |
Whimsically....
I will brazenly predict that we will have all exponents > 30,000,000 TF'd to 70 bits by 2025
|
[QUOTE=petrw1;397070]I will brazenly predict that we will have all exponents > 30,000,000 TF'd to 70 bits by 2025[/QUOTE]
Sure? Infinity is an awfully big number :smile: |
[QUOTE=petrw1;397070]I will brazenly predict that we will have all exponents > 30,000,000 TF'd to 70 bits by 2025[/QUOTE]
I dunno that's a tremendously huge number of [URL="http://www.boostclassifieds.com.au/database/carsforsale/files/1976_Toyota_Corona_Mk_Ii_8654794.jpg"]1976 Toyota Corona years[/URL]. |
[QUOTE=Gordon;397073]Sure? Infinity is an awfully big number :smile:[/QUOTE]
Well except that I also brazenly suggest infinity is prime so no factoring is required for it. :loco: Or maybe we could stop at 999,999,999 .... until then anyway. HAHA |
My attempt to estimate all to 70 bits
My rough math tells me it will take about 20 Million GhzDays to TF all remaining exponents above 30,000,000 to 70 Bits
|
[QUOTE=petrw1;397085]My rough math tells me it will take about 20 Million GhzDays to TF all remaining exponents above 30,000,000 to 70 Bits[/QUOTE]
My 1 and only GPU could do this in just over 100 years... I am giving the collective 10 years... and that's not even taking into account Moore's Law. Mind you, neither is it considering that the vast majority of the GPUs are working on the GPU72 project... going much deeper than 70 bits on a relatively small range (the leading edges). |
It would be possible to really get *all* exponents factored up to 2^70 - I can tell you that every Mersenne number 2^p-1 with p>2^69 really has no factor below 2^70 (any factor of 2^p-1 has the form 2kp+1, which already exceeds 2^70 when p>2^69). But since mersenne.ca only goes to exponents up to 2^32 I think that is a little too far off.
|
Well, I am planning on adding a GPU or two in the coming months. I'll see what looks good when it comes out. 1080p monitors are going somewhat out of style and I'm debating between 1440p 144hz and 4K. My space isn't suited for two or three monitors (I can't wait...) but that's a consideration, too.
|
What??? Are GPUs supposed to be connected to monitors?
How come? I thought they were just factoring machines... :whistle: |
[QUOTE=manfred4;397111]It would be possible to really get *all* exponents factored up to 2^70 - I can tell you that every number bigger than 2^p-1 for p>2^69 really has no factor below 2^70. But since mersenne.ca only goes to Exponents up to 2^32 I think that is a little too far off.[/QUOTE]
I "implied" or intended to imply without actually stating that I am referring to *all* exponents on PrimeNet (<= 999,999,999) |
[QUOTE=TheMawn;397112]Well, I am planning on adding a GPU or two in the coming months. I'll see what looks good when it comes out. 1080p monitors are going somewhat out of style and I'm debating between 1440p 144hz and 4K. My space isn't suited for two or three monitors (I can't wait...) but that's a consideration, too.[/QUOTE]
I've been using a [url=http://www.cnet.com/products/hp-zr2740w/]27" 1440p[/url] monitor for about two years at work and at home. At work, it's great, but I think 4K would be better so I could make the fonts smaller and see more (I could easily read 4 pt font at 2 feet if my monitor were capable of legibly reproducing it). I've found having a single 1440p monitor is vastly better than dual 1080s, because the vertical resolution is useful for code and documents, as is horizontal resolution for looking at spreadsheets and databases with many columns without a monitor bezel in between. I'd go with a high quality 4K monitor now, but 1440p is excellent for work.

At home, I've found the 1440p format somewhat awkward. If you're watching a 1080p video, it doesn't leave much useful space for doing much else. If you maximize the video, the scaling isn't as simple as pixel doubling and looks odd. If you consume a lot of video, I would highly recommend going 4K at home and skipping 1440p. I'm not really a gamer so I can't comment on that. |
The 1080p scaling is something I had not thought about. I'll have to give it some thought and see what the market is up to when I get there.
1440p offers a high refresh rate, which would be ideal for "competitive" gaming, whereas 4K is unable to support 120hz for the time being. 4K offers more real estate and therefore is better for productivity and non-competitive games (the likes of Skyrim).

I have always liked the idea of multiple monitors though I have only occasionally had the chance to enjoy it. Some part of me really wants the monitors to be the same, but I suppose if my financial situation allowed it, I could have one of each? Move things around as desired. On the other hand, my CDO (the letters should be in alphabetical order, damnit) would probably prefer two identical monitors. Decisions... |
(Offtopic)
Why the quotes around competitive? |
Have you looked at the Asus PG27AQ screen? It might be what you're looking for, but it's not out just yet. This may be one of those rare times where it's actually worth waiting a short time.
|
[QUOTE=TheMawn;397140]I have always liked the idea of multiple monitors though I have only occasionally had the chance to enjoy it.[/QUOTE]
I have been using multiple monitors for at least 15 years, starting at 1280 by 1024 times 2. I'm now at 1920 by 1080 times 3. It can get addictive; I truly feel handicapped in front of only one screen (even with 36 virtual desktops). A couple of years ago I had planned on building my "dream workstation" with 3 4K displays. Unfortunately my laptop (my main workstation) died unexpectedly, so I had to quickly build a new (tower) machine from locally available components (thus the HD monitors). Might work out for the best; let the 4K offerings improve while dropping in price.... :smile: |
This one? [url]http://rog.asus.com/393642015/gaming-monitors/ces-2015-rog-swift-pg27aq-27-inch-4k-lcd-with-gsync/[/url]
I have in fact been looking at it. Not surprisingly, the 144hz 1440p one is the ASUS one also, mentioned on that page. Also not surprisingly is that I've come to the same conclusions they have about which is better for which type of game. By "competitive" I meant a game in which you're competing real-time against other people (not that I play at a competitive level) like a Real-Time Strategy game. The faster frame rate seems appealing in that kind of scenario. The fact that they're both 27 inches makes having one of each seem like less of a hassle. |
And I thought I was doing well having a new 1920 by 1280 screen (27 inch). I also have a 1280 by 1024(19 inch). These serve me fine currently.
|
[QUOTE=petrw1;397085]My rough math tells me it will take about 20 Million GhzDays to TF all remaining exponents above 30,000,000 to 70 Bits[/QUOTE]
If GIMPS were to dedicate its entire resources to this goal, it could be attained in a mere six months. We're presently producing roughly 39 million GHz-days/year across all work types. |
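The "mere six months" claim can be checked directly against the two figures quoted in the thread (petrw1's ~20 million GHz-days of TF work, and the ~39 million GHz-days/year of total GIMPS output):

```python
# Sanity check on the two figures quoted in the thread.

WORK_GHZ_DAYS = 20_000_000             # petrw1's TF-to-70-bits estimate
THROUGHPUT_GHZ_DAYS_PER_YEAR = 39_000_000  # total GIMPS output, all work types

months = WORK_GHZ_DAYS / THROUGHPUT_GHZ_DAYS_PER_YEAR * 12
# roughly 6.2 months, if every GIMPS cycle went to TF
```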
All exponents below [B][COLOR="Orange"]33,565,373[/COLOR][/B] have been tested and double-checked.
All exponents below [B][COLOR="Blue"]54,357,769[/COLOR][/B] have been tested at least once.
Countdown to testing all exponents below M([B][COLOR="Blue"]57885161[/COLOR][/B]) once: 2,624
Countdown to first time checking all exponents below 56M: [B][COLOR="Red"]9[/COLOR][/B] (Estimated completion: [COLOR="Green"]2015-06-30[/COLOR])
Countdown to double checking all exponents below 34M: [B][COLOR="Red"]99[/COLOR][/B] (Estimated completion: [COLOR="Green"]2015-05-02[/COLOR])
Countdown to proving M([COLOR="Green"]37156667[/COLOR]) is the [COLOR="green"]45[/COLOR]th Mersenne Prime: 46,412

The estimated "completion date" for the classic 79.3M range has been hovering around Jan-Feb 2018 for almost 5 months. Going back to 2013 it has been in the same range +- 2.5 months. |
[QUOTE=Uncwilly;397838]All exponents below [B][COLOR="Orange"]33,565,373[/COLOR][/B] have been tested and double-checked.
All exponents below [B][COLOR="Blue"]54,357,769[/COLOR][/B] have been tested at least once. Countdown to testing all exponents below M([B][COLOR="Blue"]57885161[/COLOR][/B]) once: 2,624 Countdown to first time checking all exponents below 56M: [B][COLOR="Red"]9[/COLOR][/B] (Estimated completion : [COLOR="Green"]2015-06-30[/COLOR]) Countdown to double checking all exponents below 34M: [B][COLOR="Red"]99[/COLOR][/B] (Estimated completion : [COLOR="Green"]2015-05-02[/COLOR]) Countdown to proving M([COLOR="Green"]37156667[/COLOR]) is the [COLOR="green"]45[/COLOR]th Mersenne Prime: 46,412 The estimated "completion date" for the classic 79.3M range has been hovering around Jan-Feb 2018 for almost 5 months. Going back to 2013 it has been in the same range +- 2.5 months.[/QUOTE] LOL... I was just thinking, all it really takes is for one grandfathered exponent, like one of them in the 54M range, to hold things up for decades. :) Under the rules as I think I understand them, there's a bit of grace involved if the exponent is close to completion. One of those, for instance, is something like 96-97% complete but only progressing 0.1% every few days. Each time it checks in, which to its credit it does regularly, it sets the estimated completion as just a couple days away, but that's patently false. :) If we imagine that it progressed at a glacial 0.01% every week, but kept checking in, it wouldn't be expired. In this extreme fictional example, if it still has 3.2% to go and only moves at 0.01% per week, it would take an additional 6+ years, and it wouldn't expire because the grace rules consider it close enough to finished. In reality it progresses more like 0.1 - 0.2% every week so it'll "only" be another 4-8 months give or take, not the estimated 4 days the client reports each time it checks in. Those other 2 <54M grandfathered assignments are just 63-64% done and are moving similarly slowly, so those might take years to finish still. 
Good news is that there are only so many grandfathered assignments left, and if you look at the <56M list, that's all of the grandfathered first time checks...just those 9. There are another 7 grandfathered DC assignments. One is in the 34M range and the rest are 35M-38M so they're not impeding any imminent milestone completions. |
A real example...please don't poach just because I noted this....
I've been monitoring 4 of them in 55M range (belonging to "dannytoearth") for a few months now.
They are all now 67-68% and making regular daily progress. They were at 53% at the start of the year. They are progressing at just over 0.2% per day. So even though the page [url]http://www.mersenne.org/assignments/?exp_lo=33000000&exp_hi=56000000&execm=1&exdchk=1&exp1=1&extf=1&B1=Get+Assignments[/url] says 110 days to go, it will more likely be (100 - 68) / 0.2 = 160 days... mid-to-late August. BUT AGAIN, I believe they will finish. |
[QUOTE=Madpoo;397927]In reality it progresses more like 0.1 - 0.2% every week so it'll "only" be another 4-8 months give or take, not the estimated 4 days the client reports in each time. Those other 2 <54M grandfathered assignments are just 63-64% done and are moving similarly slow so those might take years to finish still.[/QUOTE]
Those 2 at 63-64% will expire mid May 2015 unless progress is made: [URL="http://mersenneforum.org/showpost.php?p=396149&postcount=1715"]post #1715[/URL] Not counting the grace period when very close to finished, we will be done with all grandfathered exponent in Dec 2015 at the latest: [URL="http://mersenneforum.org/showpost.php?p=396196&postcount=1724"]post #1724[/URL] |
[QUOTE=petrw1;397928]I've been monitoring 4 of them in 55M range (belonging to "dannytoearth") for a few months now.
They are all now 67-68% and making regular daily progress. They were at 53% at the start of the year. They are progressing at just over 0.2% per day. So even though the page [url]http://www.mersenne.org/assignments/?exp_lo=33000000&exp_hi=56000000&execm=1&exdchk=1&exp1=1&extf=1&B1=Get+Assignments[/url] says 110 days to go it will more likely be (100 - 68) / 0.2 = 160 days....mid - late August BUT AGAIN I believe they will finish.[/QUOTE] Well, one of these four has been poached. dannytoearth now has 55107499 assigned as a DC. [QUOTE]Countdown to first time checking all exponents below 56M: 8 (Estimated completion : 2015-07-06)[/QUOTE] |
[QUOTE=cuBerBruce;398145]Well, one of these four has been poached. dannytoearth now has 55107499 assigned as a DC.[/QUOTE]
Wasn't me this time at least. :smile: I still have those 3 in the 54M range already tested, ready to submit as DCs as soon as the first-time results are checked in. Spoiler alert: none of them were prime. |
[QUOTE=Uncwilly;393623]
Just as a note: we are now at 0.500 expected new primes in the 79.3 range.[/QUOTE] Current expected new primes in the 79.3M range: 0.467. Change since January 26, 2015 (63 days ago): 0.500 - 0.467 = 0.033. Change since January 1, 2014 (453 days ago): 0.705 - 0.467 = 0.238. Expected time to next prime in the 79.3M range is therefore estimated by: 63 days/0.033 primes = 1,909 days [B](from today)[/B], or June 20, 2020 [B]OR[/B] 453 days/0.238 primes = 1,903 days [B](from today)[/B], or June 14, 2020. Seems that our throughput has been remarkably stable over the last fifteen months or so. :smile: |
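Both extrapolations work out as stated; a quick check using the figures from the post:

```python
# Time to the next expected prime in the 79.3M range: days elapsed
# divided by the drop in the "expected new primes" figure over that span.
short_window = 63 / (0.500 - 0.467)    # ~1,909 days from the 63-day window
long_window = 453 / (0.705 - 0.467)    # ~1,903 days from the 453-day window
print(round(short_window), round(long_window))
```

The two windows agreeing within a week is what supports the "remarkably stable throughput" observation.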
1 Attachment(s)
[QUOTE=NBtarheel_33;398950]Expected time to next prime in the 79.3M range is therefore estimated by:
63 days/0.033 primes = 1,909 days [B](from today)[/B], or June 20, 2020 [B]OR[/B] 453 days/0.238 primes = 1,903 days [B](from today)[/B], or June 14, 2020. Seems that our throughput has been remarkably stable over the last fifteen months or so. :smile:[/QUOTE] But if you will note: [QUOTE=Uncwilly;397838]The estimated "completion date" for the classic 79.3M range has been hovering around Jan-Feb 2018 for almost 5 months. Going back to 2013 it has been in the same range +- 2.5 months.[/QUOTE] That is based upon the change in P90 years remaining and the rate of change (based upon a floating period of ~8-13 weeks). |
[QUOTE=ATH;396149]The one at 96.2% will expire ~ June 4th 2015 + 3.33 days for every extra percent done "plus a grace period if close to finished" (unspecified grace period).
The one at 64.1% will expire ~ May 15th 2015 + 3.33 days for every extra percent done The one at 63.4% will expire ~ May 13th 2015 + 3.33 days for every extra percent done From what you said they are doing less than 0.3% per day (1% in 3.33 days), so they will eventually expire, except the one at 96.2% might just make it if it does 0.1% every 3 days and depending on the grace period.[/QUOTE] Revisiting this... I was curious what the grandfather rules actually look like, as the server itself applies it when expiring assignments. I think George posted this elsewhere but anyway, the basic thing for first-time LL checks boils down to: [LIST][*]Grace period only applies to assignments prior to 2014-03-01[*]Only applies to assignments in the critical range (as of this message it's anything below 58629120)[*]Grandfathered assignments higher than the critical threshold get a free pass...of course that threshold is always increasing over time[*]Calculates a "grace" percent done threshold[*]If the current percent done is below the "grace" threshold *and* it's below that critical threshold, it recycles that assignment[/LIST] Here's the part I didn't really look at, and that's how the "grace percentage" is calculated. It's somewhat straightforward once I stared at it a while:[LIST][*]There's a 10% deduction in % done right off the top[*]You get one "free" year (365 days)[*]Anything past the first 365 days, it expects 1% progress per day[/LIST] The way it calculates, it takes the difference between right now and the date it was assigned, and then subtracts that 365 days. For example, exponent 54357769 was assigned "2013-08-21 02:17:38.490" which was 595 days ago, or 230 days once we grant that one year bonus. Take 230 days and divide by 3.33 and it would expect you to be at least 69.1% done by then, but add another 10% to that for an expectation that after a grand total of 595 days, you *should* be at 79.1% complete. 
That assignment is, in fact, 97.7% done so yay, it gets a reprieve. But pretend for a moment that it does still crawl along at mind-numbingly slow speeds. At some point the "grace percentage" actually goes over 100% so even if it were at 99.9% done it would expire. That will happen when an exponent is over 665 days old: (665-365)/3.33 + 10 = 100.1. When that happens (as of today, that's any assignment earlier than June 12, 2013), as long as it's below that critical threshold it will expire. In the example of M54357769 that will happen in 70 more days from today. There are currently 14 first-time, grandfathered assignments in that critical range. It's hard to say if any of them are in particular danger of expiring since I don't know how fast they're actually progressing. The closest is M58403539 which is just 7.28% ahead of the cut-off. Grand total, there are 3,294 grandfathered first-time checks. 2,622 of them are above 100,000,000 which are well ahead of the critical area and will be safe for some time to come. Most of the other 672 are in the 60-70M range with only 104 below 60M. So... that makes me feel better, knowing some of the slower systems out there can't actually hold things up indefinitely. They will eventually expire. |
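The rule set described above can be sketched as a small function. The names and structure here are mine, not the server's actual SQL; note the divisor 3.33 means the expected rate is 1% per 3.33 days, i.e. 0.3% per day:

```python
# Sketch of the grandfather-expiration rule for first-time LL checks,
# per Madpoo's description. Names are mine; the real server code differs.
CRITICAL_THRESHOLD = 58_629_120  # "as of this message"; it rises over time

def grace_percent(days_since_assignment):
    """Percent done the server expects: 10% off the top plus one free
    year, then 1% per 3.33 days (0.3%/day) after that."""
    overdue = max(0, days_since_assignment - 365)
    return 10 + overdue / 3.33

def should_recycle(exponent, percent_done, days_since_assignment):
    """Recycle only if below both the critical threshold and the grace bar."""
    return (exponent < CRITICAL_THRESHOLD
            and percent_done < grace_percent(days_since_assignment))

# M54357769: assigned 595 days ago, so expected >= 79.1% done; it's at 97.7%.
print(round(grace_percent(595), 1))         # 79.1
print(should_recycle(54357769, 97.7, 595))  # False - it gets its reprieve
# Past 665 days the bar tops 100%, so even 99.9% done would expire.
print(round(grace_percent(665), 1))         # 100.1
```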
[QUOTE=Madpoo;399683]it expects 1% progress per day
... Take 230 days and divide by 3.33[/QUOTE]To me that says 0.3% per day.[QUOTE=Madpoo;399683]you *should* be at 79.1% complete In the example of M54357769 that will happen in 70 more days from today.[/QUOTE]And again to me that says 0.3% per day. |
[QUOTE=retina;399694]To me that says 0.3% per day....And again to me that says 0.3% per day.[/QUOTE]
Lack of sleep. Yes, carry the one, borrow a seven, multiply by pi and divide by zero. You'd think I didn't know a dividend from a hole in the ground. Okay: (# of days / 3.33) implies it's expecting 0.3% daily. Anyway, besides that, the rest of the math should be correct. If a grandfathered exponent reaches the ripe old age of 665 days old, it will expire no matter what (assuming it's below the "critical" threshold, which has its own algorithm). That's more generous than I probably would be with exponents in that critical range. :smile: |
[QUOTE=Madpoo;399702]Okay: (# of days / 3.33) implies it's expecting 0.3% daily.[/QUOTE]
Incidentally, I'm trying to think of something that the server could be doing to not only track the current % done, but also gather some kind of velocity. It may be somewhat basic to keep it simple, like just doing a delta of the last check-in and the current one. Anything fancier that tracks it more thoroughly would give better results but also be a database burden and be kind of a chore. Basically, the database is big enough as-is without trying to keep track of the % done and timestamps for every time a client updates itself. It would be interesting though to have some basic stats, because the current ETA based on the client's "best guess" can be wildly optimistic, to say the least. It's not high on my to-do list but maybe when I have some spare time I can noodle around with some ideas... if anyone has any thoughts on that, let me know. |
[QUOTE=Madpoo;399703]Incidentally, I'm trying to think of something that the server could be doing to not only track the current % done, but also gather some kind of velocity. It may be somewhat basic to keep it simple, like just doing a delta of the last check-in and the current one. Anything fancier that tracks it more thoroughly would give better results but also be a database burden and be kind of a chore.[/QUOTE]This is one of those things that appears simple at first and turns out to be fraught with so many problems and assumptions that the results are only good for a small percentage of tasks. People turn off their computers at night, at weekends, when they are on [strike]vacation[/strike] holiday, when they are out doing fieldwork, etc. People also do computationally intensive things at irregular times for irregular periods that prevent lower-priority tasks from running. Computers get hot and throttle, or crash. Power goes out. And probably a million other things that affect the throughput. I can see it becoming yet another ignored piece of data along with the existing ETA values.
|
[QUOTE=Madpoo;399703]Basically, the database is big enough as-is without trying to keep track of the % done and timestamps for every time a client updates itself.[/QUOTE]
So only do it for exponents of interest, say the lowest 100 (or earliest 100) active assignments (LL / DC). Dump them into a separate table, and do a linear regression to get more reliable ETAs. Once the assignment is over, clean out the table. |
[QUOTE=axn;399712]So only do it for exponents of interest, say the lowest 100 (or earliest 100) active assignments (LL / DC). Dump them into a separate table, and do a linear regression to get more reliable ETAs. Once the assignment is over, clean out the table.[/QUOTE]
Yeah, I'm kind of leaning towards something like that. Reason being, I don't want to do anything that would require changes to the way these check-ins are handled by the server. As in, I don't want to mess with any current functionality and risk breaking something. My best guess at an approach right now would be to set up a whole new table in the DB and run some scheduled job that takes the current dates and progress and stores them. Then the website could examine that in whatever way it wants to get some idea of the real progress going on. To your point, it could indeed be limited to just the stuff that might show up in whatever milestone reports we're interested in at the time. If that were the case, it could track maybe a couple weeks' worth of check-ins for a subset of work and it wouldn't be too bad to manage. Well, it's a thought for sure. First step is collecting that data, which really isn't too hard. Next would be doing something with it. :smile: |
[QUOTE=Madpoo;399703]Incidentally, I'm trying to think of something that the server could be doing to not only track the current % done, but also gather some kind of velocity. It may be somewhat basic to keep it simple, like just doing a delta of the last check-in and the current one. Anything fancier that tracks it more thoroughly would give better results but also be a database burden and be kind of a chore.[/QUOTE]
This is what I suggested a while back, except I suggested averaging over the last 3 check-ins to better even out people's irregular behaviour: [URL="http://www.mersenneforum.org/showpost.php?p=388111&postcount=1548"]http://www.mersenneforum.org/showpost.php?p=388111&postcount=1548[/URL] I actually meant the 4th-last check-in in that formula, so we get 3 gaps between 4 check-ins, but 3 is just a suggestion; maybe another number of gaps is better. |
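The averaging idea amounts to measuring velocity across the last few check-in gaps instead of just the latest pair; a minimal sketch (the check-in data here is invented for illustration):

```python
# Sketch of ATH's suggestion: average %/day over the last N gaps between
# check-ins (N+1 check-ins), here N=3. The data below is hypothetical.
def velocity_percent_per_day(checkins, gaps=3):
    """checkins: list of (days_since_start, percent_done), oldest first."""
    recent = checkins[-(gaps + 1):]
    (t0, p0), (t1, p1) = recent[0], recent[-1]
    return (p1 - p0) / (t1 - t0)

checkins = [(0, 50.0), (7, 51.4), (14, 52.8), (21, 54.2)]
rate = velocity_percent_per_day(checkins)   # 0.2 %/day over the 3 gaps
eta_days = (100 - checkins[-1][1]) / rate   # ~229 days to finish
print(round(rate, 2), round(eta_days))
```

Using several gaps smooths over a machine that was off for a weekend mid-window, which a single latest-pair delta would misread badly.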
[QUOTE=Madpoo;399702]Lack of sleep. Yes, carry the one, borrow a seven, multiply by pi and divide by zero. You'd think I didn't know a dividend from a hole in the ground.
Okay: (# of days / 3.33) implies it's expecting 0.3% daily. Anyway, besides that, the rest of the math should be correct. If a grandfathered exponent reaches the ripe old age of 665 days old, it will expire no matter what (assuming it's below the "critical" threshold, which has it's own algorithm). That's more generous than I probably would be with exponents in that critical range. :smile:[/QUOTE] And while I'm at it... the double-check grandfather rules are similar. Whereas first-time checks assume the work will be at least 10% done after the first year, double-checks assume they'll be at least 60% done in the first year. First time assumes 0.3% progress daily, and double-checks assume 0.333% daily. What it boils down to for double-checks is that it will expire no matter what, even at 99.99%, once the exponent reaches an age of 485 days (~ 1 year+4 months, compared to ~ 1 year+10 months for first time checks). Just like first time grandfathered assignments, this only applies to work in the critical range. Right now that means exponents below 34505378. As it turns out, there aren't any grandfathered DC assignments below that anyway. There are just 69 of them right now...[LIST][*]6 in the 35-38M range[*]16 between 40M-50M[*]40 between 50M-60M[*]3 between 60M-70M[*]and then this motley bunch to round it out: 85000043,100000237,100000379,100000609,112401617[/LIST] I can say that of the 4 grandfathered assignments in the 35M range, once the critical threshold reaches them, they would all get expired right away. Some of them aren't even really close, like 16-20%. I mean, *maybe* by the time 35040547 is in the critical area it will have moved past the expiration threshold, but I guess we'll see. As of today, that threshold is 78% for that exponent, and it's only 16.4% done. I'm not sure when to expect that one to be in the critical range based on the current progress, but it has some catching up to do. |
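The double-check variant can be sketched the same way (again, these names and the exact cutoffs are one reading of the description above, not the server's code):

```python
# Sketch of the DC grandfather rule: 60% expected after the first year,
# then ~0.333%/day (1% per 3 days). Critical threshold is "right now".
DC_CRITICAL_THRESHOLD = 34_505_378

def dc_grace_percent(days_since_assignment):
    """Expected percent done for a grandfathered double-check."""
    return 60 + max(0, days_since_assignment - 365) / 3.0

def dc_should_recycle(exponent, percent_done, days_since_assignment):
    return (exponent < DC_CRITICAL_THRESHOLD
            and percent_done < dc_grace_percent(days_since_assignment))

# At 485 days the expected percentage reaches 100, so any unfinished DC
# inside the critical range expires, even at 99.99% done.
print(dc_grace_percent(485))  # 100.0
```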
[QUOTE=Madpoo;399755]I'm not sure when to expect that one to be in the critical range based on the current progress, but it has some catching up to do.[/QUOTE]
Approximately 130 days from now based on our ~67.7 Cat 1 completions a day average over the last 30 days. |
[QUOTE=ATH;399735]This is what I suggested a while back except I suggested the average over last 3 checkin to better average out peoples irregular behaviour:
[URL="http://www.mersenneforum.org/showpost.php?p=388111&postcount=1548"]http://www.mersenneforum.org/showpost.php?p=388111&postcount=1548[/URL] I actually meant the 4th last checkin in that formula so we get 3 gaps between 4 checkins, but 3 is just a suggestion maybe another number of gaps is better.[/QUOTE] My first pass at this is as follows:[LIST][*]Once a night (just past midnight, UTC) I take a snapshot of the progress of all current assignments, if they've checked in during the past 24 hours. I may adjust that to exclude work with 0% complete since it ends up being kind of meaningless.[*]That info goes into a new database I set up that clears entries older than 30 days or when the assignment is removed.[/LIST] I don't have a ton of data to work with so far... to work at all I need at least 2 check-ins so I have a reference point and then a point of comparison. As I noted, some of the entries have 0% as the work complete, and that may be from an assignment that hasn't actually started yet or was just assigned. Including work that hasn't yet begun would only throw off the progress figures, so I'll probably remove those entries and keep them from being added in the future. I've done some basic analysis of a few interesting assignments. For example, 55771997 and 55861261, since those are in that group of first-time checks below 56M awaiting completion. Even though I may be capturing multiple data points along the way, if they check in several times over 30 days, my initial go at this just takes the first and last dates and compares the % complete recorded. In those 2 cases, we're only talking about 48-hour intervals so it may not be totally accurate yet, but it should get better as more data is recorded. Anyway, to the nitty gritty: each of those has recorded a rate of progress of 0.05% every 24 hours. 55771997 still has 13.8% to go, so its "real ETA" is more like Jan 15, 2016. 
55861261 is moving at the same rate, but since it only has 1.3% to go it should complete on May 10, 2015. Since my previous look at things shows that even grandfathered first-time checks will expire regardless when they're over 665 days old, unless that machine picks up the pace, it will expire before completion. Someone with more time than me could probably see where the lines intersect and pick the exact day that would happen, assuming it continues at a linear rate. :smile: At least the second one will probably finish up soon. As I mentioned, this is preliminary since those systems have only checked in twice since I started gathering data, and the 0.1% progress made in 2 days between check-ins may go up or down as it averages out. As I look at some other assignments out there, there are some "outstanding" ones, and I mean that in a bad way. :) Worst case is 36292681, which has progressed from 0.2% to 0.3% in about 34 and 1/2 days. At that rate I'm projecting it'll finish in the year 2109. Eventually I hope I can do a follow-up analysis and see if my predicted ETAs match the actual completion times, but since a client may complete its assignment and then check in its results much later (either manually or because it's not connected to the 'net 24/7), I don't know if I'd worry about that too much. I already see a few cases where my amazing prediction machine says some assignments *should* have been finished yesterday but they're still not done, so I'm not sure if that's just because the client hasn't bothered sending it in yet or because my data set isn't really that accurate yet, etc. 
Early results of my "real ETA" attempt
I'm looking specifically at the first time tests between 54M and 56M (just those last 8) that we're waiting for.
One of them hasn't checked in again since I started gathering stats, so I have nothing to work on with it. The other 7 in that range have checked in at least twice which gave me enough of a running "throughput" to make the following best guesses: [CODE] Exponent "Real" ETA -------- ---------- 54674791 2016-03-17 20:18:11.823 54759797 2016-03-26 00:59:11.540 55027163 2015-10-26 11:08:38.387 55059383 2015-10-05 07:10:38.590 55079077 2015-10-28 16:12:38.790 55771997 2016-10-19 13:24:44.550 55861261 2015-06-07 11:08:45.013 [/CODE] That's all based on their percentage to go and their observed "percent per day" rate. |
[QUOTE=Madpoo;400407]I'm looking specifically at the first time tests between 54M and 56M (just those last 8) that we're waiting for.
One of them hasn't checked in again since I started gathering stats, so I have nothing to work on with it. The other 7 in that range have checked in at least twice which gave me enough of a running "throughput" to make the following best guesses: [CODE] Exponent "Real" ETA -------- ---------- 54674791 2016-03-17 20:18:11.823 54759797 2016-03-26 00:59:11.540 55027163 2015-10-26 11:08:38.387 55059383 2015-10-05 07:10:38.590 55079077 2015-10-28 16:12:38.790 55771997 2016-10-19 13:24:44.550 55861261 2015-06-07 11:08:45.013 [/CODE] That's all based on their percentage to go and their observed "percent per day" rate.[/QUOTE] October 2016? Doubt that will survive; what's the chance somebody "accidentally" types that exponent into a manual check.... |
[CODE]
Exponent "Real" ETA [B]665 days old on[/B] -------- ---------- [B]---------------[/B] 54674791 2016-03-17 20:18:11.823 [B]2015-09-11[/B] 54759797 2016-03-26 00:59:11.540 [B]2015-09-11[/B] 55027163 2015-10-26 11:08:38.387 [B]2015-09-05[/B] 55059383 2015-10-05 07:10:38.590 [B]2015-09-05[/B] 55079077 2015-10-28 16:12:38.790 [B]2015-09-05[/B] 55771997 2016-10-19 13:24:44.550 [B]2015-10-12[/B] 55861261 2015-06-07 11:08:45.013 [B]2015-10-12[/B] [/CODE] |
[QUOTE=Gordon;400433]October 16?
Doubt that will survive, what chance somebody "accidentally" types that exponent into a manual check....[/QUOTE] Well, I know there are some out there (and I guess I'm one of them) who don't mind poaching an assignment that seems abandoned (and yes, I've been wrong before because I wasn't paying attention). It wouldn't be much of a stretch to say poaching one that is just going to expire anyway before completion, even though it's being slowly worked on (as little as 0.1% in 2 weeks in some cases), is probably okay too. And now I should probably duck and hide since not all would agree. LOL |
I have poached many exponents in my time so I'm not one to talk, but maybe let's try to see if the recycling system actually works, and not poach them just before it does?
As I wrote 2-3 times in this thread, the 2x 54M exponents will "expire" in mid-to-late May 2015, so there's only ~1 month to go. |
[QUOTE=ATH;400469]I have poached many exponents in my time so I'm not one to talk, but maybe lets try and see if the recycling system actually works and not poach them just before?
As I wrote 2-3 times in this thread the 2x 54M exponents will "expire" in mid/end of May 2015, so only ~ 1 month to go.[/QUOTE] I was referring to the October 2016 ones... |
[QUOTE=Gordon;400494]I was referring to the October 2016 ones...[/QUOTE]
The October 2016 dates were the ETAs for them finishing [I]if left alone[/I], but they will not be left alone; they will be recycled well before that. The 5 lowest of the 7 exponents should be recycled within 2 months: [CODE]Exponent "Real" ETA 665 days old Recycled on -------- ---------- ------------ ----------- 54674791 2016-03-17 2015-09-11 2015-05-25 + 3.33 days for every % above 67.00% 54759797 2016-03-26 2015-09-11 2015-05-22 + 3.33 days for every % above 66.20% 55027163 2015-10-26 2015-09-05 2015-06-11 + 3.33 days for every % above 73.90% 55059383 2015-10-05 2015-09-05 2015-06-08 + 3.33 days for every % above 72.90% 55079077 2015-10-28 2015-09-05 2015-06-10 + 3.33 days for every % above 73.60% 55771997 2016-10-19 2015-10-12 2015-08-27 + 3.33 days for every % above 86.20% 55861261 2015-06-07 2015-10-12 2015-10-07 + 3.33 days for every % above 98.70%[/CODE] The last one at 98.70% I'm not sure of. According to the code George posted ([URL="http://www.mersenneforum.org/showpost.php?p=387555&postcount=1471"]post #1471[/URL]) there is a "[I]OR -- plus a grace period if close to finished[/I]" beyond the year and beyond the 3.33 days for every % above 10%, so it might survive longer or until it finishes, and according to Madpoo's ETA it will finish in June if left alone and it keeps the current progress rate. |
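The "Recycled on" column is just the date when the grace bar (10% plus 0.3%/day after the free year) catches up with the assignment's current percent; a sketch using an inferred assignment date:

```python
from datetime import date, timedelta

# Date the server's expected-percent curve overtakes the current progress:
# solve 10 + (days - 365)/3.33 = percent_done for days since assignment.
def recycle_date(assigned, percent_done):
    days_until_caught = 365 + (percent_done - 10) * 3.33
    return assigned + timedelta(days=round(days_until_caught))

# M54674791 is 665 days old on 2015-09-11, which implies it was assigned
# around 2013-11-15 (an inferred date, not from the server). At 67.00%
# done this lands within a day or two of ATH's 2015-05-25 figure.
print(recycle_date(date(2013, 11, 15), 67.00))  # 2015-05-24
```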
[QUOTE=ATH;400497]
there is a "[I]OR -- plus a grace period if close to finished[/I]" beyond the year and beyond the 3.33 days for every % above 10%, so it might survive longer.[/QUOTE] You misunderstand the SQL, a danger when I posted just a SQL snippet. The SQL comment refers to the previous code on the line. The OR starts the clauses for non-grandfathered assignments that were not posted. |
The progress on exponents 55027163, 55059383, and 55079077 appears to me to be very spiky. I believe these can jump a whole percentage point in a single day (certainly within 2 days). But typical daily progress is much lower. My current projection for 55027163 (based on data from Feb. 10) is that it will expire on Sept 3 at about 99.1% done (663 days old). But with the spiky nature of its progress it could easily get completed before expiring. EDIT: Correction, I get 662 days for Sept. 3, and I get Sept. 6 for the 665 day limit.
|
[QUOTE=ATH;400497]The last one at 98.70% I'm not sure of.[/QUOTE]
I have a hunch that that one may just finish up in time... M55861261 is currently showing a "real ETA" of 2015-07-20 based on its tracked progress of 0.01411881583818 % per day for the 7.08 days between its first and last check-in since I started tracking (it went from 98.6% to 98.7% in that timeframe). It's a lousy rate of progress, but given that it only has 1.3% to go that's just another 92 days. It'll be a squeaker but it just might stay ahead of the reaper by a nose. |
[QUOTE=Madpoo;400514]I have a hunch that that one may just finish up in time...
M55861261 is currently showing a "real ETA" of 2015-07-20 based on it's tracked progress of 0.01411881583818 % per day for the 7.08 days between it's first and last checkin since I started tracking. (it went from 98.6% to 98.7% in that timeframe). It's a lousy rate of progress, but given that it only has 1.3% to go that's just another 92 days. It'll be a squeaker but it just might stay ahead of the reaper by a nose.[/QUOTE] It's gone up at least 0.4% this month and up 0.8% since March 26. My projection has it finishing in 25 days (May 15). But that user needs to pick up the pace on M55771997 to finish it before expiring, by my projections. |
[QUOTE=cuBerBruce;400518]It's gone up at least 0.4% this month and up 0.8% since March 26. My projection for it has finishing in 25 days (May 15). But that user needs to pick up the pace on M55771997 to finish it before expiring, by my projections.[/QUOTE]
Here's a recent projection: [CODE]exponent RealEta 54674791 2016-03-17 20:18:11.823 54759797 2016-03-26 00:59:11.540 55027163 2015-08-29 14:48:46.150 55059383 2015-08-28 10:50:46.320 55079077 2015-08-31 04:16:46.487 55771997 2017-05-29 10:47:20.980 55861261 2015-06-29 19:30:21.417[/CODE] Those first 2 in the 54M range haven't updated since April 15, so... 54357769 has checked in a few times since I started logging the progress, but I have no projected ETA for it because in the 10 day interval I have for it (3 check ins) it hasn't moved a single tick from 97.8% done. I have no idea of how fast it's going except at the rate of 0% per day it will never finish. :smile: |
[QUOTE=Madpoo;400888]54357769 has checked in a few times since I started logging the progress, but I have no projected ETA for it because in the 10 day interval I have for it (3 check ins) it hasn't moved a single tick from 97.8% done. I have no idea of how fast it's going except at the rate of 0% per day it will never finish. :smile:[/QUOTE]
Well, M54357769 has now gone up another "tick." (By a "tick," I mean a tenth of a percentage point - the amount of resolution the Active Assignments page will show, it appears.) It's been at least two weeks since reported progress on this one went up a tick. It needs to go up a tick every two days (approximately), or else the user is only going to get credit for it as a double-check, at best. (I note Madpoo has indicated he's already completed a "double-check" on it without reporting it, so as to allow the current assignee a chance to get the "first LL" credit that he/she should be allowed to get.) This assignment is so far along, and yet the user lately seems to have little intention of finishing it before it expires.
[QUOTE=cuBerBruce;400982]Well, M54357769 has now gone up another "tick." (By a "tick," I mean a tenth of a percentage point - the amount of resolution that Active Assignments page will show, it appears.) It's been at least two weeks since reported progress on this one went up a tick. It needs to go up a tick every two days (approximately), or else the user is only going to get credit for it as a double-check, at best. (I note Madpoo has indicated he's already completed a "double-check" on it without reporting it, so as to allow the current assignee a chance to get the "first LL" credit that he/she should be allowed to get.) This assignment is so far along, and yet the user lately seems to have a lack of intention of finishing it before it expires.
I note also that 55027163, 55059383, and 55079077 have each had another little burst of progress. This should have some impact on Madpoo's "real ETA" figures.[/QUOTE] Ah... now that M54357769 has actually gone up by at least 0.1% I was able to make a prediction: 2016-02-09. That's based on a rate of 0.1% over an observed 13.755 days, or 0.00727% per day. :) None of those 3 54M exponents are predicted to complete any time this year; more like Feb/March of 2016. Out of the 55M exponents, 4 of the 5 *should* finish up in July, and 55771997 might finish around 2017-10-23 (of course it would expire first). |
Completing "Classical" GIMPS
1 Attachment(s)
Per the attached "classic colorful stats" report, less than 8M P-90 CPU years remain in the countdown to completing a first-time check of all candidates in the "classical" GIMPS space (exponents [I]p[/I] < 79.3M). This is equivalent to 40.6M GHz-days, or just over eleven months of computing time at GIMPS' recent 30-day sustained throughput of 239 TFLOP/s (119,500 GHz-days/day).
|
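The closing arithmetic checks out against the stated throughput (all figures are taken from the post above):

```python
# 40.6M GHz-days remaining at 119,500 GHz-days/day (~239 TFLOP/s).
remaining = 40_600_000   # GHz-days left in the classical (<79.3M) space
per_day = 119_500        # recent 30-day sustained GHz-days/day

days = remaining / per_day
print(round(days), round(days / 30.44, 1))  # ~340 days, i.e. ~11.2 months
```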