Chalsall,
I was wondering, seeing as we are starting to 'knock on the door' of DC exp's to take to ^70, is it possible to set up a check box to get 'default' limits so if there happen to be any ^69 exp's they are picked up as ^69 and the ^70's as ^70's? |
[QUOTE=KyleAskine;288581]So much for trying to fight for #3.
One of my PC's (w/ HD5870) was crashed the entire weekend, so I lost around half of my throughput since the last time I updated. :no:[/QUOTE] Did you lose your results? If you need data recovery, maybe I can help. |
Bad Rationale
[QUOTE=James Heinrich;288588] P-1 assignments in the current range (around 50M) will have bounds in the order of roughly B1=500,000; B2=10,000,000 which is a far cry from B1=75,000. Why that particular number for B1? That would've been selected as a typical B1 when working on assignments in the 5M range.[/QUOTE]
I was looking to do short-term P-1 assignments in, say, the 1XXM exponent range, with my...less than desirable memory (2GB installed). Next question: would running B1=B2 get rid of stage 2, thus not being so RAM-intensive? If so, what would the ideal B1 be for the 50M range or so? |
B1=B2 means no stage 2, yes. On the other hand, depending on what OS/programs you're running, if you had only one P-1 worker and gave it like 1200-1500MB, that would be more than enough to do a decent P-1 (much better than B1=B2, even with increased B1).
It is my personal opinion that B1=B2 runs are not worth it. If GIMPS somehow ever pulls ahead of the LL wave with P-1 (yeah right) the next thing I'm doing is going back and redoing all those. |
[QUOTE=c10ck3r;288596]I was looking to do short-term P-1 assignments in, say, the 1XXM exponent range, with my...less than desirable memory (2GB installed).[/quote]If I may suggest: don't. If you do it with poor bounds, there's a reasonable chance that someone may either a) waste time running L-L on an exponent that should've already had a factor found; or b) waste time re-running the P-1 with proper bounds. Running P-1 with minimal memory is one thing at the current wavefront (it will inevitably happen); running it years ahead of the wavefront on knowingly suboptimal hardware is unwise.
Running [URL=http://mersenne-aries.sili.net/prob.php?exponent=100000000&guess_saved_tests=2]P-1 on M100,000,000[/URL] should require a minimum of around 900MB of RAM allocated to Prime95 (but would prefer 2GB-20GB). [quote]Next question: would running B1=B2 get rid of stage 2, thus not being so RAM-intensive?[/quote]Yes. It will also drop the factor probability from ~5% to ~3% for the same runtime effort (meaning your P-1 factoring is only 60% as efficient as letting it run stage 2). [quote]If so, what would the ideal B1 be for the 50M range or so?[/quote]Whatever Prime95 picks with as much RAM as you can let it have. (Seriously, it's a complex iterative calculation to balance probability vs effort breakevens). Even 500MB is acceptable for a P-1 in the 50M range, and highly preferable to a B1=B2 run. For example: 50M with 500MB allocated gets you [URL=http://mersenne-aries.sili.net/prob.php?exponent=50000000&guess_saved_tests=2&factorbits=72]4.21% at 3.37GHz-days[/URL], but 50M with B1=B2 gets you [URL=http://mersenne-aries.sili.net/prob.php?exponent=50000000&work=3.371327&factorbits=72&b1only=1]2.63% at 3.37GHz-days[/URL] (same effort, lower probability), or 50M with B1=B2 gets you [URL=http://mersenne-aries.sili.net/prob.php?exponent=50000000&prob=4.212662&factorbits=72&b1only=1]4.21% at 13.80GHz-days[/URL] (same probability, much higher effort) |
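Editor's aside: for readers wondering what a "stage 1 only" run actually computes, here is a toy sketch of P-1 stage 1 on a small Mersenne number. This is purely illustrative (Prime95 uses FFT arithmetic, far larger bounds, and a real stage 2); the exponent and bounds below are chosen just so the demo completes instantly.

```python
from math import gcd

def primes_up_to(n):
    """Simple sieve of Eratosthenes."""
    sieve = bytearray([1]) * (n + 1)
    sieve[0:2] = b"\x00\x00"
    for i in range(2, int(n ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i :: i] = bytearray(len(range(i * i, n + 1, i)))
    return [i for i, flag in enumerate(sieve) if flag]

def pminus1_stage1(p, B1):
    """Stage 1 of P-1 on N = 2^p - 1: compute 3^E mod N, where E is the
    product of all prime powers <= B1 (times 2p, since every factor of
    M_p is 1 mod 2p), then take gcd(3^E - 1, N)."""
    N = (1 << p) - 1
    a = pow(3, 2 * p, N)
    for q in primes_up_to(B1):
        qe = q
        while qe * q <= B1:   # raise each prime to its largest power <= B1
            qe *= q
        a = pow(a, qe, N)
    g = gcd(a - 1, N)
    return g if g > 1 else None   # g == N would mean every factor was caught

# Cole's factor 193707721 of M67 has 193707720 = 2^3 * 3^3 * 5 * 67 * 2677,
# entirely within B1 = 3000, so stage 1 recovers it:
print(pminus1_stage1(67, 3000))
```

Raising B2 above B1 in a real run extends the reach to factors whose f-1 has one extra prime between B1 and B2, which is why skipping stage 2 costs so much probability.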
[QUOTE=KyleAskine;288581]So much for trying to fight for #3.
One of my PC's (w/ HD5870) was crashed the entire weekend, so I lost around half of my throughput since the last time I updated. :no:[/QUOTE] I offer sympathy for unexpected occurrences. Just remember, This Too Shall Pass, in the long run. This race (if you want to call it that) is far from over. |
[QUOTE=James Heinrich;288601]
For example: 50M with 500MB allocated gets you [URL="http://mersenne-aries.sili.net/prob.php?exponent=50000000&guess_saved_tests=2&factorbits=72"]4.21% at 3.37GHz-days[/URL]. but 50M with B1=B2 gets you [URL="http://mersenne-aries.sili.net/prob.php?exponent=50000000&work=3.371327&factorbits=72&b1only=1"]2.63% at 3.37GHz-days[/URL] (same effort, lower probability) or 50M with B1=B2 gets you [URL="http://mersenne-aries.sili.net/prob.php?exponent=50000000&prob=4.212662&factorbits=72&b1only=1"]4.21% at 13.80GHz-days[/URL] (same probability, much higher effort)[/QUOTE] I don't understand the last one? |
[QUOTE=flashjh;288606]I don't understand the last one?[/QUOTE]
I don't understand your question? |
[QUOTE=Dubslow;288608]I don't understand your question?[/QUOTE]
I figured it out... I wasn't looking at the links. B1=B2 with two different results, but the links explain it. |
[QUOTE=oswald;288591]Did you lose your results? If you need data recovery, maybe I can help.[/QUOTE]
Oh no, it just hard locked (as Linux tends to do when the video driver explodes). All hardware is fine. I got a touch too aggressive with my O/C, and I didn't watch it long enough to see if it was fine. |
[QUOTE=bcp19;288589]I was wondering, seeing as we are starting to 'knock on the door' of DC exp's to take to ^70, is it possible to set up a check box to get 'default' limits so if there happen to be any ^69 exp's they are picked up as ^69 and the ^70's as ^70's?[/QUOTE]
OK, I've added the same options for the DCTF assignments page as on the LLTF -- Lowest TF level, Highest TF level, Lowest Exponent, Oldest Reserved (from PrimeNet) and "What Makes Sense". I've also changed the default Pledge level to 70, but people can of course lower this to 69 if they want. If the Pledge is 70 and "What Makes Sense" is the option, then only candidates above 29.69M will be assigned. |
[QUOTE=KyleAskine;288630]Oh no, it just hard locked (as Linux tends to do when the video driver explodes). All hardware is fine.
I got a touch too aggressive with my O/C, and I didn't watch it long enough to see if it was fine.[/QUOTE] That's good, but frustrating. I feel for you. When I've had hard locks, it was usually about 30 minutes after the last time I had looked at the machine. |
[QUOTE=oswald;288642]When I've had hard locks, it was usually about 30 minutes after the last time I had looked at the machine.[/QUOTE]
[URL="http://www.cacti.net/"]Cacti[/URL] and [URL="http://www.nagios.org/"]Nagios[/URL] are good friends to have.... :smile: |
I was sad because this peculiar P-1 find could have been found with TF in about the same time (and a tad more GHz)
|
[QUOTE=firejuggler;288645]I was sad because this peculiar P-1 find could have been found with TF in about the same time (and a tad more GHz)[/QUOTE]
Ah... Gotcha. However, it is important to remember that this "game" is all about the aggregate statistics and probabilities. Where do the "curves" cross? While in this particular case your statement above is true, over the full set it isn't. |
Just a note...
So everyone knows, we are currently [B][I][U]well[/U][/I][/B] ahead of the DC "wave-front". However, we're still working deep within the LL "wave", although pulling ahead. Thus, I have made a note on the DCTF assignment page suggesting that workers consider doing LLTF work instead. As always, I'm a strong believer in the GIMPS philosophy that people should do the work they enjoy doing. However, please know that at this point in time the best thing for GIMPS is LLTF work, not DCTF work. |
should I unreserve my DCTF (upon completion of my current) work and do LLTF? (2 29M and 25 30M)
|
[QUOTE=firejuggler;288654]should I unreserve my DCTF (upon completion of my current) work and do LLTF? (2 29M and 25 30M)[/QUOTE]
Entirely up to you. This is simply an observation on what is best for GIMPS. |
[QUOTE=chalsall;288651]Just a note...
So everyone knows, we are currently [B][I][U]well[/U][/I][/B] ahead of the DC "wave-front". However, we're still working deep within the LL "wave", although pulling ahead.[/QUOTE] I have had one mfaktc instance doing LL-TF, and one doing DC-TF. In light of the current status of those areas I'll start adding LL-TFs to the second worker. I'll still let the DCs finish, though. |
Quarter of a million GHz Days saved!!!
Hey all.
After only a little more than three months of work we have found 1,655 factors, saving a total of 250,131 GHz days (685 GHz years) of LL, DC and P-1 work!!! :smile: |
[QUOTE=chalsall;288651]Just a note...
So everyone knows, we are currently [B][I][U]well[/U][/I][/B] ahead of the DC "wave-front". However, we're still working deep within the LL "wave", although pulling ahead. Thus, I have made a note on the DCTF assignment page suggesting that workers consider doing LLTF work instead. As always, I'm a strong believer in the GIMPS philosophy that people should do the work they enjoy doing. However, please know that at this point in time the best thing for GIMPS is LLTF work, not DCTF work.[/QUOTE] I'll finish my current DCTFs and then move to all LLTFs. Thanks! |
Whoa. I totally just got assigned literally the first exponent after 45,000,000, which is 45,000,017. It will show up in PrimeNet in around 40 minutes, when the comp does its daily update. (chalsall could probably verify this as well.)
40 minutes later: [url]http://www.mersenne.org/report_exponent/?exp_lo=45000000&exp_hi=45000100&B1=Get+status[/url] (It'll be a couple of months though, it's on my new-laptop-that-I-only-recently-got-running.) Also, while I have a post going, I should point out that the sum of the [URL="http://gpu72.com/reports/available/"]yellow column[/URL] has been slowly creeping up all night. It was at ~465 like 5 hours ago. We gonna need a lot more P-1. |
[QUOTE=chalsall;288651]Just a note...
So everyone knows, we are currently [B][I][U]well[/U][/I][/B] ahead of the DC "wave-front".[/QUOTE] Coincidentally, there are no more DCTF assignments available. |
[QUOTE=firejuggler;288645]I was sad because this peculiar P-1 find could have been found with TF in about the same time (and a tad more GHz)[/QUOTE]
I just found [URL="http://mersenne-aries.sili.net/exponent.php?exponentdetails=54917279"]this exponent[/URL] today in P-1 That is how it goes :smile: |
[QUOTE=ckdo;288736]Coincidentally, there are no more DCTF assignments available.[/QUOTE]
Yeah... Someone (who will remain unnamed but can easily be determined) reserved 6,000 DCTFs... :cry: I've had spidy grab another 2,000 for those who insist on doing DCTF work. But, again, please consider doing LLTF work instead -- that's where we really need the firepower right now. |
[QUOTE=Dubslow;288709]Also, while I have a post going, I should point out that the sum of the [URL="http://gpu72.com/reports/available/"]yellow column[/URL] has been slowly creeping up all night. It was at ~465 like 5 hours ago. We gonna need a lot more P-1.[/QUOTE]
I think (hope?) you'll find over the next few days this will drop down again. BTW, you'd asked a while back for the [URL="http://www.gpu72.com/reports/released_level/"]Released Level[/URL] report to be able to show those who have been released [URL="http://www.gpu72.com/reports/released_level/p-1/"]with P-1 completed[/URL], and those [URL="http://www.gpu72.com/reports/released_level/nop-1/"]without[/URL]. Click on the column headers to switch the views, or from the drop-down menus. And, just to be explicit, not all of those released with P-1 had the P-1 done through the system. |
[QUOTE=KyleAskine;288743]I just found [URL="http://mersenne-aries.sili.net/exponent.php?exponentdetails=54917279"]this exponent[/URL] today in P-1
That is how it goes :smile:[/QUOTE]You never know what you'll get with P-1. Sometimes you'll get a factor just fractionally over the previous TF limit, as you just did. Other times you get a huge factor (e.g. [url=http://mersenne-aries.sili.net/M50232683]M50232683[/url]) that would take a hundred-million-million GPU-years to find... |
Dry spell
Wow! The last LL-TF factor found here was on 12/24/2012.:huh: There have been quite a few in the DC range, though.
|
[QUOTE=kladner;288768]The last LL-TF factor found here was on 12/24/2012.:huh:[/QUOTE]My [url=http://mersenne-aries.sili.net/M50632559]last LL-TF[/url] factor was 7 days ago.
And how do you know what factors will be found in Dec 2012? :ermm: |
[QUOTE=kladner;288768]Wow! The last LL-TF factor found here was on 12/24/2012.:huh: There have been quite a few in the DC range, though.[/QUOTE]
Sorry to hear that... But if you take a look at the [URL="http://www.gpu72.com/reports/overall/"]overall system stats[/URL], you'll see that LLTF finds more factors / candidate than DCTF. LLTF: 992 factors for 27,676 candidates. DCTF: 381 factors for 33,303 candidates. Of course, from a per GHz Days of TF effort perspective, this is not true. But then, the savings per LL factor found is much higher. And, at the end of the day, this project was always intended to help GIMPS. |
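Editor's aside: for concreteness, the per-candidate yields implied by the figures quoted in the post above work out to roughly 3.6% for LLTF versus 1.1% for DCTF:

```python
# Factor yield per candidate, using only the overall-stats figures from the post:
# LLTF: 992 factors / 27,676 candidates; DCTF: 381 factors / 33,303 candidates.
lltf = 992 / 27676
dctf = 381 / 33303
print(f"LLTF: {lltf:.2%} per candidate, DCTF: {dctf:.2%} per candidate")
```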
[QUOTE=James Heinrich;288770]My [URL="http://mersenne-aries.sili.net/M50632559"]last LL-TF[/URL] factor was 7 days ago.
And how do you know what factors will be found in Dec 2012? :ermm:[/QUOTE] Oops. 2011.:redface: Would that I had such foresight. I'd be playing the markets and the ponies big time.:smile: EDIT: @chalsall: I was just whining, anyway. All work accomplished has value. But with my luck running this way, it's a good thing I'm not playing the stock markets or the ponies! |
If TF starts outpacing P-1 and LL, are there any thoughts to lowering the exponent limit for TF to 73? How are we doing compared to the LL wave?
|
[QUOTE=chalsall;288771]But if you take a look at the [URL="http://www.gpu72.com/reports/overall/"]overall system stats[/URL], you'll see that LLTF finds more factors / candidate than DCTF.[/QUOTE]
Statin' the bleedin' obvious, aren't we? LLTF candidates get like 3.5 bit levels of TF on average while DCTF candidates get only 1.1 bit levels on average. I'm not exactly surprised. :no: |
[QUOTE=ckdo;288790]Statin' the bleedin' obvious, aren't we? LLTF candidates get like 3.5 bit levels of TF on average while DCTF candidates get only 1.1 bit levels on average. I'm not exactly surprised. :no:[/QUOTE]
But that's because the DCTF candidates have already been taken above where they nominally are. See [URL="http://mersenne.org/various/math.php"]Mersenne.org's Math page[/URL]. G72 is coordinating taking all candidates to 4 bit levels above what was nominal before GPUs entered the equation. And, again, the project is currently well ahead of the DC wave front, while still working [B][I][U]within[/U][/I][/B] the LL wave. |
[QUOTE=KyleAskine;288786]If TF starts outpacing P-1 and LL, are there any thoughts to lowering the exponent limit for TF to 73?[/QUOTE]
Could you please restate your question, because as is it doesn't make sense (at least to me). We are currently only taking candidates to 72 bits, except those above 58.52M. [QUOTE=KyleAskine;288786]How are we doing compared to the LL wave?[/QUOTE] We are currently pulling ahead in the "wave", but are still working within it. |
[QUOTE=chalsall;288793]Could you please restate your question, because as is it doesn't make sense (at least to me).
We are currently only taking candidates to 72 bits, except those above 58.52M. [/QUOTE] When I say 'lowering the exponent limit' I mean: are there thoughts of taking exponents smaller than 58.52M to 73 instead of 72, assuming we are beating LL and P-1? |
[QUOTE=chalsall;288793]Could you please restate your question, because as is it doesn't make sense (at least to me).[/QUOTE]
I'm stupid... I now understand your language. While lowering the boundary to TF LL candidates to 73 below the current 58.52M limit is at least three months off, this might be considered now for DC candidates. As in, lowering the limit to take DC TF candidates to 70 below 29.69M. Thoughts, DCTF'ers? |
[QUOTE=KyleAskine;288794]When I say 'lowering the exponent limit' I mean, are there thoughts of taking exponents smaller than 58.52M to 73 instead of 72, assuming we are beating LL and P-1.[/QUOTE]
We cross-posted... At this point in time, while we are now pulling ahead in the LL "wave", we don't have enough firepower to take that step (yet). Spidy is still finding, and throwing back, candidates only TFed to 71 above 55M at the moment. |
Can we activate html in posts?
[QUOTE=Spidy;http://gpu72.com/reports/overall/]
Assigned:[indent][indent][indent] TF[/indent][/indent][/indent]Double Check: 12,679 Lucas-Lehmer: 4,442 [/QUOTE] akjsdhf;aksdh;ckalehdckaubjsd;ka |
[QUOTE=Dubslow;288813]akjsdhf;aksdh;ckalehdckaubjsd;ka[/QUOTE]
You're referring [URL="http://www.gpu72.com/reports/overall/"]to this[/URL]... Yeah... When Giants fight, duck.... :smile: |
[QUOTE=chalsall;288814]You're referring [URL="http://www.gpu72.com/reports/overall/"]to this[/URL]...
Yeah... When Giants fight, duck.... :smile:[/QUOTE] Also, check out the factors found in the last day. [FONT="System"][SIZE="1"]It's like when you said "Do more LL!" they heard "Do no LL!"[/SIZE][/FONT] |
heheheh
Xyzzy and I have been at a battle for a while now on the factors-found metric. It went up a notch last night when I noticed Xyzzy submitting a lot of TF 2^69 results, and his factors-found metric shot up. I also noted people below us were churning out a fair number of factors. I went ballistic and sucked up a whack of DCTF work. I'm doing 69-71 DCTF with Stages=0 work. I did leave some DCTF work. I only have 1x GTX460 doing DCTF currently, but over the course of the next 7 days or so the rest of the farm will migrate to DCTF work. Post the CUDA 4.1 upgrade of mfaktc I was doing 1800 GHz-days/day (ish). (That's 1x GTX460 with DCTF, the rest LLTF; 5-day GMT average based on figures taken from [url]http://www.mersenne.org/results/[/url].) Once my whole farm is fully doing DCTF, it'll be interesting to see the GHz-days/day output. So this current swag of work (6000ish DCTF 69-71) should be done in about 3 weeks. -- Craig |
[QUOTE=ckdo;288790]Statin' the bleedin' obvious, aren't we? LLTF candidates get like 3.5 bit levels of TF on average while DCTF candidates get only 1.1 bit levels on average. I'm not exactly surprised. :no:[/QUOTE]
And additionally ye forget to mention that all DCTF had P-1 done, but only few of LLTF had P-1 done. For DC front the expos were already filtered, many of them were eliminated by P-1-found factors, and only the "tough" one remain into the list. That is why I said in the past there is no worth to do DCTF over 69 bits. One can find a factor every 2 days, or in the luckiest case, every day and half, but he would need only 24-26 hours to clear the exponent by doing DCLL. That is why I concentrated on LL-tests at DC front, and not DCTF. |
[QUOTE=LaurV;288834]And additionally ye forget to mention that all DCTF had P-1 done, but only few of LLTF had P-1 done. For DC front the expos were already filtered, many of them were eliminated by P-1-found factors, and only the "tough" one remain into the list. That is why I said in the past there is no worth to do DCTF over 69 bits. One can find a factor every 2 days, or in the luckiest case, every day and half, but he would need only 24-26 hours to clear the exponent by doing DCLL. That is why I concentrated on LL-tests at DC front, and not DCTF.[/QUOTE]
I'm not in the project, just an onlooker, but it sounds as though the project is getting back what it's doing for GIMPS in the DCTF range. Basically, I think the original plan was to do for the LLTF range what's happening in the DCTF range, so that GIMPS can skim the top of the pond, so to speak. The problem is getting the rates to equalize so one doesn't surpass the other in the wrong areas. |
[QUOTE=Dubslow;288709]
Also, while I have a post going, I should point out that the sum of the [URL="http://gpu72.com/reports/available/"]yellow column[/URL] has been slowly creeping up all night. It was at ~465 like 5 hours ago. We gonna need a lot more P-1.[/QUOTE] AAAIIIIIEEEEEEEEEEEE!!! It's at 567!!! What are we ever going to do? We need (A LOT) more P-1... |
[QUOTE=Dubslow;288861]AAAIIIIIEEEEEEEEEEEE!!!
It's at 567!!! What are we ever going to do? We need (A LOT) more P-1...[/QUOTE] There is only one thing to do. TOGA! TOGA! TOGA! |
[QUOTE=kladner;288768]Wow! The last LL-TF factor found here was on 12/24/2012.:huh: There have been quite a few in the DC range, though.[/QUOTE]
First, a correction: The date given above should have been 1/24/2012.:doh!: The day and year were correct. I screwed up the month. On that date, one mfaktc worker found a 48M factor in the 71-72 range. This day had fairly typical production for the period when I was running the GPU split between LL and DC TF. [CODE]1/24/12  48M factor, 71-72
LLTF levels/exp.
 2  69-72
 1  70-72
 2  69-71  - includes the run which found a factor in the 71-72 range.
 1  69-70
38  NF DC range, 68-69[/CODE]Since then, until yesterday (2/09), when the last DCTFs cleared the second worker, it had found 11 factors in the 29M and 30M range. On three occasions it found 2 factors in one day. Prior to 1/24/12 I have to go back to 1/15/12 to see another LLTF factor. In that period, 5 DC factors were found, with another 2 days with 2 factors each found. Note that the first DC range results appear on 1/19/12: 4 days later. All I'm really saying is that during the ~3 weeks I have done DCTF work, DC has kicked out a lot more factors than LLTF. I understand that this flies in the face of the overall statistics. But that's what happened for me. |
[QUOTE=kladner;288910]
All I'm really saying is that during the ~3 weeks I have done DCTF work, DC has kicked out a lot more factors than LLTF. I understand that this flies in the face of the overall statistics. But that's what happened for me. [/QUOTE] Tell me about it... I haven't found 1 LLTF factor since 2012/01/17; I don't know how many times I rebooted my PC or ran the self-test to see that everything was fine (I even updated my drivers)... Well, I actually found 6, but not from a GPU72 assignment. In all I'm at 731 consecutive attempts without a factor! :cry: While during that time I was able to do 167 P-1 attempts and found 10 P-1 factors. :smile: |
[QUOTE=diamonddave;288915]In all I'm at 731 consecutive attempt without a factor! :cry:[/QUOTE]
That is a bit strange... According to the [URL="http://www.gpu72.com/reports/factoring_cost/"]system's stats[/URL], you should see a factor on average every 105.4 attempts (27815 / 264) just going from 71 to 72. |
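Editor's aside: a quick check of how unlucky a 731-attempt dry streak is under those stats, treating every attempt as an identical independent one-bit-level try (a simplification; as noted later in the thread, some attempts covered two bit levels):

```python
# chalsall's stats: one factor per 105.4 attempts (264 factors in 27,815 attempts)
# going from 71 to 72 bits. The chance of 731 straight misses is then:
p_miss_streak = (1 - 264 / 27815) ** 731
print(f"{p_miss_streak:.4%}")   # roughly 0.1%
```

Unlikely, but with many participants each running thousands of attempts, someone was bound to hit a streak like this.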
It's within plausible unlikeliness. Maybe not just do a self-test, but find a 50M expo with a factor in the proper range...
|
[QUOTE=chalsall;288916]That is a bit strange... According to the [URL="http://www.gpu72.com/reports/factoring_cost/"]system's stats[/URL], you should see a factor on average every 105.4 attempts (27815 / 264) just going from 71 to 72.[/QUOTE]
Well, I did find 10 factors in my last 1000 attempts; they just happened to be bunched up, I guess. Actually it might be worse, because some attempts covered 2 bit levels :cry: |
[QUOTE=Dubslow;288920]It's within plausible unlikeliness. Maybe not just do a self-test, but find a 50M expo with a factor in the proper range...[/QUOTE]
Good idea. Here are fifty you can test with: [CODE]+----------+------------------------+---------------+
| Exponent | Factor                 | BitLevel      |
+----------+------------------------+---------------+
| 44317369 | 2272152083654155841399 | 70.9445495605 |
| 45365491 | 2158533078164308867183 | 70.8705444336 |
| 46037293 | 1971068856035884695647 | 70.7394714355 |
| 46559237 | 1977934284625254635321 | 70.7444839478 |
| 46685371 | 2146610330000968110343 | 70.8625488281 |
| 46863241 | 4364930190253621432999 | 71.8864517212 |
| 46993159 | 1330531357296759134879 | 70.1724929810 |
| 47260573 | 1743533387286218260247 | 70.5625076294 |
| 47560171 | 1181823099220441081631 | 70.0015029907 |
| 47633413 | 1790083758666281158159 | 70.6005172729 |
| 47806417 | 3521146432239008772113 | 71.5765380859 |
| 47927531 | 2355381346990999547839 | 70.9964523315 |
| 48095011 | 3029315848378018189801 | 71.3594818115 |
| 48834119 | 2934281118499418456407 | 71.3134994507 |
| 48956057 | 3305929575479275129607 | 71.4855422974 |
| 49022137 | 1487672435909624906681 | 70.3335494995 |
| 49117553 | 1627917615725480286791 | 70.4635162354 |
| 49229507 | 2106693543295212625697 | 70.8354721069 |
| 49374713 | 3331038078186391131217 | 71.4964599609 |
| 49435723 | 3815810590186900365521 | 71.6924819946 |
| 49924067 | 3776282376041628861521 | 71.6774597168 |
| 49944373 | 2474194250049603381857 | 71.0674514771 |
| 50239463 | 1745949344651080378721 | 70.5644989014 |
| 50549153 | 1623413742346393982417 | 70.4595184326 |
| 50710841 | 3262620573998896985551 | 71.4665222168 |
| 51533983 | 3997382094836556464167 | 71.7595443726 |
| 51971147 | 2121383824190682239183 | 70.8454971313 |
| 52119871 | 3882653254679473450247 | 71.7175292969 |
| 52222231 | 1688806059582527360879 | 70.5164947510 |
| 52248761 | 3708847255636615579439 | 71.6514587402 |
| 52268947 | 2164495094754430986599 | 70.8745193481 |
| 52311811 | 1717095357726153258793 | 70.5404586792 |
| 52312411 | 1952122654783344353833 | 70.7255325317 |
| 52407793 | 2357049987780116445823 | 70.9974746704 |
| 52453741 | 2498484590663080295807 | 71.0815429688 |
| 52454057 | 2409990193721325905647 | 71.0295181274 |
| 52457381 | 2061913063008875827129 | 70.8044738770 |
| 52483367 | 1236324007342449172201 | 70.0665435791 |
| 52565963 | 3665573401554550494847 | 71.6345291138 |
| 52644311 | 3766067728646690119423 | 71.6735458374 |
| 52757329 | 3520990875781027910761 | 71.5764694214 |
| 52776673 | 1282574177694372975761 | 70.1195297241 |
| 52883993 | 1360342975683617703487 | 70.2044601440 |
| 52972313 | 3166736005990831004767 | 71.4234848022 |
| 53004071 | 1418169247727039506759 | 70.2645187378 |
| 53327473 | 1229478248730006387337 | 70.0585327148 |
| 54903491 | 3099550058950900989607 | 71.3925476074 |
| 54919057 | 3046222617434999481671 | 71.3675079346 |
| 58055993 | 3171218011593327976087 | 71.4255294800 |
| 58950481 | 1291456699548445340951 | 70.1294860840 |
+----------+------------------------+---------------+[/CODE] |
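Editor's aside: entries like these can be sanity-checked directly, since any factor f of M_p must satisfy f = 2kp + 1 and 2^p mod f = 1. A minimal check (verified here against the first row of the table):

```python
import math

def is_mersenne_factor(p, f):
    """Check a claimed factor f of M_p = 2^p - 1: any factor must be
    congruent to 1 (mod 2p), and 2^p mod f must equal 1."""
    return (f - 1) % (2 * p) == 0 and pow(2, p, f) == 1

# classic small example: 23 divides M11 = 2047
print(is_mersenne_factor(11, 23))                            # True
# first row of the table above
print(is_mersenne_factor(44317369, 2272152083654155841399))
# the BitLevel column is just log2 of the factor
print(math.log2(2272152083654155841399))                     # ~70.94
```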
[QUOTE=chalsall;288923]Good idea.
Here are fifty you can test with: [/QUOTE] I'll check those tonight when I get home... will keep you guys posted thx |
[QUOTE=Dubslow;288861]AAAIIIIIEEEEEEEEEEEE!!!
It's at 567!!! What are we ever going to do? We need (A LOT) more P-1...[/QUOTE] Or it could just be that flash went crazy. Crisis solved (for now). (And really only by handwaving anyways, but I'm a physicist, so that's okay :smile:) |
[QUOTE=Dubslow;288935]Or it could just be that flash went crazy. Crisis solved (for now).
(And really only by handwaving anyways, but I'm a physicist, so that's okay :smile:)[/QUOTE] Am I :loco: ? Well maybe. Anyway, I added four new P-1 machines today to help out. I don't know how efficient they're going to be yet, so I'll adjust as I can to balance TF and P-1. I'll see if I can add any more P-1 capability from the systems I have, but I'm close to maxed out now. |
I've pulled an old laptop with a broken hd out of the closet. It's a P4@2.4GHZ.
Found an old flash drive and installed Puppy Linux on it. I'll add it as a P-1 machine. |
I've brought my bulldozer back into the game.
All 8 cores are doing P-1 now. Each core seems to take about 2 days to do each one. I have about 60 P-1 allocated to it. I hope this helps. -- Craig |
[QUOTE=oswald;288978]I've pulled an old laptop with a broken hd out of the closet. It's a P4@2.4GHZ.
Found an old flash drive and installed Puppy Linux on it. I'll add it as a P-1 machine.[/QUOTE] [QUOTE=nucleon;288979]I've brought my bulldozer back into the game. All 8 cores are doing P-1 now. Each core seems to take about 2 days to do each one. I have about 60 P-1 allocated to it. I hope this helps. -- Craig[/QUOTE] Awesome! |
I tried to fire up my Opteron x2, but it hasn't changed its "mind". I haven't gotten around to swapping the PSU. EDIT: (to see if that makes a difference.)
|
[QUOTE=flashjh;288980]Awesome![/QUOTE]
Indeed. Thanks guys. Also, GrunwalderGIMP has joined our P-1 effort, and Ethan (EO) has returned after a long hiatus. |
Wow, I'm impressed guys.
|
This just in: Some of monst's P-1s are without Stage 2.
[url]http://mersenne-aries.sili.net/index.php?showuserexponents=monst&usercompid=882[/url] :( |
[QUOTE=Dubslow;289096]This just in: Some of monst's P-1s are without Stage 2[/QUOTE]That is most unfortunate. The whole point of us specifically doing P-1 work is to do it better than a random GIMPS user would. Those exponents are actually done much worse -- the B1 chosen is appropriate if stage 2 is to be performed, but it wasn't. Even a random GIMPS user who allocates no RAM for P-1 would do a better P-1, because a much larger B1=B2 would be chosen. :sad:
|
[QUOTE=James Heinrich;289135]That is most unfortunate. The whole point of us specifically doing P-1 work is to do it better than random GIMPS user would. Those exponents are actually done much worse -- the B1 chosen is appropriate if stage2 is to be performed, but it wasn't. Even random GIMPS user who allocates no RAM for P-1 would do a better P-1 because a much larger B1=B2 would be chosen. :sad:[/QUOTE]
Damn, damn, damn.... :cry: I hadn't even thought to keep an eye out for this, never thinking it would happen. James, any idea of a Prime95/mprime (mis)configuration which would result in only stage 1 being done in this manner? Of monst's 982 P-1 completions, 279 are with B1==B2. And all but six have already been returned to PrimeNet. However, in addition, Bdot, 1997rj7, kurly, and Stef42 have also had one each with B1==B2, and Jerry Hallett has had two. Any suggestions on how we can avoid this in the future? Should I reissue for another P-1 run in such cases? In the case of a better run the second worker would get the credit on PrimeNet (and G72). Edit: I have sent a PM to monst bringing this to his attention. |
[QUOTE=chalsall;289146]However, in addition, Bdot, 1997rj7, kurly, and Stef42 have also had one each with B1==B2, and Jerry Hallett has had two.[/QUOTE]
:redface: I checked all my P95 computers; I don't know why this happened. All have plenty of memory allocated. |
[QUOTE=flashjh;289147]:redface: I checked all my P95 computers, I don't know why this happened? All have plenty of memory allocated.[/QUOTE]
That's why I asked James if it might be a Prime95 configuration (or other) issue. So you can check, your two were: 49232621,560000,560000 49235027,560000,560000 |
[QUOTE=chalsall;289148]That's why I asked James if it might be a Prime95 configuration (or other) issue.[/QUOTE]
I know, but it still stinks. [QUOTE]So you can check, your two were: 49232621,560000,560000 49235027,560000,560000[/QUOTE] I will, thanks. |
I checked all 600 P-1 results that GIMPS lists for me as NF-PM1 for the last 365 days. Only 11 of them had B2 slightly below 10M. The minimum during my GPU-2-72 time was
[CODE]45952603  NF-PM1  2011-12-26 20:49  0.0  B1=440000, B2=8800000[/CODE]
Not sure if it was a GPU-2-72 assignment. But I have not submitted any NF-PM1 result below these limits. I see two possible explanations: either this is an assignment that prime95 automatically unreserved without me noticing, and some "random" GIMPS user did the P-1, or it was a factor-found result during stage 1, and therefore did not have a stage 2. I usually assign enough memory to mprime/prime95; I already feel bad if I see an "E=6" instead of the usual "E=12". Please let me know my bad one, I still have all logs. If I did it, then I'll find it. |
[QUOTE=Bdot;289160]Not sure if it was a GPU-2-72 assignment.
Please let me know my bad one, I still have all logs. If I did it, then I'll find it.[/QUOTE] Yes, 45952603 was a GPU72 assignment, which had already been TFed to 72. Your "bad one" was 49652243,640000,640000, which has been TFed to 71. I am confused by this. The "PFactor=N/A,1,2,[EXPONENT],-1,[TFLEVEL],2" line is supposed to make Prime95 choose optimal bounds, and the fact that five of you have had one (or two) such situations while the rest were "nominal" is strange. In addition, based on the data from James' site, it appears only one of monst's machines is doing this. [B][I][U]Edit[/U][/I][/B]: [B]WAIT[/B]!!! 49652243 was one of the ones which experienced the "reassignment" bug from the end of last year. The B1=B2 result was submitted to PrimeNet by monst. Possibly the other five were as well, although it's strange that PrimeNet didn't accept the better P-1 work. Let me drill down and report back. But if you (Bdot) could look at what your logs show for both examples, it would be useful as well. |
Not enough memory for stage2?
-- Craig |
[QUOTE=chalsall;289146]James, any idea of a Prime95/mprime (mis)configuration which would result in only stage 1 being done in this manner?[/quote]Not offhand, no. The only Prime95 configuration that should affect this would be amount of RAM allocated, to determine whether stage2 should be done, and with what bounds. But if it was doing stage1-only due to lack of RAM, it would pick much higher B1=B2: Normal P-1 would be B1 ~500,000 and B2 ~12,000,000; with B1=B2 it would be ~1,200,000.
There are, of course, a plenitude of ways to "misconfigure" the worktodo to make it behave that way; the most obvious of which is to specify explicit bounds with Pminus1= instead of the usual Pfactor= lines. [QUOTE=chalsall;289146]Any suggestions on how we can avoid this in the future? Should I reissue for another P-1 run in such cases? In the case of a better run the second worker would get the credit on PrimeNet (and G72).[/QUOTE][QUOTE=chalsall;289163]49652243 was one of the ones which experienced the "reassignment" bug from the end of last year. The B1=B2 result was submitted to PrimeNet by monst. Possibly the other five were as well, although it's strange that PrimeNet didn't accept the better P-1 work.[/QUOTE]Note that PrimeNet is a little weird in that it doesn't consider a subsequent P-1 run "better" unless B1.new > B1.old. So, for example, if an exponent (~50M, TF=70) was poorly done once, with B1=B2=500,000 (=[url=http://mersenne-aries.sili.net/prob.php?exponent=50000000&b1=500000&b2=500000&factorbits=70]2.88%[/url]) and then re-done later with B1=490,000; B2=11,500,000 (=[url=http://mersenne-aries.sili.net/prob.php?exponent=50000000&b1=490000&b2=11500000&factorbits=70]4.93%[/url]), PrimeNet will ignore the new result even though it's arguably a better P-1, because it doesn't meet the definition of "better" = "bigger B1". |
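To make the point concrete, here is a small Python sketch comparing the two results from James's example under both definitions of "better". The class and function names are hypothetical; this is not PrimeNet's actual code, and the probabilities are simply the ones quoted above from mersenne-aries.

```python
# Sketch of the two "is this P-1 better?" rules, using the example
# numbers above (exponent ~50M, TF=70). Names are made up -- this is
# an illustration, not PrimeNet's real logic.
from dataclasses import dataclass

@dataclass
class PminusOne:
    b1: int
    b2: int
    prob: float  # chance of finding a factor, e.g. from mersenne-aries prob.php

old = PminusOne(b1=500_000, b2=500_000, prob=0.0288)     # stage 1 only (B1=B2)
new = PminusOne(b1=490_000, b2=11_500_000, prob=0.0493)  # proper stage 2

def better_by_b1(candidate: PminusOne, existing: PminusOne) -> bool:
    """PrimeNet's current rule: only a strictly larger B1 counts."""
    return candidate.b1 > existing.b1

def better_by_prob(candidate: PminusOne, existing: PminusOne) -> bool:
    """The suggested rule: compare the expected chance of finding a factor."""
    return candidate.prob > existing.prob

print(better_by_b1(new, old))    # False -- the rerun would be ignored
print(better_by_prob(new, old))  # True  -- yet it's arguably the better run
```

Under the B1-only rule the stage-2 rerun loses despite nearly doubling the odds of a factor, which is exactly the oddity James describes.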
Partial (but not full) explanation...
Hey all.
OK, this is a "mash-up" of a query from the GPU72 database interwoven with queries from PrimeNet: [CODE]
49652243 - 2^71 B1=775000 by "ANONYMOUS" on 2012-02-09
+---------------------+----------+---------------+
| Assigned            | FactFrom | DisplayName   |
| 2011-11-25 14:05:56 | 71       | Bdot          |
| 2011-12-31 21:13:18 | 71       | monst         |
56161373 - 2^71 B1=775000 by "ANONYMOUS" on 2012-02-09
| 2012-01-27 01:34:21 | 71       | 1997rj7       |
49152443 - 2^72 B1=415000, B2=7573750 by "kurly" on 2011-11-24
                B1=775000 by "ANONYMOUS" on 2011-12-17
| 2011-11-23 16:18:51 | 74       | kurly         |
45571601 - 2^72 B1=510000 by "Stef42" on 2011-12-13
| 2011-12-12 21:54:10 | 72       | Stef42        |
49232621 - 2^72 B1=560000 by "Jerry Hallett" on 2012-01-04
| 2012-01-01 14:46:15 | 72       | Jerry Hallett |
49235027 - 2^72 B1=560000 by "Jerry Hallett" on 2012-01-04
| 2011-12-31 23:52:46 | 72       | Jerry Hallett |
[/CODE] So, only Bdot's candidate experienced the reassignment bug. 1997rj7's result doesn't show up, while kurly's does. And, strangely, for kurly's [URL="http://www.mersenne.org/report_exponent/?exp_lo=49152443"]49152443[/URL] PrimeNet reports that the "Stage 1 only" effort was "better". Lastly, for Stef42 and Jerry the "Stage 1 only" was what they actually did. Any theories anyone? One explanation based on kurly's result is that 1997rj7's result was submitted after ANONYMOUS', and PrimeNet rejected it. But we still have a puzzle as to why (at least) Stef42 and Jerry's machines (Bdot and 1997rj7's might have as well) did these unusual runs.... :sad: |
I'll look up the results when I get home in a bit to see if I can track it down.
|
[QUOTE=James Heinrich;289168]
Note that PrimeNet is a little weird in that it doesn't consider a subsequent P-1 run "better" unless B1.new > B1.old. So, for example, if an exponent (~50M, TF=70) was poorly done once, with B1=B2=500,000 (=[url=http://mersenne-aries.sili.net/prob.php?exponent=50000000&b1=500000&b2=500000&factorbits=70]2.88%[/url]) and then re-done later with B1=490,000; B2=11,500,000 (=[url=http://mersenne-aries.sili.net/prob.php?exponent=50000000&b1=490000&b2=11500000&factorbits=70]4.93%[/url]), PrimeNet will ignore the new result even though it's arguably a better P-1, because it doesn't meet the definition of "better" = "bigger B1".[/QUOTE] Is that something you can change while modifying the TF result parser? |
In case it wasn't clear, I'm kurly. If I can be of any assistance in tracking this down, let me know. But it looks like the problem occurred after I turned in my results.
I like the 'top 100 factors found' list; I noticed I have a bunch of them. :smile: |
[QUOTE=Dubslow;289173]Is that something you can change while modifying the TF result parser?[/QUOTE]It's certainly something I'll look at. To me, factor probability is a more sensible measure of "better" than simple B1.
|
[QUOTE=KingKurly;289176]In case it wasn't clear, I'm kurly. If I can be of any assistance in tracking this down, let me know. But it looks like the problem occurred after I turned in my results.[/QUOTE]
Thanks for the offer, kurly. But no need -- yours was a nominal situation. |
Also chalsall, for a P-1 factor, your GD formula is wrong.
[code]**52259027 72 P-1 2012-02-07 05:16:51 2012-02-12 19:14:13 3.685[/code] should be [code]52108879 72 P-1 2012-02-07 05:16:51 2012-02-12 17:15:13 2.893[/code] I checked the PrimeNet credit, and it gave 2.9, not 3.7 for the factor. (Found in Stage 2) |
[QUOTE=James Heinrich;289179]It's certainly something I'll look at. To me, factor probability is a more sensible measure of "better" than simple B1.[/QUOTE]
Indeed. And further, as it stands it means that G72 couldn't reissue such work and automatically detect that it had been completed (if it were agreed that this would be worthwhile), nor would anyone independently doing such work receive PrimeNet credit unless their B1 happened to be larger than the old one. (Thanks, James, for bringing that issue to our attention.) |
[QUOTE=Dubslow;289183]Also chalsall, for a P-1 factor, your GD formula is wrong.[/QUOTE]
Hmmm... OK. Not a super big deal in my mind, but I'll add that to my Todo list to look at in the future. |
[QUOTE=chalsall;289189]Hmmm... OK. Not a super big deal in my mind, but I'll add that to my Todo list to look at in the future.[/QUOTE]
That was actually the original reason for visiting the thread today, but then I was forcibly reminded of my post from yesterday and forgot :razz: |
My apologies to everyone on this. I just realized this was happening earlier this week.
My machine with a Sandy Bridge processor was initially set up to run DC's while testing version 27.X of Prime95. At some point after submitting several successful double checks, I began doing P-1 for GPU72 on some of the cores. What I forgot to do was raise the daytime and nighttime memory from their defaults of 8 MB. I rectified this once I became aware of it. Unfortunately, all that time it had been setting B1=B2 for its P-1 jobs. Sorry again, -- Rich |
[QUOTE=diamonddave;288924]I'll check those tonight when I get home... will keep you guys posted
thx[/QUOTE] Still waiting on the last result. But I found each of those factors (again), so I guess my machine is sound and I'm just unlucky. The only cure I know for this is more TF, so that I eventually break the spell. Back to work! |
[QUOTE=chalsall;289148]That's why I asked James if it might be a Prime95 configuration (or other) issue.
So you can check, your two were: 49232621,560000,560000 49235027,560000,560000[/QUOTE] OK, so I don't know exactly what happened, but the computer that ran these two had a problem. From the results.txt file: [CODE]
Spool file is corrupt.  Attempting to salvage data.
[Tue Jan 03 21:29:05 2012]
UID: flashjh/Server, M49235027 completed P-1, B1=560000, We4: F9C76734
[Wed Jan 04 05:00:23 2012]
UID: flashjh/Server, M49232621 completed P-1, B1=560000, We4: F9AC6724
[Thu Jan 05 06:14:36 2012]
UID: flashjh/Server, M49235293 completed P-1, B1=560000, B2=9737500, E=12, We4: F95550DE
[Fri Jan 06 23:12:38 2012]
[/CODE] Starting with the two that ran after the error, both ran with B1=B2. After that it has been fine. It completes a P-1 about every day with E=12. I'm sure it's something I did, but I don't know what happened. |
[QUOTE=flashjh;289220]OK, so I don't know exactly what happened, but the computer that ran these two had a problem.
[snip] I'm sure it's something I did, but I don't know what happened.[/QUOTE] Thanks for drilling down flashjh; and please don't be sure it was something you did unless you know what you did which caused this... And thanks to Dubslow for bringing this issue forward in the first place. "The most exciting phrase to hear in science, the only one that heralds new discoveries, is not 'Eureka!', but rather, 'Hmm... that’s funny...'." - Isaac Asimov The same can be said about software. Except we simple stupid humans are the creators.... :smile: |
[QUOTE=chalsall;289233]Except we simple stupid humans are the creators.... :smile:[/QUOTE]Capt. James T. Kirk: [url=http://en.memory-alpha.org/wiki/The_Changeling_%28episode%29]"I admit biological units are imperfect, but a biological unit created you."[/url]
|
mersenne.info:
How hard would it be to count exponents assigned? I'm thinking that if you added that to the [url=http://mersenne.info/exponent_status_line_graph_1/2/50000000/]graph[/url], then it'd be [i]much[/i] easier to see 'where the wave is'. Thoughts? |
[QUOTE=Dubslow;289338]How hard would it be to count exponents assigned? I'm thinking that if you added that to the [url=http://mersenne.info/exponent_status_line_graph_1/2/50000000/]graph[/url], then it'd be [i]much[/i] easier to see 'where the wave is'.[/QUOTE]
It would be [B][I][U]very[/U][/I][/B] expensive from the perspective of PrimeNet. The queries which reveal that information at the resolution Mersenne.info requires take a very long time, and put a heavy load on PrimeNet. |
Eyeballing the work distribution map, I'd say that leading edge of LL is just past 59M.
[CODE]
56000000 56105 | 35055 3 1731 12 19304 | 977 582 17771 1 | 47 1683 |
57000000 55901 | 34877 1 408 2 20613 | 7348 333 12964 | 6 390 |
58000000 55978 | 34535 1 51 4 21387 | 2303 482 11141 | 7533 49 |
59000000 55801 | 34401 6 21394 | 1624 148 95 | 18792 737 5 |
60000000 55930 | 34400 7 21523 | 240 484 4 | 18706 2090 6 |
61000000 55555 | 33886 14 21655 | 822 21 1 | 20704 107 13 |
62000000 55706 | 34136 21570 | 798 1 3 | 20724 44 |
[/CODE] |
[QUOTE=axn;289372]Eyeballing the work distribution map, I'd say that leading edge of LL is just past 59M.[/QUOTE]
Actually, no. The leading edge is currently hovering around 58.3M (and is slowly backing down from there as GPU72 completes and releases work back to PrimeNet). 85 of the 95 LL assignments you see in 59M are actually Spidy's. Remember that Spidy reserves the work as LL (or DC) for reasons I won't bore you with, so to get an accurate count of actual LL (or DC) assignments from PrimeNet you must subtract the related "xxM Reserved from PrimeNet" count from the GPU72 Available Assignments report. |
[QUOTE=nucleon;288824]I only have 1x GTX460 doing DCTF currently. But over the course of the next 7days or so, the rest of the farm will migrate to DCTF work.
Post the CUDA4.1 upgrade of mfaktc I was doing 1800GHzdays/day (ish). (That's 1x460GTX with DCTF, the rest LLTF). 5 GMT day average based on figures taken from [url]http://www.mersenne.org/results/[/url] Once my whole farm is fully doing DCTF, it'll be interesting to see the GHZ-days/day output.[/QUOTE] Indeed it will be interesting. Because I'm sure everyone will be interested in the results, I've added a new graph to everyone's [URL="http://www.gpu72.com/reports/worker/fc9f090d094ad7c7eff10a39caffe3a4/"]Individual reports -- GHz Days per Day[/URL]. It's still rough (or as I like to say, "Not painted yet"). For example, if someone has sparse results the X labels may skip days. And it doesn't separate GPU from CPU work on different scales; it appears GD::Graph doesn't handle that well for cumulative bar graphs. But, overall, it gives a good idea as to what everyone is up to on a day-to-day basis. |
[QUOTE=chalsall;289371]It would be [B][I][U]very[/U][/I][/B] expensive from the perspective of PrimeNet. The queries which reveal that information at the resolution Mersenne.info requires take a very long time, and put a heavy load on PrimeNet.[/QUOTE]
Not from /assignments, but just from (say) report_exponent. If it's assigned it has a line that says "Assigned to _____ on ______" (and if it's not an LL assignment, it'll say "Assigned P-1 to ___ on ___"). |
[QUOTE=Dubslow;289378]Not from /assignments, but just from (say) report_exponent . If it's assigned it has a line that says "Assigned to _____ on ______" (and if it's not an LL assignment, it'll say "Assigned P-1 to ___ on ___").[/QUOTE]
Do you think I do not know that? At the same time, do you not think I know how expensive such queries are? Such queries involve an SQL "join" between (at least) two tables. |
[QUOTE=chalsall;289380]
Such queries involve an SQL "join" between (at least) two tables.[/QUOTE] I have no idea what that means, but point taken. |
Each entry in table one is checked against each entry of table two.
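To unpack that a little: a join conceptually compares every row of one table against every row of the other and keeps the matching pairs, which is why it gets expensive on large tables. A toy Python sketch of the idea (table contents are illustrative, borrowed from exponents mentioned earlier in the thread; PrimeNet's real schema is not public):

```python
# Toy nested-loop "join": each entry in table one is checked against
# each entry of table two. Table contents are made-up examples.
assignments = [
    {"exponent": 49652243, "user": "Bdot"},
    {"exponent": 49232621, "user": "Jerry Hallett"},
]
results = [
    {"exponent": 49232621, "b1": 560000, "b2": 560000},
    {"exponent": 45571601, "b1": 510000, "b2": 510000},
]

# Roughly: SELECT * FROM assignments JOIN results USING (exponent)
joined = [
    {**a, **r}
    for a in assignments   # every row of table one...
    for r in results       # ...compared against every row of table two
    if a["exponent"] == r["exponent"]
]
print(joined)  # the single matching row (Jerry Hallett's 49232621)
```

Real database engines use indexes and smarter join algorithms, but without those the cost grows with the product of the two table sizes, which is the load chalsall is worried about.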
|
@chalsall: man, the "work limit" is killing me, can't you set a "computing power" attribute to each user on gpu272 site? I can output almost 750 GHz-days per day if I only do TF/mfaktc. But I don't, my main work is LLDC and sometimes 1st time LL on the GPU, which does not give the "big credit" thing.
The point is that from time to time I do a few days of LLTF. The reason is not only "credit oriented", it is also objective: [LIST][*]filling the GPUs with CudaLucas has the advantage that it leaves the CPUs free to do other tasks (daily job, P-1, etc), but it has the disadvantage that it kills all the video power, the windozes move like in replay and there is no way to use any CAD software, so I use this mainly when my days are boring with programming jobs (or plenty of work in "text mode", word, excel, bla bla)[*]when my days are busy-boring with PCB or mechanical CAD work (or well, when I want a few steps up in your top lists :smile:, if you insist), I prefer mfaktc. This is intensively killing the CPU resources, and P95 speed goes to half, but it has the advantage that - no matter how many copies I launch - it cannot maximize the GPU. With 2 or 3 copies for each GPU, they'll be no more than 85-95% busy and the ~10% free is quite enough for my screen to move "reasonably" smooth in protel or acad.[/LIST]A good compromise would be to have a CudaLucas switch to limit the GPU occupancy to 85%, but I have no idea how this could be realized practically. In that case, I wouldn't need to "go for credit" :P But as it is now, you see my problem: for the last 20 days I did CL only, almost zero GHz-days per day, and my "30-day average output power" decreased from the 300 or 400 it was at its maximum 20 days ago to almost zero. So, I reserved 100 assignments for my first card; it says I have work for 29 days (well, not yet 30), so I can reserve another 100 for the second card, and now I have "scheduled work for 58 days" based on my "30-days average output power", which is a big bullshit, and there is no way to reserve more. Everything I scheduled will be finished by the end of this week and I won't be around to reserve more, and I had to split each bunch of 100 into two bunches of 50, otherwise GPUs 3 and 4 would stay empty. So, my question was: can't you give each user some attribute like "daily output power"?
And if so, set mine to 700 or 750 GHz-days per day, so I would be able to reserve exponents for all cards for 30 days, when I have such cards available. Most of us do not put ALL the power ALL the days into the GPU272 project. But occasionally we want to do that, and we cannot, because the system considers our "average output" too low for how many reservations we want. |
[QUOTE=LaurV;289425]So, my question was: can't you give each user some attribute like "daily output power"? And if so, set mine to 700 or 750 GHz-days per day, so I would be able to reserve exponents for all cards for 30 days, when I have such cards available. Most of us do not put ALL the power ALL the days into the GPU272 project. But occasionally we want to do that, and we cannot, because the system considers our "average output" too low for how many reservations we want.[/QUOTE]
I currently don't have that ability designed into the system -- it's a simple heuristic based on past performance, with the option of exempting users from the limits test if I trust them enough. I trust you (and several others) -- you are now at a trust level of three. I.e., as of five minutes ago you can allocate as much work as you'd like. :smile: I didn't really want to have this limit sub-system at all, but a couple of users insist on trying to grab thousands of low candidates as they become available -- well beyond what they've demonstrated they can do in a month -- and only pledge to take them up one bit level. Additionally, I'm afraid of a new user (or many new users ("Slashdot effect")) suddenly showing up on the scene and reserving thousands of candidates which then don't get any work done on them, thus wasting a month's time until they auto-expire. I agree the heuristic is not optimal at the moment for users like you, and have been thinking about how to improve the intelligence of the algorithm. But for the time being (as it says on the reservation pages) just ask me if you wish to be exempted and I will (usually) oblige. |
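For the curious, the kind of limit heuristic chalsall describes might look roughly like this. This is a guess at the shape of the logic based on the thread (a 30-day budget derived from the user's demonstrated average, with a trust-level exemption); the actual GPU72 code is not public and all names here are made up.

```python
# A guessed sketch of GPU72's reservation-limit heuristic, as described
# in the thread. Function and parameter names are hypothetical.
def may_reserve(requested_ghzdays: float, avg_daily_output: float,
                queued_ghzdays: float, horizon_days: int = 30,
                trust_exempt: bool = False) -> bool:
    """Allow a reservation only while the user's queue stays within
    roughly `horizon_days` of work at their demonstrated daily average
    output. Trusted users bypass the check entirely."""
    if trust_exempt:
        return True
    budget = horizon_days * avg_daily_output
    return queued_ghzdays + requested_ghzdays <= budget
```

Under such a rule, a user whose 30-day average has decayed toward zero (as in LaurV's case) can reserve almost nothing, however much hardware they are about to point at the project -- hence the trust-level escape hatch.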