[QUOTE=petrw1;411860]To be fair "airsquirrels" is actually averaging more per day (12K) than anonymous (10K) though some LLTF and some DCTF[/QUOTE]
True. In the beginning Anonymous was doing slightly more. Then Rocky built his NVidia rig for cheap. |
[QUOTE=Madpoo;411850]I'll second chalsall and suggest that you might consider doing some DC work. The spread between first and second time checks seems to be growing.
I'm only doing DC work. I'm not that concerned about finding the next prime, otherwise I'd be doing all first-time checks, but then again I do have a theory that we missed a prime between M47 and M48, lurking somewhere in that 42M-57M range. I've been trying to find the likeliest cases where LL was done wrong the first time. If you're interested in helping out with that project, I can send you a lot of work.

One little project I have is to go through all of the exponents between 35M and 58M that have already been checked twice and resulted in a mismatch. There's over 5000 of them though, so it's no small undertaking and I won't complete that by myself. In most of those cases, one or the other is correct, but then we'll be able to say "this other CPU had a bad result" and we may be able to point to other work from that CPU and do advance double-checking on it to find other bad results... work that hasn't been DC'd already.

Unfortunately that's a lot of manual work... getting exponents and manually getting them assigned to yourself, updating worktodo files, etc. I do it with my systems, but then I have some batch files set up to help out so it's not too much overhead. Anyway, if it's something you're interested in, find that other thread: [URL="http://www.mersenneforum.org/showthread.php?t=20372"]http://www.mersenneforum.org/showthread.php?t=20372[/URL][/QUOTE] I WOULD like to find the next one, would be neat, but I set all my LL workers to do 2% DC for the moment; may adjust it later. I figure that'll be effective to help DC. |
In about a month I will have cleared all the DCTF work above 58M and below 100M. I'm devoting about 500 GHz-d/d to it. There are about 400 exponents needing work.
I imagine more will come about as old high assignments are completed though. |
[QUOTE=Mark Rose;412712]In about a month I will have cleared all the DCTF work above 58M and below 100M. I'm devoting about 500 GHz-d/d to it. There are about 400 exponents needing work.[/QUOTE]
Coolness. There is not much work needed there, but it's worth the effort just to get it off the books. And, interestingly, we're only about six months away from getting all DCTF work done. Very much look forward to that. |
[QUOTE=Mark Rose;412712]In about a month I will have cleared all the DCTF work above 58M and below 100M. I'm devoting about 500 GHz-d/d to it. There are about 400 exponents needing work.
I imagine more will come about as old high assignments are completed though.[/QUOTE] Crazy! I didn't even realize that range was being looked at for DCTF. |
[QUOTE=petrw1;412717]Crazy! I didn't even realize that range was being looked at for DCTF.[/QUOTE]
I got curious last night. Thanks to George's fix a couple hours ago I was able to grab all the DCTF work in that range. It's all queued up. Of course I bet more will trickle in as old LL assignments are completed. |
My systems have unfortunately been down for the week due to some contamination in the water loop clogging the water blocks. I hope to have that all sorted and back up tomorrow, with some upgrades :)
Work from my air cooled systems should complete soon. If Chris is happy with the new P-1 feeding strategy and willing to take the hit to LLTF I'm willing to throw a very significant amount of my firepower at DCTF and see if we can't get it off the books. |
[QUOTE=airsquirrels;412781]My systems have unfortunately been down for the week due to some contamination in the Waterloop clogging water blocks. I hope to have that all sorted and back up tomorrow, with some upgrades[/QUOTE]
This is a constant problem with water cooling. Adding acid, or even a buffer, might help. [QUOTE=airsquirrels;412781]Work from my air cooled systems should complete soon. If Chris is happy with the new P-1 feeding strategy and willing to take the hit to LLTF I'm willing to throw a very significant amount of my firepower at DCTF and see if we can't get it off the books.[/QUOTE] It's your kit. Do whatever you want. But what is most important right now is LLTF. We're more than a year ahead of DCTF, but only ~45 days ahead of LL. |
[QUOTE=chalsall;412788]We're more than a year ahead of DCTF, but only ~45 days ahead of LL.[/QUOTE]
If you were an Economics Major you could conclude you can run DCTF for 44 days ... HAHA |
Waiting for anonymous' weekly drop today, I hope he will not miss it (in his "history" he missed the weekly drop only once) :w00t:
|
[QUOTE=LaurV;413437]Waiting for anonymous' weekly drop today, I hope he will not miss it (in his "history" he missed the weekly drop only once) :w00t:[/QUOTE]
Anon has two weeks' worth of assignments, so we may have to wait another week. |
[QUOTE=Mark Rose;413465]Anon has two week's worth of assignments, so we may have to wait another week.[/QUOTE]
Well, we didn't have to. 74 factors reported at 18:38... :w00t: |
[QUOTE=Mark Rose;413465]Anon has two week's worth of assignments, so we may have to wait another week.[/QUOTE]
One more reason why I said it. If you plan to report every day, wouldn't you take assignments for at least two days? How about if you finish the work and run out? I always do it like that: take at least double the amount of my reporting range. |
I report assignments the minute they are done, but I do keep at least a one day buffer.
I don't believe Anon was keeping a buffer before. |
Update #18 - October 30, 2015
622,656 in the last month. (up yet another 30%)
29 different contributors
794 Factors found
59,688 P1/LL/DC work saved
24 contributors currently have assignments
18,310 Assignments out
187 estimated days to completion

Just over 6 months .. May 2016 ... as long as we don't lose ANONYMOUS and if AirSquirrels comes back to DCTF soon. OR we need other BIG players OR those remaining will just need a little more time. |
Everything at 70 now assigned...
Just so everyone knows, all DCTF candidates at 70 bits have now been assigned.
I have adjusted the assignment code such that if MISFIT or floop is requesting assignments, the pledge is automatically bumped to 72 if it's below that. This is to ensure automatically fed GPUTF'ers are not starved. |
I took a big chunk of DCTF assignments this morning that I will be working for the next ten days or so before switching back to LLTF. That should be taking us up close to 51M for DCTF.
I have two more systems coming online shortly that will be on LLTF, so I'm not totally abandoning that group. |
[QUOTE=airsquirrels;414413]I have two more systems coming online shortly that will be on LLTF, so I'm not totally abandoning that group.[/QUOTE]
Thanks much! :tu: |
[QUOTE=chalsall;414411]Just so everyone knows, all DCTF candidates at 70 bits has now been assigned.
I have adjusted the assignment code such that if MISFIT or floop is requesting assignments, the pledge is automatically bumped to 72 if it's below that. This is to ensure automatically fed GPUTF'ers are not starved.[/QUOTE] OK with me, I only have 1 card on DCTF now, a HD7970, and that is there because mfakto has a penalty on it, for higher bits (the subject was much debated before). It produces ~480GHzD/D at 71, and it may go lower, like 450 or even 430 for higher bits. But those 430 you will still get, 24/7. I have no other utility for the card right now (that computer has a core2 cpu and it does nothing else, effectively. It is only powered on for DCTF). |
My 58M to 105M DCTF is bearing fruit.
The highest so far is [url=http://www.mersenne.org/report_exponent/?exp_lo=98174081&full=1]M98174081[/url]. I should have all the existing work done in about 10 days. Of course more keeps trickling in as suboptimally TF'ed assignments complete LL. It might be a worthwhile effort to TF all suboptimally TF'ed exponents while they are LL assigned to clean up. While the remaining LL would be pointless, we should still do the work before DCLL anyway. Thoughts? |
2 AWOL
Not atypical: ANONYMOUS has no results reported since Oct 26, but today is a Friday, so fingers crossed.
Unusual though is no results from AirSquirrels since Nov 2. Here's hoping for a couple big WHAMMMSSSS!!!!! soon |
[QUOTE=petrw1;415136]Not atypical: ANONYMOUS has no results reported since Oct 26, but today is a Friday, so fingers crossed.
Unusual though is no results from AirSquirrels since Nov 2. Here's hoping for a couple big WHAMMMSSSS!!!!! soon[/QUOTE] ANONYMOUS right on schedule... 1 down 1 to go. |
[QUOTE=Mark Rose;412712]In about a month I will have cleared all the DCTF work above 58M and below 100M. I'm devoting about 500 GHz-d/d to it. There are about 400 exponents needing work.
I imagine more will come about as old high assignments are completed though.[/QUOTE] To what bit level are you taking these exponents? |
[QUOTE=monst;415298]To what bit level are you taking these exponents?[/QUOTE]
58M to 66M: 73 bits
66M to 84M: 74 bits
84M to 105M: 75 bits

I found a handful at 103M, so I decided to take those to 75 bits, too. I picked bit levels based on James' cutoff points graph for the [url=http://www.mersenne.ca/cudalucas.php?model=12]GTX 580[/url], which constitutes most of my TF power. One or two a day keep trickling out in the 76.9M range. They were trial factored to 73 bits.

Edit: Everything below 73M is done. |
I have my old GTX 570M running MISFIT.
I don't really know what it's currently testing. Probably whatever makes sense? Still, I'm extremely annoyed that I haven't had a successful (factoring) result in months (not really active 24/7). Maybe I should redo a test with a known factor to recheck the graphics card? |
I've had as many as 400 unsuccessful attempts before finding a factor....anyway here are a few of my recent finds if you want to recheck. These are all TF 71-72 Bits
48705029 F 3752369118463113049481
48573023 F 4429321718573089848079
48391009 F 2548192211878674122353
48830011 F 3758575472036920487639
48250009 F 2637062217059701172081 |
[QUOTE=sonjohan;415539]I have my old GTX 570M running MISFIT.
I don't really know what it's currently testing. Probably whatever makes sense? Still, I'm extremely annoyed that I haven't had a successful (factoring) result in months (not really active 24/7). Maybe I should redo a test with a known factor to recheck the graphics card?[/QUOTE] You can test mfaktc on the command line by running `mfaktc.exe -st`, which runs the built-in self-test against known factors. |
[QUOTE=Mark Rose;415314]58M to 66M: 73 bits
66M to 84M: 74 bits
84M to 105M: 75 bits[/QUOTE] All the DCTF between 58M and 105M is presently done. |
[QUOTE=Mark Rose;416248]All the DCTF between 58M and 105M is presently done.[/QUOTE]
Thanks Mark. I have expanded the [URL="https://www.gpu72.com/reports/current_level/"]Current Trial Factoring Depth[/URL] report such that the DCTF table now goes up to 70M. I haven't yet properly set the upper cells to be appropriately yellow, but the numbers are correct. This is updated every hour at about x:25. |
Good Job Chris!
[QUOTE=chalsall;416260]I haven't yet properly set the upper cells to be appropriately yellow[/QUOTE] Please do! My vote* goes for:
- 48M and below go to 72 only (currently, some go to 73, which may be too much)
- 49M up to 64M go to 73 (currently correct for 49M, others go too high)
- 65M and above go to 74 (currently too high; here I follow the LLTF table, decreasing by 1; if it were up to me, 65M would go to 73 too)

Otherwise it is a bit "depressing" and people doing the "rip dctf" will leave... Or is that intended? :razz:

[edit: please don't forget [URL="https://www.gpu72.com/reports/estimated_completion/primenet/"]this[/URL] too]

-------------
* it is a "vote" because it depends on the system and the cards one is using; for me, those are the numbers. |
[QUOTE=LaurV;416284]Good Job Chris!
Please do! My vote* goes for:
- 48M and below go to 72 only (currently, some go to 73, which may be too much)
- 49M up to 64M go to 73 (currently correct for 49M, others go too high)
- over and including 65M go to 74 (currently too high, here I follow the LLTF table, decreasing 1; if up to me, 65M should be to 73 too)
Otherwise it is a bit "depressing" and people doing the "rip dctf" will leave... Or is that intended? :razz: [edit: please don't forget [URL="https://www.gpu72.com/reports/estimated_completion/primenet/"]this[/URL] too] ------------- * it is a "vote" because it depends on the system and the cards one is using; for me, those are the numbers.[/QUOTE] Indeed. For me, I should only go to 73 bits at 52M and above, and 74 bits at 66M and above. The current level chart shows everything above 63M is already at 74 bits, so we're good there at least.

Hopefully the DCTF resources will stay with the project once DCTF is done. |
Big Two awfully quiet lately ....
Maybe Friday?
|
[QUOTE=petrw1;416568]Maybe Friday?[/QUOTE]
Maybe. Hopefully... It is interesting that several "big guns" are holding their breath (read: have assignments not yet reported). On the other hand, they might have lost interest. I do hope that's not the case, and that they're simply keeping us all on the edge of our chairs for their own amusement.... |
what happens on Friday?
|
The weekend begins... :smile:
|
[QUOTE=lycorn;416648]The weekend begins... :smile:[/QUOTE]
ROFL... :smile: |
[QUOTE=dragonbud20;416646]what happens on Friday?[/QUOTE]
Anonymous has a habit of reporting enormous batches of work on Fridays. |
[QUOTE=Mark Rose;416652]Anonymous has a habit of reporting enormous batches of work on Fridays.[/QUOTE]
Too busy fighting ISIS? |
[QUOTE=0PolarBearsHere;416674]Too busy fighting ISIS?[/QUOTE]
LOL! :grin: |
[QUOTE=0PolarBearsHere;416674]Too busy fighting ISIS?[/QUOTE]
Too busy trial factoring 137,934 GHz-days in the last two weeks, it seems. Nearly 1 PHz in three months. Nuts. |
[QUOTE=Mark Rose;416652]Anonymous has a habit of reporting enormous batches of work on Fridays.[/QUOTE]
And here we go... [url]http://www.gpu72.com/reports/worker_exact/7fac3b9d4f6ca691282b08af536d6adf/[/url] |
[QUOTE=petrw1;416746]And here we go...
[url]http://www.gpu72.com/reports/worker_exact/7fac3b9d4f6ca691282b08af536d6adf/[/url][/QUOTE] Indeed! I hope AirSquirrels can return to the fray. |
Well... Is he holding back factors?
Or (I am a bit afraid) he is pushing those cards too hard now. He is finding 16% fewer factors than expected, versus the previous weeks where he had more factors than expected (about 5%-7% more at the last report). Looking at the increase in the graphs, it does not seem that he added new hardware, but rather that he regularly increased the clock (i.e. no jump, no threshold, just a uniform increase), and a few percent of clock increase over the last two fifths of the graph would correspond to the 20% fewer factors, if the latest results are crappy. I hope that I am wrong and he is not pushing those cards too hard for credit, or out of the wish to get rid of those DCTFs...

Edit: OTOH, thinking more deeply about it, one in 73 is a pretty darn good score, and Chris' numbers for expected factors may be misleading. I objected in the past to the fact that Chris computes the expected numbers from the history, and not from the theoretical formulas. Now, which method is better, I have no freaking idea... :redface: |
The DCTF range is a little weird with expected factors anyway. If I do LLTF I get the expected amount, but I'm always under if doing DCTF. No change in hardware or clocks.
|
[QUOTE=Mark Rose;416879]If I do LLTF I get the expected amount, but I'm always under if doing DCTF. No change in hardware or clocks.[/QUOTE]
This probably has to do with almost all DCTF being done post-P-1, while most LLTF is done pre-P-1 (at least at the lower bit levels). The linear regression used to calculate the expected factors found is based on the aggregate results across both domains. Originally this wasn't a problem, but we've been at this for so long that DCTF is now working where LLTF originally did. When I have some time (not soon) I'll look at breaking out the LR calculations to be separate.

Lastly, to speak to LaurV's comment... I also don't know which is better, theoretical or empirical. But since I tend to work in empirical domains, that's my comfort zone. And, by definition, theoretical is theoretical; there's no proof (that I know of) that 1/bit-level is reasonable, and this doesn't factor in (no joke intended) previous P-1 work done. |
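The empirical-vs-theoretical gap discussed here can be sketched in a few lines. This is a hypothetical illustration only: the per-bit rates below are made-up stand-ins (the thread quotes roughly "1 in 90" to "1 in 100" for post-P-1 DCTF, and the "1 in B" rule for no P-1), and GPU72's actual regression is not reproduced here.

```python
# Sketch: expected factors = attempts per bit level * per-bit success rate.
# Rates and attempt counts are illustrative, not GPU72's fitted values.
empirical_rate = {71: 1/95, 72: 1/100, 73: 1/105}   # post-P-1 DCTF (hypothetical)
theoretical_rate = {b: 1/b for b in (71, 72, 73)}   # the "1 in B" rule, no P-1

attempts = {71: 3000, 72: 5000, 73: 2000}           # hypothetical workload

for name, rates in (("empirical", empirical_rate), ("theoretical", theoretical_rate)):
    expected = sum(attempts[b] * rates[b] for b in attempts)
    print(f"{name}: {expected:.1f} expected factors")
```

The point of the sketch: with P-1 already done, the empirical expectation comes out noticeably below the theoretical "1 in B" figure, which is why a single aggregate regression over both LLTF and DCTF drifts as the mix of work changes.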
What is the probability of a factor in the 49M-50M range between 71 and 72 bits? I did 165 tests now with no factor on a Titan Black.
I did test a single known factor in 50M between 68 and 69 bits which it found. Should I test more known factors to check that it finds them? or is this quite normal? |
[QUOTE=ATH;416908]or is this quite normal?[/QUOTE]
Quite normal. If you flip a coin 100 times and it comes up heads each time, what is the chance it will come up heads next time? 50%. Statistics is a subtle mistress... If you make a guess as to what of three answers is correct and you are shown that one of your guesses is incorrect, does it make sense to change your guess? Intuition says it makes no sense to change your guess, but in fact you are 1/3 more likely to guess correctly if you change your guess. (Why? Because you have new knowledge.) |
Yeah, I know and understand about the Monty Hall problem and at least basic statistics, but I do not remember the average probability for the factors. If it is 1% or below then 165 attempts is fine, but if it is 3+% then it is unusual.
Is it something like 1 - ln(2^71)/ln(2^72) ~ 1.39 % ? But I was actually asking about the practical facts: How many attempts does it on average take for gpu72 users in general to find a factor in this range? How many factors in 10,000 attempts for example? |
[QUOTE=chalsall;416886]When I have some time (not soon) I'll look at breaking out the LR calculations to be separate.[/quote]
Probably not worth bothering with, with DCTF expected to be done in a few months anyway. [quote] Lastly, to speak to LaurV's comment... I also don't know which is better, theoretical or empirical.[/QUOTE] What's the difference? In [i]theory[/i], there is no difference. |
[QUOTE=ATH;416910]How many factors in 10,000 attempts for example?[/QUOTE]
I've repeatedly gone 400 tests without finding a factor. Going 200 happens all the time. However, as far as I know, I've never gone 1000 tests without finding a factor. I've currently found 999 factors in 99,123 DCTF attempts, for just a hair over 1% success rate (not counting DCTF done outside of GPU72). If you look at the [url=http://www.gpu72.com/reports/factoring_cost/]Factoring Cost[/url] page, at the bottom, you can see the GPU72 stats per bit-depth for DCTF. |
Thank you, so about 1%.
400 tests at 1% without success, using the binomial distribution: 0.99^400 ~ 1.8% chance, or 1 in every 56 times you do 400 tests. Since you did ~248 runs of 400 tests, it seems reasonable that you have seen it a few times.

One test takes 22 min for me, so 100*22 min ~ 37 hours for 1 factor, which saves 1 DC. I could probably do a 50M DC in slightly under that time, but that depends on your computer speed. I'm not complaining; I know I am doing these tests voluntarily, but I'm just contemplating it. |
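The dry-streak arithmetic above can be checked with a short script (a sketch, assuming the ~1% per-attempt rate and the 400-test run length quoted in the thread):

```python
# Probability that a single TF attempt finds a factor (~1% per the thread)
p_factor = 0.01

# Chance of going 400 attempts with no factor: (1 - p)^400
p_dry_run = (1 - p_factor) ** 400
print(f"P(400 tests, no factor) = {p_dry_run:.3f}")   # 0.018, i.e. ~1.8%

# On average, one such dry streak per this many 400-test windows
print(f"~1 in every {1 / p_dry_run:.0f} runs of 400")  # ~1 in 56
```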
[QUOTE=chalsall;416909]Quite normal.
If you flip a coin 100 times and it comes up heads each time, what is the chance it will come up heads next time? 50%. Statistics is a subtle mistress... If you make a guess as to what of three answers is correct and you are shown that one of your guesses is incorrect, does it make sense to change your guess? Intuition says it makes no sense to change your guess, but in fact you are 1/3 more likely to guess correctly if you change your guess. (Why? Because you have new knowledge.)[/QUOTE] I disagree with both of these examples. In the first, if I had a coin land heads 100 times in a row I would be pretty convinced it is not a fair coin, and would assign a probability much higher than 50% that it's heads next time. In the second, we are not 1/3 more likely if we change; we are twice as likely. Our probability changes from 1/3 to 2/3, a doubling. 1/3 more likely is 4/9, as "1/3 more likely" means 4/3 as likely as we were previously. |
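The corrected switch probability (2/3, as argued above) is easy to confirm by simulation. A quick hypothetical sketch; the trial count and seed are arbitrary:

```python
import random

def monty_hall(switch: bool, trials: int = 100_000) -> float:
    """Simulate the three-door problem and return the win rate."""
    wins = 0
    for _ in range(trials):
        prize = random.randrange(3)
        pick = random.randrange(3)
        # Host opens a door that is neither the player's pick nor the prize
        opened = next(d for d in range(3) if d != pick and d != prize)
        if switch:
            # Switch to the one remaining closed door
            pick = next(d for d in range(3) if d != pick and d != opened)
        wins += (pick == prize)
    return wins / trials

random.seed(1)
print(f"stay:   {monty_hall(switch=False):.3f}")  # ~0.333
print(f"switch: {monty_hall(switch=True):.3f}")   # ~0.667
```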
[QUOTE=Mark Rose;416914]I've currently found 999 factors in 99,123 DCTF attempts, for just a hair over 1% success rate (not counting DCTF done outside of GPU72). If you look at the [URL="http://www.gpu72.com/reports/factoring_cost/"]Factoring Cost[/URL] page, at the bottom, you can see the GPU72 stats per bit-depth for DCTF.[/QUOTE]
Your numbers are "in the theory": considering how much P-1 was done in the range, the expected score is about "one in 90" to "one in 95", depending on the DCTF range and bit level, and you are somewhere at "one in a hundred". I have the same scores too. That is why I said that "1 in 73" for the Anonymous contributor is a "darn good result"; this is very close to "no P-1 done", which is indeed about "1 in B", where B is the bit level. [QUOTE=ATH;416910]Is it something like 1 - ln(2^71)/ln(2^72)[/QUOTE] Why do you make it so complicated? :razz: Didn't I say it is "1 in B"? :rant: [TEX]1-\frac{\ln(2^{71})}{\ln(2^{72})}\ =\ 1-\frac{71\ \ln\ 2}{72\ \ln\ 2}\ =\ \frac{72}{72}-\frac{71}{72}\ =\ \frac{72-71}{72}\ =\ \frac{1}{72}[/TEX] If no P-1 was done for it, and you TF from x-1 to x bits, your chance to find a factor is about 1 in x tries. It does not depend on the exponent, and it only applies for "reasonable" bit levels. For example, your chance to find a 28-bit factor for a 900M exponent is zero :razz: (why?) |
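The simplification above can be confirmed numerically in one line (a sketch using only the two expressions quoted in the thread):

```python
import math

# ATH's expression: 1 - ln(2^71)/ln(2^72)
expr = 1 - math.log(2**71) / math.log(2**72)

# LaurV's simplification: 1/B for bit level B = 72
print(expr)    # ~0.0138888..., i.e. ~1.39%
print(1 / 72)  # same value
```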
[QUOTE=LaurV;416930] For example, your chance to find a 28 bit factor for a 900M exponent is zero :razz: (why?)[/QUOTE]
Because the smallest possible factor 2*1*900M+1 is 30.7 bits. |
[QUOTE=Mark Rose;416747]Indeed!
I hope AirSquirrels can return to the fray.[/QUOTE] They are still turning in a small amount of DCTF....just nothing like before. |
[QUOTE=ATH;416969]Because the smallest possible factor 2*1*900M+1 is 30.7 bits.[/QUOTE]
Can you explain why? I don't know the answer but I would like to. |
[QUOTE=Mark Rose;416980]Can you explain why? I don't know the answer but I would like to.[/QUOTE]
You just quoted the explanation yourself. Factors of any Mersenne number have to be of the form 2*k*p+1 for a positive integer k and the prime exponent p, so for a p=900M number the smallest possible factor is for k = 1 => 2*1*900M+1, which is just above 30 bits. The next factor candidate is at 31 bits, then there are two factor candidates at 32 bits, 5 at 33 bits, and so on. (But no factor candidate at fewer than 30 bits.) |
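The 2*k*p+1 counting above is easy to verify with a few lines (a sketch using the 900M exponent from the example; sizes are reported as log2 values, so "30.x bits" groups under 30):

```python
import math
from collections import Counter

P = 900_000_000  # the example exponent (900M)

def candidate_bits(k: int) -> float:
    """log2 of the k-th candidate factor 2*k*P + 1."""
    return math.log2(2 * k * P + 1)

print(f"smallest candidate (k=1): {candidate_bits(1):.1f} bits")  # ~30.7

# Count candidates by the integer part of their log2 size
counts = Counter(int(candidate_bits(k)) for k in range(1, 10))
print(dict(sorted(counts.items())))  # {30: 1, 31: 1, 32: 2, 33: 5}
```

So no candidate exists below ~30.7 bits, one lands between 31 and 32 bits, two between 32 and 33, five between 33 and 34, matching the counts in the post.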
[QUOTE=manfred4;416981]You just quoted the explanation yourself. Factors of any Mersenne number have to be of the form 2*k*p+1 for a positive integer k and the prime exponent p, so for a p=900M number the smallest possible factor is for k = 1 => 2*1*900M+1, which is just above 30 bits. The next factor candidate is at 31 bits, then there are two factor candidates at 32 bits, 5 at 33 bits, and so on. (But no factor candidate at fewer than 30 bits.)[/QUOTE]
Ahh, I understand now. Thank you :) |
[QUOTE=VBCurtis;416927]I disagree with both of these examples. In the first, if I had a coin land heads 100 times in a row I would be pretty convinced it is not a fair coin, and would assign a probability much higher than 50% that it's heads next time.
In the second, we are not 1/3 more likely if we change; we are twice as likely. Our probability changes from 1/3 to 2/3, a doubling. 1/3 more likely is 4/9, as "1/3 more likely" means 4/3 as likely as we were previously.[/QUOTE] OK. Sorry. Human languages are so much less accurate than pure mathematics... To your first argument, I should have said "If you flip a _fair_ coin 100 times" rather than "If you flip a coin 100 times". I believe my point still stands. To your second argument, yes, I agree. What I meant was 200% more likely from the original possibility. Again, I believe my point still stands. Specifically, that this is counter intuitive; many people still don't understand this. This might explain why many people still buy lottery tickets (a tax on those bad at maths).... |
It took 251 attempts for the first factor and now another in the 274th attempt.
So how much is left until this famous "RIP DCTF"? I'm new to the gpu72 site and have been trying to find the correct table; is this it? [url]http://www.gpu72.com/reports/available/p-1/[/url] Is that the last 78,185 candidates? That is about 800k-900k GHz-days? |
That shows how many are currently reserved...not all remaining.
This shows how many are left (in the white): [url]http://www.gpu72.com/reports/current_level/[/url] This shows how long it is computed to take based on recent results (30 days): [url]http://www.gpu72.com/reports/current_level/[/url] |
[QUOTE=ATH;417156]It took 251 attempts for the first factor and now another in the 274th attempt.[/QUOTE]
The last year or so of DCTF and LLTF: 1 year with my first GPU; 4 months with 2 GPUs [CODE]38137 Total Tests 398 Factors 95.82 Ratio[/CODE] |
Thank you, yeah again slightly over 1% factors.
So we are only doing those in white, right? Adding those manually I get:

70 bit: 21,300
71 bit: 56,887
72 bit: 87,518
73 bit: 24,312
total: 190,017

So 190k assignments, which includes those 78k already assigned. |
[QUOTE=ATH;417161]Thank you, yeah again slightly over 1% factors.
So we are only doing those in white right, adding those manually I get:
70 bit: 21,300
71 bit: 56,887
72 bit: 87,518
73 bit: 24,312
total: 190,017
So 190k assignments which includes those 78k already assigned.[/QUOTE] Seems about right |
[QUOTE=ATH;417156]It took 251 attempts for the first factor and now another in the 274th attempt.[/QUOTE]
This is one of the issues we face. You appear to be [URL="https://www.gpu72.com/reports/worker/d218327bdbd14bbbf0a7384e418fe43f/"]unlucky[/URL]. This sometimes happens. Particularly in small sample sets. Please trust and understand that this is not directed. If you run an infinite number of tests you will reach nominal. |
[QUOTE=chalsall;417169]This is one of the issues we face. You appear to be [URL="https://www.gpu72.com/reports/worker/d218327bdbd14bbbf0a7384e418fe43f/"]unlucky[/URL].[/QUOTE]
Yeah I know, this was more of a status update after my question earlier at 165 tests with no factor. Several people have informed me it is just over 1% factors in the long run. |
[QUOTE=petrw1;417158]That shows how many are currently reserved...not all remaining.
This shows how many are left (in the white): [URL]http://www.gpu72.com/reports/current_level/[/URL] This shows how long it is computed to take based on recent results (30 days):[STRIKE] [URL]http://www.gpu72.com/reports/current_level/[/URL][/STRIKE][/QUOTE] [url]http://www.gpu72.com/reports/estimated_completion/primenet/[/url] fixed it for you - I wondered why ATH had to do the calculus again, when it was on the page, it turned out your link was wrong :razz: |
Update #19 - December 1, 2015
676,105 in the last month. (up another few %)
27 different contributors
812 Factors found
69,771 P1/LL/DC work saved
21 contributors currently have assignments
46,657 Assignments out (way up....)
137 estimated days to completion

Just over 4 months .. April 16, 2016 ...Pick up the pace just a bit (save 13 more days) and it can be my birthday present. |
[QUOTE=petrw1;417900]...Pick up the pace just a bit (save 13 more days) and it can be my birthday present.[/QUOTE]
The way [URL="https://www.gpu72.com/reports/worker_exact/fc17ac967304842d91106208c22430de/"]things are going[/URL] I think you might get an _early_ birthday present.... :smile: |
[QUOTE=chalsall;417906]The way [URL="https://www.gpu72.com/reports/worker_exact/fc17ac967304842d91106208c22430de/"]things are going[/URL] I think you might get an _early_ birthday present.... :smile:[/QUOTE]
Only 20 THz? Weak ;) |
ANON Right on Schedule :)
Almost 140,000 GhzDays.
And he's not done...took his next big batch. Almost everything at 71 bits. |
[QUOTE=petrw1;418270]And he's not done...took his next big batch. Almost everything at 71 bits.[/QUOTE]
Holy cow!!! :smile: Just so everyone knows, MISFIT et al fetching spiders are now automatically pledged to 73, since nothing below 72 is available any longer. |
I prioritized some work that was assigned to currently-offline systems to start wrapping up the DCTF range.
This morning I closed off 41M, 43 and 45M should be coming in a few hours, then I'm going to prioritize all of the 70 and 71 bit work I have. |
[QUOTE=airsquirrels;418350]This morning I closed off 41M, 43 and 45M should be coming in a few hours, then I'm going to prioritize all of the 70 and 71 bit work I have.[/QUOTE]
Super coolness. Thanks! :smile: |
It looks like there's one exponent stuck in 45M, and it's not mine. Is there an easy way to see who has it?
|
[QUOTE=airsquirrels;418459]It looks like there's one exponent stuck in 45M, and it's not mine. Is there an easy way to see who has it?[/QUOTE]
It was assigned to [URL="https://www.gpu72.com/reports/worker/9b8a3dec97003d833ef3bde535cd6302/"]RobChurchEngineer[/URL] 2015-10-28. Said user has only completed a total of four candidates. I've moved the assignment over to you. Factor=45245813,71,72 |
Done. RIP 45M
On to 50 |
[QUOTE=airsquirrels;418464]Done. RIP 45M. On to 50[/QUOTE]
Sweet. |
46M is done. Working on 47, 48, and 49
|
[QUOTE=airsquirrels;419162]46M is done. Working on 47, 48, and 49[/QUOTE]
Excellent! Thanks! The way things are going, we could have DCTF completed in about two months or so. The great thing about this is when the next Mersenne Prime is found and we get a huge influx of new users, most will be first assigned DC'ing work until they've proven themselves. Very *very* few will actually complete, but those who do will have worked candidates already appropriately TF'ed. |
[QUOTE=chalsall;419165]when the next Mersenne Prime is found[/QUOTE]
You have much optimism, grasshopper :razz: |
Another little update:
Exponents between 58M and 105M with a completed LL test are DCTF'ed to an appropriate level for a GTX 580. I found 6 factors. Exponents between 58M and 91M with an assigned LL test are TF'ed to an appropriate level for a GTX 580. I found a few dozen that were assigned for P-1/LL in September and October with no P-1/low TF, probably when we ran out of exponents when a couple users requested big batches. I should have everything up to 105M done by the end of the week. So far I found [url=http://www.mersenne.org/report_exponent/?exp_lo=76906121&full=1]one factor[/url]. I am doing this work to avoid more of the other category of work in the coming months. Overall, about 20 THz-days of work. I'm pleased there was so little as that shows the system with GPU72 is working very well. |
[QUOTE=chalsall;419165]Excellent! Thanks! The way things are going, we could have DCTF completed in about two months or so.
The great thing about this is when the next Mersenne Prime is found and we get a huge influx of new users, most will be first assigned DC'ing work until they've proven themselves. Very *very* few will actually complete, but those who do will have worked candidates already appropriately TF'ed.[/QUOTE] I think that's how I heard of the project, when M6972593 was discovered. I downloaded the project, and I do believe I completed an anonymous DC assignment. It was boring though. I think what the project needs is graphing or charts showing the completion to the next milestone. Kind of like what GPU72 has for exponents. GPU72 shows how much is done, and gives mini-milestones by the subcategorisation of work into individual table cells. It's easier to see progress with GPU72. Perhaps instead of just giving a count milestone, plot the daily values so people can "see" progress? |
A post for the wish thread:
I wish that DCTF project would never die... (i.e. we would never be able to finish it) [COLOR=White](this is serious, and before throwing the stone, think about the meaning of that)[/COLOR] |
[QUOTE=LaurV;419208]I wish that DCTF project would never die... (i.e. we would never be able to finish it)[/QUOTE]
I don't get this... It would mean that the LLTF project isn't working correctly. I know this was a joke; I'm probably just being slow.... :smile: |
[QUOTE=Mark Rose;419167]I found a few dozen that were assigned for P-1/LL in September and October with no P-1/low TF, probably when we ran out of exponents when a couple users requested big batches. I should have everything up to 105M done by the end of the week.[/QUOTE]
Thanks for doing this. And yes, a few candidates "slip through"; I'm not entirely sure why. Certainly there have been cases where users request a huge number of candidates (often manually, and then often never completing them). Thankfully we're slowly building up enough of a buffer in all the different categories such that this is becoming less of an issue. In other cases it seems like people request specific candidates not yet appropriately TF'ed / P-1'ed. Again, I have no idea why. |
[QUOTE=chalsall;419222]Thanks for doing this.
And yes, a few candidates "slip through"; I'm not entirely sure why. Certainly there have been cases where users request a huge number of candidates (often manually, and then often never completing them). Thankfully we're slowly building up enough of a buffer in all the different categories such that this is becoming less of an issue. In other cases it seems like people request specific candidates not yet appropriately TF'ed / P-1'ed. Again, I have no idea why.[/QUOTE] Yeah, there were a bunch in the 76.9M range where it appears the exponents ran out. The exponents above 80M all look like people doing work ahead of the waves, for whatever reason. There's actually very little of it. I think it would be neat if we could collapse the LLTF work into a single front. I reckon that if all the people doing DCTF were to work on LLTF for a few months, we could accomplish that. |
Well, give us another few weeks to remove the DCTF front and we can do that :)
Maybe the people responsible for the mfakto kernel tuning will squeeze some more performance out of the 74- and 75-bit levels. My GHzDay/Day output on those is definitely not as nice as on 71->72. |
[QUOTE=airsquirrels;419225]Well, give us another few weeks to remove the DCTF front and we can do that :)[/QUOTE]
Sweet. [QUOTE=airsquirrels;419225]Maybe the people responsible for the mfakto kernel tuning will squeeze some more out of the 74,75 range performance. My GhzDay/Day output on those is definitely not as nice as 71->72.[/QUOTE] That might be problematic... Nvidia's GPUs are very different from AMD's GPUs. Both are very good, but they do things in different ways. Edit: I finally understand LaurV's wish. |
[QUOTE=chalsall;419221]I know this was a joke; I'm probably just being slow.... :smile:[/QUOTE]
You are not, but you didn't read the white text... :razz: Yes, indeed, you understood the joke/wish in the end: what "re-launched" the DCTF project was bringing GPUs into the equation. Otherwise it was "dead": exponents were factored to the right bit level with P95 on CPUs before the first LL test was attempted. Only the appearance of hardware able to TF 200-300 times faster made it profitable to revive TF work on exponents already TF'ed and LL'ed with CPUs. So, assuming that in the next 2-3 months (before DCTF finishes) we get hardware able to TF a few more bits deeper, i.e. 4-8 times faster at TF work than the hardware we currently have, then we will have a [U]new wave[/U] of DCTF for the exponents currently being LL'ed (75M and below). My wish that "DC never dies" was simply a wish for such hardware. There is no other way to "keep DCTF alive" unless we continuously get better hardware. |
[QUOTE=LaurV;419232]You are not, but you didn't read the white text... :razz:[/QUOTE]
I did actually read the white text; it was included in the quoted text in the HTML editor. Are you suggesting we go deeper into DCTF? You've previously argued that we shouldn't, but I would be happy to facilitate that happening. Please advise. |
Well, I guess we should get around to developing TFing ASICs for dedicated bit depth ranges :)
Let's convince the various crypto-currency folks to mine for primes... |
[QUOTE=airsquirrels;419236]Let's convince the various crypto-currency folks to mine for primes...[/QUOTE]
Profit... |
[QUOTE=airsquirrels;419236]Well, I guess we should get around to developing TFing ASICs for dedicated bit depth ranges :)
Let's convince the various crypto-currency folks to mine for primes...[/QUOTE] There's [URL="https://en.wikipedia.org/wiki/Primecoin"]primecoin[/URL] for that :smile: |
47M is closed.
Working forward towards 50 and the last of the candidates at 70 bit. |
Exponents between 58M and 105M with an active or completed LL test are now DCTF'ed to an appropriate level for a GTX 580.
As mentioned in the other thread, I'm now looking for any skipped TF work, for exponents from 34.8M to 105M with no factors. |
Is continuing this project a good idea? The point has been raised that a number of older cards stop being efficient at the higher bit levels.
If this front gets finished off, will those cards end up poorly allocated? Would it make more sense to shift some of this power to other areas and preserve this front so people can use their older cards efficiently? People are always free to do what they will, but I think the topic deserves some discussion. |
Powered by vBulletin® Version 3.8.11
Copyright ©2000 - 2021, Jelsoft Enterprises Ltd.