Sunday Night Wrapup #8 - February 1, 2015
33,089 DCTF in the last week. ... less than 30% of update #1
63 different contributors
55 Factors found
3,284 P1/LL/DC work saved
16 contributors currently have assignments
973 Assignments out (11% of update #1)
926.4635 estimated days to completion (almost 4 times longer than update #1), i.e., August 16, 2017

Because of the need to help out more in LLTF, many of us have moved our work there. Until (if ever) LLTF is far enough ahead that we can refocus here, I will reduce to monthly updates here. |
Correct me if I'm wrong, but ... I don't think this is the end of it.
Currently GPU72 lists exponents below 60M that need DCTF (that is, LL done, DC not done, and TF level deemed too low).
Are there not, though, exponents above 60M that have LL but not DC that should also be TF'd higher before DC is done? |
[QUOTE=petrw1;394261]Are there not, though exponents above 60M that have LL but NOT DC that also should be TF'd higher before DC is done?[/QUOTE]
Yes, but I think Chris isn't worried about those since we are many years away from DC'ing those exponents. I guess it would make sense for the [url=https://www.gpu72.com/reports/estimated_completion/primenet/]Estimated Days to Completion PrimeNet[/url] page though. |
[QUOTE=Mark Rose;394271]Yes, but I think Chris isn't worried about those since we are many years away from DC'ing those exponents.[/QUOTE]
Many, MANY years. |
[QUOTE=petrw1;394261]Currently GPU72 lists exponents below 60M that need DCTF (that is, LL done, DC not done, and TF level deemed too low).
Are there not, though, exponents above 60M that have LL but not DC that should also be TF'd higher before DC is done?[/QUOTE] Actually... hm... no, they are not. All LL done since GPU72's "inception" was TF'ed high enough (74 bits, and only very seldom did we release 73-bit exponents, due to high LL wave pressure). The very few exceptions are exponents over 60M that were LL'ed _before_ the GPU72 era and were not TF'ed enough (remember, some users, myself included, _did_ TF the exponents they reserved for LL by themselves, "just in case", to higher bits, long before GPU72; remember the "lists" we used to keep on the forum or elsewhere, the manual work done, etc. In fact that effort, together with Chris' enthusiasm, gave birth to GPU72). As for those few exceptions, most probably some of us with a few beefy cards could go through them in a few days when their time comes (however, let me doubt the "years" part). |
How do you guys figure out how many "days" we are ahead of the wave?
|
[QUOTE=tigreroars;396512]How do you guys figure out how many "days" we are ahead of the wave?[/QUOTE]
It's impossible to be exact, but basically we look at how many candidates are "appropriately" TF'ed versus how many are LL'ed / DC'ed per day for the ranges. As an example, in the DC range over the last month approximately 165 candidates were DC'ed a day. At the moment, approximately 8,000 candidates are "ready" ahead of the Cat 4 "wave". So we're ~48 days ahead if all DCTF'ing were to stop. Then consider that we're currently doing about 40 DCTFs to 72 (in 40M) a day, so if things continue as they are we're ~64 days ahead. |
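The arithmetic above can be sketched in a few lines. The figures are the ones quoted in the post; the drain-rate adjustment in the second estimate is my reading of how the ~64-day number is derived, not an official GPU72 formula:

```python
# Back-of-the-envelope "days ahead" estimate, using the figures quoted above.
ready = 8_000       # candidates already TF'ed "ready" ahead of the Cat 4 wave
dc_per_day = 165    # candidates DC'ed per day (last-month average)
tf_per_day = 40     # DCTFs to 72 completed per day in the 40M range

# If all DCTF'ing stopped today, the buffer drains at dc_per_day:
days_if_tf_stops = ready / dc_per_day

# If TF'ing continues, the buffer drains at (dc_per_day - tf_per_day):
days_with_tf = ready / (dc_per_day - tf_per_day)

print(f"~{days_if_tf_stops:.0f} days ahead if TF stops; ~{days_with_tf:.0f} with TF ongoing")
```

This reproduces the ~48 and ~64 day figures in the post. |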
Sunday Night Wrapup #9 - March 1, 2015
90,719 in the last month.
69 different contributors 196 Factors found 12,564 P1/LL/DC work saved 15 contributors currently have assignments 1,455 Assignments out. 2,458.5887 estimated days to completion (12 times as long as the lowest it ever was ... under 200 days) 6.73 years. |
[QUOTE=petrw1;396698]2,458.5887 estimated days to completion (12 times as long as the lowest it ever was ... under 200 days)
6.73 years.[/QUOTE] To be fair, that was before adding factoring beyond 71 bits. For the work below 71 bits, we're at 467 days.

I know my time with GIMPS will come to an end at some point, basically as soon as I no longer have free electricity, which is very likely to happen in two years or less. It has never been about finding the actual primes for me. The fun is participating in a distributed computing project. Trial factoring is fun.

That being said, I still have the desire to "finish" something. I would do all the work below 70, but LaurV and possibly others have AMD cards that work better in that range, so finishing the DCTF work below 71 is my plan. I might be able to slide into 1st place on the DCTF charts by the end of summer, assuming the really big guns like LaurV and NickOfTime don't throw everything at DCTF :) |
[QUOTE=Mark Rose;396747]To be fair, that was before adding factoring beyond 71 bits. For the work below 71 bits, we're at 467 days.[/QUOTE]
I stand corrected.....we were as low as 388 shortly after the extra bit level was added. |
[QUOTE=chalsall;396542]
[snip] Then consider that we're currently doing about 40 DCTFs to 72 (in 40M) a day, [snip][/QUOTE] You sure about that? My GTX-970 is doing 32 a day by itself - about one every 44 minutes |
[QUOTE=Gordon;396768]You sure about that? My GTX-970 is doing 32 a day by itself - about one every 44 minutes[/QUOTE]
I am often wrong! :smile: I was eye-balling it from [URL="https://www.gpu72.com/graphs/dctf/month/"]this graph[/URL]. If you're doing the work through GPU72, then I have a bug. If I do, please PM me your GPU72 username or Display Name and I'll drill down. |
[QUOTE=chalsall;396773]I am often wrong! :smile: I was eye-balling it from [URL="https://www.gpu72.com/graphs/dctf/month/"]this graph[/URL].
If you're doing the work through GPU72, then I have a bug. If I do, please PM me your GPU72 username or Display Name and I'll drill down.[/QUOTE] PM sent |
[QUOTE=Gordon;396774]PM sent[/QUOTE]
OK, you're not wrong; nor am I. It appears you are, by far, the most productive DCTF'er to 72 currently! Thanks! :smile: |
[QUOTE=chalsall;396776]OK, you're not wrong; nor am I. It appears you are, by far, the most productive DCTF'er to 72 currently! Thanks! :smile:[/QUOTE]
When you add on the 14 that the GTX-660 also churns through, that's 46 a day... isn't anyone else doing any? |
[QUOTE=Gordon;396801]When you add on the 14 that the GTX-660 also churns through that's 46 a day..isn't anyone else doing any?[/QUOTE]
A few others are, but you're by far the largest producer (to 72) at the moment. Keep in mind also that when I do the "Days Ahead" projections, I use the average production over the last 30 days; over the last five days we've averaged 58.2 a day. |
[QUOTE=Gordon;396801]When you add on the 14 that the GTX-660 also churns through that's 46 a day..isn't anyone else doing any?[/QUOTE]
You can roughly see who is doing what by looking at the [url=http://www.gpu72.com/reports/workers/dctf/]DCTF Workers' Progress[/url] page. Divide the GHz-d of work by the number of assignments. Roughly, 3 is to 70, 6 is to 71, and 12 is to 72. |
[QUOTE=Mark Rose;396803]You can roughly see who is doing what by looking at the [url=http://www.gpu72.com/reports/workers/dctf/]DCTF Workers' Progress[/url] page.[/QUOTE]
Thanks for pointing that out, Mark. Also, another useful report is [URL="http://www.gpu72.com/reports/workers/dctf/week/"]the DCTF Worker's Progress over the last Week[/URL]. BTW Gordon, your Factors Found vs. Attempts ratio is a little low (but not beyond reasonableness). Probably just bad luck (someone else getting your factors! :wink:), but have you run the mfaktc full self-test on your cards recently, just as a precaution? |
[QUOTE=chalsall;396806]Thanks for pointing that out Mark. Also, another useful report is [URL="http://www.gpu72.com/reports/workers/dctf/week/"]the DCTF Worker's Progress over the last Week[/URL].
BTW Gordon, your Factors Found vs. Attempts ratio is a little low (but not beyond reasonableness). Probably just bad luck (someone else getting your factors! :wink:), but have you run the mfaktc full self-test on your cards recently, just as a precaution?[/QUOTE] These are the two I use for my weekly/monthly progress reports |
[QUOTE=chalsall;396806]Thanks for pointing that out Mark. Also, another useful report is [URL="http://www.gpu72.com/reports/workers/dctf/week/"]the DCTF Worker's Progress over the last Week[/URL].[/QUOTE]
Yet another page I was unaware of :) |
[QUOTE=chalsall;396806]Thanks for pointing that out Mark. Also, another useful report is [URL="http://www.gpu72.com/reports/workers/dctf/week/"]the DCTF Worker's Progress over the last Week[/URL].
BTW Gordon, your Factors Found vs. Attempts ratio is a little low (but not beyond reasonableness). Probably just bad luck (someone else getting your factors! :wink:), but have you run the mfaktc full self-test on your cards recently, just as a precaution?[/QUOTE] Since I switched to LL-TF to 75 bits I have a hit ratio of 3/282. Seems low, but is it unreasonable? |
[QUOTE=petrw1;396814]Since I switched to LL-TF to 75 bits I have a hit ratio of 3 / 282.
Seems low, but is it unreasonable?[/QUOTE] Well, mine is even poorer, with 2/625 at TF 75... 8/375 at TF 74 |
[QUOTE=chalsall;396806]Thanks for pointing that out Mark. Also, another useful report is [URL="http://www.gpu72.com/reports/workers/dctf/week/"]the DCTF Worker's Progress over the last Week[/URL].
BTW Gordon, your Factors Found vs. Attempts ratio is a little low (but not beyond reasonableness). Probably just bad luck (someone else getting your factors! :wink:), but have you run the mfaktc full self-test on your cards recently, just as a precaution?[/QUOTE] Those results are a mixture from 3 different cards: Palit GTX-660, Gigabyte GTX-660, Gigabyte GTX-970. Just out of curiosity I am running a -st2 test on the 970 right now. I spent a couple of weeks recently running factoring on exponents up in the 970M+ range and was finding a factor roughly every 60 or so tests; it took the 660 7 seconds to go from 66 to 67 bits. Didn't I read somewhere on here that as bit depth increases, the odds of finding a factor decrease? Or is my memory playing up again... |
1 Attachment(s)
[QUOTE=Gordon;396841]Those results are a mixture from 3 different cards
Palit GTX-660 Gigabyte GTX-660 Gigabyte GTX-970 Just out of curiosity I am running a -ST2 test on the 970 right now. I spent a couple of weeks recently running factoring on exponents up in the 970m+ range and was finding a factor roughly every 60 or so tests, took the 660 7 seconds to go from 66-67 bits. Didn't I read somewhere on here that as bit depth increases, odds of finding a factor decrease? Or is my memory playing up again...[/QUOTE] Results from the -ST2 test on the 970 |
[QUOTE=NickOfTime;396823]Well, mine is even poorer with 2/625 at TF 75...
8/375 TF 74[/QUOTE] What?!?!?! Something is wrong there (or you're being exceptionally unlucky). Even after a P-1 run, you should still see something like 1/85 to 1/90 or so from 74 to 75. |
[QUOTE=Gordon;396841]Didn't I read somewhere on here that as bit depth increases, odds of finding a factor decrease? Or is my memory playing up again...[/QUOTE]
You are correct. A "back of the envelope" guesstimate often used around here is ~ 1 / [next bit level]. The probability is slightly lower if a P-1 has already been run. Thanks for running the self-test. Clearly that card is good. |
Quick empirical data...
Just a quick query against the GPU72 database wrt 74 to 75 TF'ing.
6,745 runs, 83 factors found. ~ 1 / 81.3. Most of these were done after a P-1 run. I was always taught to never ignore things which make you go "Hmmmm... That's strange...". Often leads nowhere; sometimes leads to places important. |
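Just how unlucky is 2 factors in 625 attempts if the empirical rate is ~1/81? A quick Poisson tail puts a number on it. The 1/81.3 rate is the empirical figure from the query above; the Poisson model is a standard approximation for rare independent events, not anything GPU72 computes:

```python
import math

p = 1 / 81.3         # empirical per-attempt factor rate, 74 -> 75 bits (above)
n, found = 625, 2    # the reported attempts and factors found

lam = n * p          # expected number of factors, ~7.7

# Poisson approximation to P(X <= found)
tail = sum(math.exp(-lam) * lam**k / math.factorial(k) for k in range(found + 1))
print(f"expected {lam:.1f} factors; P(at most {found}) ~ {tail:.1%}")
```

It works out to roughly a 2% chance: well outside normal luck, but not impossible, which is consistent with the "Hmmmm..." above. |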
[QUOTE=chalsall;396853]You are correct. A "back of the envelope guestimate" often used around here is ~ 1/ [next bit level]. Probability is slightly lower if a P-1 has already run.
Thanks for running the self-test. Clearly that card is good.[/QUOTE] GTX-660 also passed all 20,262 self tests |
Where is this self-test? I'm 0 for ~230 on 75 bits
|
1 Attachment(s)
We worry (a lot) about the possibility of some sort of error causing our cards to miss factors.
One thing we are monitoring is the GHz-days to find a factor. For each higher bit level it should (?) take twice as many GHz-days, right? Note in the image below that a factor at 70 bits takes 210.6 GHz-days. Then at 71 bits it takes 361.8 GHz-days. Then at 72 bits it takes 796.3 GHz-days. Then at 73 bits it takes 1,666.9 GHz-days. And finally at 74 bits it takes 1,901.9 GHz-days. So ~200/~400/~800/~1,600/~1,900 means that we are doing better than expected on the 74 bit work? (We could be wrong!) :max: |
1 Attachment(s)
It looks like the doubling of GHz-days applies to DC TF work as well. (Roughly, of course!)
85.8/173.9/259.8/568.3 :mike: |
[QUOTE=TheMawn;396860]Where is this self-test? I'm 0 for ~230 on 75 bits[/QUOTE]
mfaktc -st
mfaktc -st2
mfaktc -h |
[QUOTE=Xyzzy;396870]It looks like the doubling of GHz-days applies to DC TF work as well. (Roughly, of course!)
85.8/173.9/259.8/568.3 :mike:[/QUOTE] Yes. This doubling is rough, but a result of two bits of math: first, each bit level is twice as big as the one before it, so it takes twice as long to check the next bit level; second, the chance of finding a factor is roughly 1/bitdepth per bit. So each higher bit level is slightly less likely to yield a factor, while taking twice as long. P-1 tests find some factors that you "would have found", so the actual results are less than 1/75 for 74-75 bits in practice. Of course, the P-1 effect is roughly the same for 73-74 and 74-75, so the doubling of GHz-days per factor should still be seen in the data. |
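Those two bits of math combine into a one-line cost model (C is an arbitrary scale constant here, not a real GHz-day figure): level k -> k+1 costs ~C·2^k per candidate and succeeds with probability ~1/(k+1), so GHz-days per factor found is ~C·2^k·(k+1), which slightly *more* than doubles per level:

```python
def ghzd_per_factor(k, C=1.0):
    # Cost per candidate for level k -> k+1 is ~ C * 2^k;
    # success probability is ~ 1/(k+1);
    # so expected GHz-days per factor is ~ C * 2^k * (k+1).
    return C * 2**k * (k + 1)

for k in range(70, 74):
    ratio = ghzd_per_factor(k + 1) / ghzd_per_factor(k)
    print(f"level {k+1}->{k+2} costs {ratio:.3f}x level {k}->{k+1} per factor")
```

Under this model the ratio is 2·(k+2)/(k+1), i.e. just over 2 in this range, so the observed last step (~1,667 -> ~1,902 GHz-d, well under 2x) does look better than expected, as the post suspects. |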
[QUOTE=VBCurtis;396877]Of course, the P-1 effect is roughly the same for 73-74 and 74-75, so the doubling of Ghz-days per factor should still be seen in the data.[/QUOTE]
P-1 should become less effective for larger numbers. Just how large a drop it is from 73-74 to 74-75, I don't know. Certainly not enough to significantly affect the "doubling" phenomenon. In fact, the direction of this effect compensates for the drop in TF probability. |
[QUOTE=kladner;396875]mfaktc -st
mfaktc -st2
mfaktc -h[/QUOTE] If you have more than one card, you still need the -d n option to tell it which card to test. |
[QUOTE=Gordon;396884]If you have more than one card you still need the -d n option to tell it which card to test[/QUOTE]
Oops! Overlooked that. |
[QUOTE=chalsall;396852]What?!?!?! Something is wrong there (or you're being exceptionally unlucky).
Even after a P-1 run, you should still see something like 1/85 to 1/90 or so from 74 to 75.[/QUOTE] -st2 on one of the 290x passed 335,478 tests. |
No problem. I passed all self tests also. I guess I'm just very unlucky.
|
[QUOTE=NickOfTime;396911]-st2 on one of the 290x passed 335,478 tests.[/QUOTE]
Hmmm... Interesting. OK, well, since statistics has no memory let's expect and hope that your future results begin being more nominal. |
[QUOTE=NickOfTime;396911]-st2 on one of the 290x passed 335,478 tests.[/QUOTE]
May I ask you two things? Have you run the self test on all your cards? Separately, how many of your cards are AMD vs. NVIDIA? I am asking this only because the AMD and NVIDIA architectures are rather seriously different. And, your stats are well outside being "unlucky". To be clear, this is simply a "hmmmm..." thing. |
[QUOTE=chalsall;396931]May I ask you two things? Have you run the self test on all your cards? Separately, how many of your cards are AMD vs. NVIDIA?
I am asking this only because the AMD and NVIDIA architectures are rather seriously different. And, your stats are well outside being "unlucky". To be clear, this is simply a "hmmmm..." thing.[/QUOTE] 22% of my GHz-days are NVIDIA. Well, I guess it's just bad luck; I just tested each 290x against an exponent with a known factor at TF 75, and each of them found its factor: 71572651, 66328541, 72244427, 71455331. |
[QUOTE=NickOfTime;396967]Well guess it's just bad luck, I just tested each 290x with a found factor TF75 and each of them found a factor, 71572651 66328541 72244427 71455331.[/QUOTE]
Thanks for running those tests! Encouraging. :smile: |
Keep in mind that -st/-st2 is a selftest for the software, not for the hardware!
Oliver |
[QUOTE=TheJudger;397016]Keep in mind that -st/-st2 is a selftest for the software, not for the hardware!
Oliver[/QUOTE] Completely sure about that? We know the software works correctly; it was verified before release, so any testing on my computer is a test of the hardware... |
[QUOTE=Gordon;397127]Completely sure about that? We know the software works correctly, it was verified before release. so any testing in my computer is a test of the hardware...[/QUOTE]
Just because the software works on his card/driver/cuda version doesn't mean the same for yours. The tests are precisely for figuring out which of those combinations the software *does* work for. It happens to include the author's cards, as well as yours. |
[QUOTE=Dubslow;397128]Just because the software works on his card/driver/cuda version doesn't mean the same for yours. The tests are precisely for figuring out which of those combinations the software *does* work for. It happens to include the author's cards, as well as yours.[/QUOTE]
So success is a good sign, but not proof of anything hardware related? It's more the combination of drivers, versions, and compatibility? It still seems that hardware errors [I]could[/I] cause a failure, even if everything else was in agreement. |
[QUOTE=kladner;397146]So success is a good sign, but not proof of anything hardware related? It's more the combination of drivers, versions, and compatibility?
It still seems that hardware errors [I]could[/I] cause a failure, even if everything else was in agreement.[/QUOTE] Of course it is hardware related; otherwise you could say that if you successfully ran Prime95, all you've proved is that your combo of OS & drivers "works". Which, when you look at it that way, is nonsense... |
[QUOTE=Gordon;397147]Of course it is hardware related, otherwise you can say that if you successfully ran Prime95 all you've proved is your combo of os & drivers "works". Which when you look at it that way is a nonsense...[/QUOTE]
There is one sense in which Oliver's statement makes sense. The self test is not designed to [B]stress[/B] the hardware, i.e. induce flaky hardware to cause errors. If the hardware does error out, it will show up during self-test, but you can't say it failed due to hardware, driver, OS or the program itself. Passing the self test will give you confidence that program/OS/driver combination is fine, and that the hardware is [B]minimally[/B] stable. Keep in mind that there is a second layer of dynamic compilation that happens with GPU programs, where the GPU code is compiled on the fly by the GPU driver. And GPUs can also face sporadic errors due to OS turning off the display, driver bugs and any number of other issues. So your comparison with Prime95 is not applicable. |
Now that the assignment rules have been altered, how far ahead are we for both DC and LL with the new values?
|
[QUOTE=tha;397200]Now that the assignment rules have been altered, how far ahead are we for both DC and LL with the new values?[/QUOTE]
For DC we're currently ~49 days ahead. This really hasn't changed as the Cat 4 offset didn't move. For LL we're currently about 7 days ahead of the LL Cat 3 wave, and about 9 days ahead of the LL Cat 4 wave. Really the issue at the moment is keeping the P-1'ers fed at at least 74 bits (mostly in the Cat 4 range). |
[QUOTE=axn;397150]There is one sense in which Oliver's statement makes sense. The self test is not designed to [B]stress[/B] the hardware, i.e. induce flaky hardware to cause errors. If the hardware does error out, it will show up during self-test, but you can't say it failed due to hardware, driver, OS or the program itself. Passing the self test will give you confidence that program/OS/driver combination is fine, and that the hardware is [B]minimally[/B] stable.[/QUOTE]
To put on the table... Oliver a few years ago gave me a G580. I was very thankful, and immediately began running code on it. (I still do, for sensitive data.) Weirdly, some code worked perfectly; others crashed (or returned bad results) within seconds. owftheevil then pointed me to his GPU memory test program, which showed the card had a memory issue (even at factory clocks). He then advised how to down-clock the memory on a GPU by way of a BIOS flash (because the Linux NVIDIA drivers didn't allow changes from user space at that time). Perhaps we, as a community, should generate a set of tests for GPUs which are as good as Prime95 for stressing CPU hardware? |
[QUOTE=chalsall;397487]To put on the table... Oliver a few years ago gave me a G580.[/QUOTE]
Awww.. Damn. It was Jerry who gave me the G580. Sorry. Memory is such a subjective thing.... |
[QUOTE=chalsall;397487]Perhaps we, as a community, should generate a set of tests for GPUs which are as good as Prime95 for stressing CPU hardware?[/QUOTE]
While I think this is a fantastic idea, I don't know how feasible it will be. My GPUs have survived extensive TF at clocks that die before getting into the main menu of a 3D game. I know CUDALucas is a killer for memory, at least. I don't know how a GPU's cache behaves (versus that of a CPU) so the "small FFT" test may not do the same thing. |
Wrap-up #10 (Month ending April 6 ... a few days late)
110,873 in the last month.
36 different contributors
244 Factors found
16,639 P1/LL/DC work saved
15 contributors currently have assignments
2,115 Assignments out
1688.2721 estimated days to completion (a new resurgence), i.e., 4.62 years |
Update #11
104,325 in the last month.
37 different contributors
190 Factors found
12,236 P1/LL/DC work saved
16 contributors currently have assignments
1,543 Assignments out
1838.7561 estimated days to completion, i.e., 5.03 years |
Update #12 - June 1, 2015
125,994 in the last month.
25 different contributors
226 Factors found
16,934 P1/LL/DC work saved
24 contributors currently have assignments
7,169 Assignments out
1354 estimated days to completion (the DCTF churners' request dropped this), i.e., 3.71 years |
Update #13 - July 2, 2015
192,272 in the last month.
26 different contributors
327 Factors found
26,084 P1/LL/DC work saved
20 contributors currently have assignments
2,158 Assignments out
853.9771 estimated days to completion (renewed interest), i.e., 2.33 years |
Update #14 August 12, 2015....Damn vacations!!!!
127,820 in the last month.
22 different contributors
267 Factors found
21,777 P1/LL/DC work saved
15 contributors currently have assignments
4,201 Assignments out
1230 estimated days to completion (resources diverted to Strategic Double Checks ... TF for it almost done), i.e., 3.37 years |
No resource diversion, actually. SDCTF is a subset of DCTF. SDCTF has simply been prioritized.
|
[QUOTE=Mark Rose;407748]No resource diversion, actually. SDCTF is a subset of DCTF. SDCTF has simply been prioritized.[/QUOTE]
Fair enough....but then SDCTF increased the total remaining DCTF workload. |
[QUOTE=petrw1;407756]Fair enough....but then SDCTF increased the total remaining DCTF workload.[/QUOTE]
No it didn't. The only change was the order the required TF'ing was done in. |
[QUOTE=chalsall;407762]No it didn't.
The only change was the order the required TF'ing was done in.[/QUOTE] Hmmmm....so I am standing here; bases loaded; bottom of the 9th; man on third; we are 1 down and now I have 2 strikes. |
[QUOTE=petrw1;407770]Hmmmm....so I am standing here; bases loaded; bottom of the 9th; man on third; we are 1 down and now I have 2 strikes.[/QUOTE]
That's not how we keep score around here. :smile: |
Down to 1.48 years... this is getting interesting...
|
I should be ramping up about another 25% in raw hardware capacity over the next few days, but as the world's stock of Fury X cards seems to be nearly exhausted, I'm stuck waiting for a mid-September delivery of the last shipment. Power and cooling seem to be stable at this point (both racks are on a large liquid cooling loop).
I am still experiencing the PCIe link speed inefficiencies ([url]http://www.mersenneforum.org/showthread.php?t=15646&page=121[/url]) across several of my systems, so if I can ultimately resolve that I will gain another 20% or so. Some of my systems are 8-GPU servers, so I may begin experimenting with a custom fork/port of mfakto that treats all of the homogeneous GPUs as one large resource to reduce inefficiencies. If I go down that route it's likely going to require some downtime until those changes are well vetted/tested with all of the target kernels. I believe at full capacity we can all collectively get DCTF done by spring, depending on whether you guys want me to shift any capacity to provide burst assist on the LL front in the meantime. At this time I'm committed to completing the DCTF work, but consider my systems a resource of the project for whatever is important. |
[QUOTE=airsquirrels;408869]Some of my systems are 8 GPU servers so I may begin experimenting with a custom fork/port of mfakto that treats all of the homogenous GPUs as one large resource to reduce inefficiencies.[/QUOTE]
Aren't you using Misfit? Don't say you manage all that work by hand! :shock: It is a hell of a task! |
[QUOTE=LaurV;408901]Aren't you using Misfit? Don't say you manage all that work by hand! :shock: It is a hell of a task![/QUOTE]
I have scripts that manage all the worktodo and results (running Linux, and let's just say I didn't research enough of what was already written before creating my own.) I was actually referring to OpenCL efficiencies and maximizing throughput on the GPUs by having one program interacting with the AMD driver. It seems to have a lot of locking contention... |
Update #15 - August 31, 2015 ... BIGGGG progress.
390,697 in the last month. (more than 3 times last month)
32 different contributors
676 Factors found
51,063 P1/LL/DC work saved
20 contributors currently have assignments
21,936 Assignments out (5 times last month)
378 estimated days to completion (AirSquirrels and Anonymous going like bandits), i.e., 1.04 years ... Sept 2016 ... though it could be much sooner if the big 2 above keep at it. |
Update #16 - September 30, 2015 ... zoom zoom
473,793 in the last month. (up another 60%)
34 different contributors
653 Factors found
47,567 P1/LL/DC work saved
20 contributors currently have assignments
19,327 Assignments out
292 estimated days to completion (Anonymous AWOL for 10 days or this could be lower ... and the above stats higher), i.e., 0.8 years ... July 2016 |
Anonymous does have 165 THzd of work out, or about 16 days' worth going by historical production. The last two weeks of submitted results were in batches. I have a feeling we'll get a big dump of results on Monday, give or take a day.
|
I have a big equipment move coming, so I'm currently doing a burn down of all my active assignments. I should have a big return to high-throughput after that.
|
[QUOTE=airsquirrels;411750]I should have a big return to high-throughput after that.[/QUOTE]
_Please_ continue doing LLTF'ing as well (if you're so inclined, of course). We're currently over two years ahead of the DC'ers, but only a day or so ahead of the LL P-1'ers! :max: |
[QUOTE=chalsall;411751]but only a day or so ahead of the LL P-1'ers! :max:[/QUOTE]
oops. should I put the rambeasts on something else? |
[QUOTE=aurashift;411756]oops. should I put the rambeasts on something else?[/QUOTE]
LLTF "Let GPU72 Decide" or "What makes sense" (the latter to at least 74 bits, optimally 75). But, again, it's your kit; do what you enjoy! :smile: |
[QUOTE=chalsall;411760]LLTF "Let GPU72 Decide" or "What makes sense" (the latter to at least 74 bits, optimally 75).
But, again, it's your kit; do what you enjoy! :smile:[/QUOTE] They're CPUs: six servers, forty cores each, with anywhere from 150 GB to 1.5 TB of RAM each. They suck at LL though: 90 days for a test on one core. |
[QUOTE=aurashift;411761]They're CPU, six servers, forty cores each, with anywhere from 150GB-1.5TB each. They suck at LL though. 90 days for a test on 1 core.[/QUOTE]
Oh, sorry. I thought you were talking about GPUs. Definitely don't do TF'ing on a CPU. For CPUs, either LL or DC is fine. We're far behind in DC, but if you want to find the next MP, do LL. Personally I only DC because I use it to initially test, and then continue to ensure, the sanity of my CPUs. It was years ago now, but once I was able to tell that one of my CPUs was "unhealthy". Unfortunately this was on a mission-critical box. Thanks to the advance warning, I was able to move the critical services off the box just in time! Yeah GIMPS!!! :smile: |
[QUOTE=aurashift;411761]They're CPU, six servers, forty cores each, with anywhere from 150GB-1.5TB each. They suck at LL though. 90 days for a test on 1 core.[/QUOTE]
My two bits.... You could do DC or P-1. They won't perform any better relatively, but the current wave of DC is 1/4 to 1/5 the work of an LL... so your 90-day LL could become a 20-day-or-so DC. Or, with all that RAM, it's more than enough for P-1 tests that might take your cores a couple of days each. Either one would be beneficial to GIMPS. |
[QUOTE=petrw1;411763]Either one would be beneficial to GIMPS.[/QUOTE]
Indeed. However, please know that GIMPS is currently overpowered with P-1. Secondly, P-1 can't help ensure the sanity of a machine (short of crashing and/or burning). Lastly, with that many cores, doing multi-threaded LL or DC'ing can bring down the time required significantly. My little dual eight (real) core machines can do two current DCs in about 24 hours. Just make sure you get your affinity settings correct. |
[QUOTE=chalsall;411765]Indeed. However, please know that GIMPS is currently overpowered with P-1. Secondly, P-1 can't help ensure the sanity of a machine (short of crashing and/or burning).
Lastly, with that many cores, doing multi-threaded LL or DC'ing can bring down the time required significantly. My little dual eight (real) core machines can do two current DCs in about 24 hours. Just make sure you get your affinity settings correct.[/QUOTE] They've got enough ECC that sanity isn't a problem; it's more like load test it until it crashes, then fix it. The problem with these is that they have 1/4 the performance for LL or DC and use up 4x the amount of space, so P-1 it is, I guess. |
[QUOTE=aurashift;411766]Problem with these, is that they have 1/4 the performance for LL or DC and use up 4x the amount of space, so P-1 it is i guess.[/QUOTE]
Go for it! :smile: Just, please, keep feeding the P-1'ers with appropriately TF'ed candidates with your GPUs! |
[QUOTE=aurashift;411766]they've got enough ECC that sanity isn't a problem, it's more like load test it until it crashes then fix it. Problem with these, is that they have 1/4 the performance for LL or DC and use up 4x the amount of space, so P-1 it is i guess.[/QUOTE]
Though note that they will also have 1/4 the performance for P-1 ... it's just that P-1 work assignments are smaller and complete in days instead of weeks or months. Unless it's more effort than you want to put into it, doing DC with 2-4X multi-threading as Chris suggested might be worth considering. What do you mean by "4x the amount of space"? P-1 uses MUCH more memory (in stage 2) and if you mean disk space for save files P-1 actually takes more and PrimeNet saves all P-1 save files just in case you decide to re-do it with higher bounds. |
[QUOTE=petrw1;411770]Though note that they will also have 1/4 the performance for P-1 ... it's just that P-1 work assignments are smaller and complete in days instead of weeks or months.
Unless it's more effort than you want to put into it, doing DC with 2-4X multi-threading as Chris suggested might be worth considering. What do you mean by "4x the amount of space"? P-1 uses MUCH more memory (in stage 2), and if you mean disk space for save files, P-1 actually takes more, and PrimeNet saves all P-1 save files just in case you decide to re-do it with higher bounds.[/QUOTE] Sorry, I meant physical space: they're blades that take up four slots where the normal ones take only one. Sorry to hijack the thread |
[QUOTE=aurashift;411771]Sorry to hijack the thread[/QUOTE]
No worries. We're all friends around here. :smile: |
[QUOTE=aurashift;411771] Sorry to hijack the thread[/QUOTE]
Much much safer to do so here than in an Airport. |
I'll keep doing a more substantial portion of DCTF with the work I've already queued for another day or two, until the weekend when I have time to do the equipment move; then the effort will be shifted significantly back over to LLTF. Sorry for the day or two of lag in LLTF — time has been the critical resource this week.
|
[QUOTE=aurashift;411756]oops. should I put the rambeasts on something else?[/QUOTE]
How about doing ECM on the most-wanted list of small Mersenne numbers (exponents less than 2000)? I've only got 32 GB and that limits the bounds you can use, but with 1.5 TB.....
Is that what ECM-F is?
|
[QUOTE=chalsall;411751]_Please_ continue doing LLTF'ing as well (if you're so inclined, of course).
We're currently over two years ahead of the DC'ers, but only a day or so ahead of the LL P-1'ers! :max:[/QUOTE] I've queued 56 assignments in the P-1 range, taking them from 74 to 75 bits, for the weekend. It escapes me at the moment how to find out how many P-1 assignments are being completed per day. I know I can't completely replace Rocky's throughput. |
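For context on what those 74-to-75-bit assignments do: TF simply walks candidate divisors of the special form 2kp + 1 within a bit range and powmods each one. Here is a toy Python sketch (the function name is mine; real TF code such as mfaktc also sieves the k values by small primes and runs the powmods on the GPU):

```python
def tf_mersenne(p, bit_lo, bit_hi):
    """Trial-factor M_p = 2^p - 1 over candidates in [2^bit_lo, 2^bit_hi).

    Every factor q of M_p has the form q = 2*k*p + 1 and satisfies
    q == 1 or 7 (mod 8); those two facts discard most candidates before
    the (relatively expensive) powmod.  Real TF code additionally sieves
    the k values by small primes; this sketch skips that.
    """
    found = []
    k = (1 << bit_lo) // (2 * p) + 1   # first k with q above 2^bit_lo
    q = 2 * k * p + 1
    while q < (1 << bit_hi):
        if q % 8 in (1, 7) and pow(2, p, q) == 1:
            found.append(q)            # q divides 2^p - 1 (q may be composite)
        k += 1
        q = 2 * k * p + 1
    return found
```

For example, `tf_mersenne(11, 4, 7)` returns the two factors 23 and 89 of M11 (both of the form 22k + 1), and `tf_mersenne(59, 17, 18)` finds the 18-bit factor 179951 of M59. Each extra bit level doubles the number of candidates, which is why GPU72 stops at a level where further TF costs more than the LL/DC time it would save.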
[QUOTE=aurashift;411793]is that what ecm-f is?[/QUOTE]
No, that is finding factors of Fermat numbers. The other ECM type is for Mersenne numbers, but staying on the "most wanted" list requires some (or more) manual tweaking of the assignments. But you can try getting some ECM assignments (set P95 to "ECM on small Mersenne numbers" and it will fetch them for you) and see how that works on your rigs. Still, my money would go on pairing those cores in twos or threes and doing DC on them. Getting mismatching residues would be an early warning that your servers are going off into the weeds, so you could take action in advance, as Chris pointed out. |
[QUOTE=LaurV;411803]No, that is finding factors of Fermat numbers. The other ECM type is for Mersenne numbers, but staying on the "most wanted" list requires some (or more) manual tweaking of the assignments. But you can try getting some ECM assignments (set P95 to "ECM on small Mersenne numbers" and it will fetch them for you) and see how that works on your rigs.
Still, my money would go on pairing those cores in twos or threes and doing DC on them. Getting mismatching residues would be an early warning that your servers are going off into the weeds, so you could take action in advance, as Chris pointed out.[/QUOTE] I was thinking along the lines of P95 for stage 1, then GMP-ECM for stage 2. PrimeNet doesn't hand out exponents less than 10K, does it? |
[QUOTE=Mark Rose;411796]I've queued 56 74,75 assignments in the P-1 range for the weekend. It escapes me at the moment how to find out how many P-1 assignments are being completed per day.[/QUOTE]
Thanks for adding those to your "barbies". With regards to how many P-1 are completed, it's difficult to tell exactly from "outside" (read: without access to the Primenet DB). But, about 150 a day are done through GPU72, and I'd guess perhaps another 100 to 150 a day directly through Primenet. |
[QUOTE=Gordon;411822]I was thinking along the lines of P95 for stage 1 then GMP-ECM for stage 2, primenet doesn't hand out exponents less than 10k does it?[/QUOTE]
You can do ECM for any exponent; there is no lower limit... though 20M seems to be the current upper limit. It is TF for which you can't get assignments under 20K. |
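For anyone wondering what an ECM assignment actually runs: below is a toy, stage-1-only sketch of Lenstra's method in Python, using affine short-Weierstrass arithmetic. All names are my own, and real implementations (Prime95, GMP-ECM) use far better curve parametrizations plus a stage 2; this only shows the core trick — a failed modular inversion hands you a factor:

```python
import math
import random

def primes_up_to(n):
    """Simple sieve of Eratosthenes."""
    is_prime = [True] * (n + 1)
    is_prime[0:2] = [False, False]
    for i in range(2, int(n ** 0.5) + 1):
        if is_prime[i]:
            for j in range(i * i, n + 1, i):
                is_prime[j] = False
    return [i for i in range(2, n + 1) if is_prime[i]]

class FactorFound(Exception):
    """Raised when a modular inversion fails; the gcd is a factor of N."""

def ecm_one_curve(N, B1, seed):
    """ECM stage 1 on one pseudo-random curve y^2 = x^3 + a*x + b (mod N)."""
    rng = random.Random(seed)
    x, y, a = (rng.randrange(1, N) for _ in range(3))
    P = (x, y)   # b is implicitly fixed by forcing P onto the curve

    def inv(d):
        g = math.gcd(d % N, N)
        if g != 1:
            raise FactorFound(g)       # may be N itself; curve wasted then
        return pow(d, -1, N)

    def ec_add(P, Q):
        if P is None:
            return Q
        if Q is None:
            return P
        (x1, y1), (x2, y2) = P, Q
        if x1 == x2 and (y1 + y2) % N == 0:
            return None                # point at infinity
        lam = ((3 * x1 * x1 + a) * inv(2 * y1) if P == Q
               else (y2 - y1) * inv(x2 - x1)) % N
        x3 = (lam * lam - x1 - x2) % N
        return (x3, (lam * (x1 - x3) - y1) % N)

    def ec_mul(k, P):
        R = None
        while k:                       # binary double-and-add ladder
            if k & 1:
                R = ec_add(R, P)
            P = ec_add(P, P)
            k >>= 1
        return R

    for q in primes_up_to(B1):
        qe = q
        while qe * q <= B1:            # full prime power <= B1
            qe *= q
        P = ec_mul(qe, P)
        if P is None:
            return                     # hit infinity mod every factor at once

def ecm(N, B1=1000, curves=50):
    """Try up to `curves` curves; return a nontrivial factor of N or None."""
    for seed in range(curves):
        try:
            ecm_one_curve(N, B1, seed)
        except FactorFound as e:
            g = e.args[0]
            if 1 < g < N:
                return g
    return None
```

For instance, `ecm(2047)` splits M11 almost immediately, because the random curve's group order modulo either factor is tiny and therefore B1-smooth. The reason ECM complements P-1 is that each new curve rerolls the group order, whereas P-1 is stuck with q - 1.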
[QUOTE=chalsall;411762]For CPUs, either LL or DC is fine. We're far behind in DC, but if you want to find the next MP, do LL.
Personally I only DC because I use it to initially test, and then continue to ensure, the sanity of my CPUs.[/QUOTE] I'll second chalsall and suggest that you might consider doing some DC work. The spread between first-time and second-time checks seems to be growing.

I'm only doing DC work. I'm not that concerned about finding the next prime, otherwise I'd be doing all first-time checks, but then again I have a theory that we missed a prime between M47 and M48, lurking somewhere in that 42M-57M range. I've been trying to find the likeliest cases where the LL was done wrong the first time. If you're interested in helping out with that project, I can send you a lot of work.

One little project I have is to go through all of the exponents between 35M and 58M that have already been checked twice and resulted in a mismatch. There are over 5000 of them, though, so it's no small undertaking and I won't complete it by myself. In most of those cases one or the other result is correct, but then we'll be able to say "this other CPU had a bad result" and we may be able to point to other work from that CPU and do advance double-checking on it to find other bad results... work that hasn't been DC'd already.

Unfortunately that's a lot of manual work... getting exponents and manually getting them assigned to yourself, updating worktodo files, etc. I do it with my systems, but I have some batch files set up to help out so it's not too much overhead.

Anyway, if it's something you're interested in, find that other thread: [URL="http://www.mersenneforum.org/showthread.php?t=20372"]http://www.mersenneforum.org/showthread.php?t=20372[/URL] |
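Since the project above hinges on comparing residues: the 64-bit residue that a first-time check and a double-check must agree on is just the low 64 bits of the final Lucas-Lehmer iterate. A toy Python sketch of where it comes from (Prime95 naturally uses FFT-based modular squaring; the function name is mine):

```python
def lucas_lehmer(p):
    """Lucas-Lehmer test for M_p = 2^p - 1, with p an odd prime.

    Returns (is_prime, res64): res64 is the bottom 64 bits of the final
    iterate -- the "residue" that first-time and double-check runs must
    agree on.  A mismatch means at least one of the two runs went bad.
    """
    N = (1 << p) - 1
    s = 4
    for _ in range(p - 2):             # s_{i+1} = s_i^2 - 2 (mod M_p)
        s = (s * s - 2) % N
    return s == 0, s & 0xFFFFFFFFFFFFFFFF
```

For example, `lucas_lehmer(13)` returns `(True, 0)` since M13 = 8191 is prime, while `lucas_lehmer(11)` returns `(False, 1736)` for the composite M11 = 2047. Two independent runs producing different nonzero residues is exactly the triple-check situation described above: one (or both) machines made an arithmetic error somewhere in the p - 2 squarings.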
[QUOTE=Mark Rose;411717]Anonymous does have 165THzd of work out, I have a feeling we'll get a big dump of results on Monday, give or take a day.[/QUOTE]
SHAZAAAMMMM! 122,000 GHz-days turned in recently in the 44M range, including 146 exponents factored. |
[QUOTE=petrw1;411851]SHAZAAAMMMM! 122,000 GHz-days turned in recently in the 44M range, including 146 exponents factored.[/QUOTE]
J. F. F. C! |
When I first started doing TF, that would have been more than a year's worth of effort. Anonymous is insane. Anonymous will probably pass LaurV and me soon. I'm glad to see the progress being made!
|
[QUOTE=Mark Rose;411858]When I first started doing TF, that would have been more than a year's worth of effort. Anonymous is insane. Anonymous will probably pass LaurV and me soon. I'm glad to see the progress being made![/QUOTE]
To be fair, "airsquirrels" is actually averaging more per day (12K GHz-days) than Anonymous (10K), though split between some LLTF and some DCTF. |