[QUOTE=James Heinrich;290791]That seems a lot. I can feed a 570 with only 2 cores of a 3930K, leaving 4 cores left for P-1... what is your 6-core CPU?[/QUOTE]
[QUOTE=Dubslow;290789]So you mean you had 6 instances on one 580?[/QUOTE] Ah, yes, that would be a lot... Four cores and 4 instances (one core per instance). This is a 1055T, so not as efficient as Intel (and definitely not SB-E). Right now the other two cores do P-1 and run CUDALucas on another 580; this also leaves some headroom for the system to be usable. I was running 5 instances on a 580 along with CUDALucas on the other 580 and a P-1, but that made the system so unresponsive it was irritating. I'm actually quite happy with how everything runs right now, but if we need a shift to P-1 as we mature in G272, then I can slow down on TFing a bit and add to P-1. Either way, if we notice more TF needs to be done, I can shift back.
[QUOTE=flashjh;290792]I'm actually quite happy with how everything runs right now, but if we need a shift to P-1 as we mature in G272, then I can slow down on TFing a bit and add to P-1. Either way, if we notice more TF needs to be done, I can shift back.[/QUOTE]
I would argue GPUs should be used maximally for TFing, rather than diverting the resource to P-1ing. We can always release candidates without P-1 done back to PrimeNet if we find we have too many cached. But only GPUs can do the TFing we're doing.
[QUOTE=flashjh;290792]Ah, yes, that would be a lot... Four cores and 4 instances (one core per instance). This is a 1055T, so not as efficient as Intel (and definitely not SB-E).
Right now the other two cores do P-1 and run CUDALucas on another 580; this also leaves some headroom for the system to be usable. I was running 5 instances on a 580 along with CUDALucas on the other 580 and a P-1, but that made the system so unresponsive it was irritating. I'm actually quite happy with how everything runs right now, but if we need a shift to P-1 as we mature in G272, then I can slow down on TFing a bit and add to P-1. Either way, if we notice more TF needs to be done, I can shift back.[/QUOTE] I have to agree with chalsall, we have numerous people with cores doing DC or LL that could help on the P-1 effort. I personally have 12 cores doing DC, 2 doing LL, 1 doing P-1, 1 doing a 332M and 11 cores running GPUs. With the advent of the 27.3 64-bit, I'll be switching a couple of cores to P-1 once I finish a few more DCs.
[QUOTE=bcp19;290811]I have to agree with chalsall, we have numerous people who have cores doing DC or LL that could help on the P-1 effort. I personally have 12 cores doing DC, 2 doing LL, 1 doing P-1, 1 doing a 332M and 11 cores running GPUs. With the advent of the 27.3 64 bit, I'll be switching a couple of cores to P-1 once I finish a few more DC's.[/QUOTE]
I guess I could take a hint from this. I have 1 out of 6 cores on LL/DC now. With 2 mfaktc cores and 3 P-1, it is pretty low maintenance. The setup occasionally has to block an S2, but it also sometimes runs 2 S1's. When I finish my current DC I will shift that worker back to P-1.
I'd actually say that 4 P-1 workers will probably get you less overall throughput, as you'd then need MaxHighMemWorkers=3, and that'll cause memory issues. You're more than 50% P-1, which is better than most people. I have 3/4 P-1, 1/4 mfaktc, and on an i3M laptop, 1 P-1 and 1 LL/DC (low memory). Since I've switched to 27.3, I've run into a lot more memory-bandwidth bottlenecking (mfaktc dropped 8M/s) and was actually considering switching to 2 P-1, 1 LL/DC on my main box.
Team "GPU to 72" is now 1, 2 and 3!
Now that PrimeNet is back, I'm happy to report that Team "GPU to 72" is now:
[URL="http://www.mersenne.org/report_top_teams_TF/"]#1 for Trial Factoring.[/URL] [URL="http://www.mersenne.org/report_top_teams_P-1/"]#2 for P-1 Factoring.[/URL] [URL="http://www.mersenne.org/report_top_teams/"]#3 Overall.[/URL] (For the last year. Not bad for four months of work.) Thanks for everything Team Mates!!! :smile:
[QUOTE=Dubslow;290828]I'd actually say that 4 P-1 workers will probably cause you to get less overall throughput, as you'll need to have MaxHighMemWorkers=3 then, and that'll cause memory issues. You're more than 50% P-1, which is better than most people. .............[/QUOTE]
Good points. I would have to cut back the memory allocation per worker to run MaxHighMemWorkers=3. Just a minor detail: the CPU is 50% P-1, 33.3% feeding mfaktc, and 16.7% alternating between LL and DC right now.
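The MaxHighMemWorkers trade-off above comes down to Prime95 settings. A minimal local.txt sketch follows; the option names (Memory, MaxHighMemWorkers) are the ones Prime95 reads, but the memory figures and time window here are purely illustrative, not taken from the thread:

```
; local.txt sketch (illustrative values only).
; The day/night memory limits (in MB) are shared among the workers
; allowed to use lots of memory, so capping the high-memory workers
; keeps concurrent P-1 stage 2 runs from exhausting RAM.
Memory=8000 during 7:30-23:30 else 12000
MaxHighMemWorkers=2
```

With a cap of 2, a third P-1 worker that reaches stage 2 waits (the "block an S2" behavior mentioned earlier) rather than starting with too little memory to be efficient.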
[CODE]Sorry KyleAskine, but you already have too many assignments.
In the last 30 days you have done on average 496.450 GHz Days of work per day. You currently have 2275 assignments totalling 9532.391 GHz Days of work assigned, or 19 days worth based on your history. The oldest is 9 days old.[/CODE] I thought you could check out 30 days at a time?
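The server's "19 days worth" figure in the error above is just assigned work divided by recent throughput; a quick check in Python, using the numbers PrimeNet reported:

```python
# PrimeNet's backlog estimate: GHz-days currently assigned divided by
# the 30-day average of GHz-days completed per day (both figures
# quoted from the error message above).
avg_per_day = 496.450   # GHz-days of work done per day, 30-day average
assigned = 9532.391     # GHz-days of work currently assigned

print(round(assigned / avg_per_day))  # 19, matching "19 days worth"
```

So the 30-day window isn't the binding limit here; the 2000-assignment cap mentioned below the quote is.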
[QUOTE=KyleAskine;290868]I thought you could check out 30 days at a time?[/QUOTE]
Up to a maximum of 2000 assignments... This was added at the same time as the GHzDays Saved metric, to prevent people from grabbing all the low TF level exponents as they become available. As I said above, I have no problem with people doing low TFing only one or two levels. But I think that everyone should have the opportunity to do so (limited, of course, by what's available from PrimeNet).
[QUOTE=chalsall;290876]Up to a maximum of 2000 assignments...
This was added at the same time as the GHzDays Saved metric was added, to prevent people from grabbing all the low TF level exponents as they become available. As I said above, I have no problem with people doing low TFing only one or two levels. But I think that everyone should have the opportunity to do so (limited, of course, by what's available from PrimeNet).[/QUOTE] No worries! I would love those 69->70's sitting there, but I took all of the 70->71's yesterday, so I will deal with it!
[QUOTE=KyleAskine;290877]No worries! I would love those 69->70's sitting there, but I took all of the 70->71's yesterday, so I will deal with it![/QUOTE]
Thanks for the tip! I grabbed some of those, though I'm taking them 69-72.
Powered by vBulletin® Version 3.8.11
Copyright ©2000 - 2021, Jelsoft Enterprises Ltd.