Yeah, I got 4 of these today and hadn't noticed. I was studying the completed-assignments page looking for factors, and saw it was doing TF on exponents in the 99M-100M range from 76 to 77 bits in 1 hour and 2 minutes, consistently, and was perplexed. I honestly thought the thing was bugged. Then I saw these Tesla V100-SXM2-16GB beasts churning away at 3600 GHz-d/day each and was like, :max:
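As a sanity check on those figures, the quoted throughput and runtime imply the per-assignment credit. A rough back-of-the-envelope sketch, using only the numbers in the post above:

```python
# Back-of-the-envelope check of the numbers in the post:
# each V100 reports ~3600 GHz-days/day of throughput and finishes
# one 76->77 TF assignment in about 1 hour 2 minutes.
rate_ghzd_per_day = 3600          # throughput from the post
minutes_per_assignment = 62       # 1 hour 2 minutes

days = minutes_per_assignment / (24 * 60)
credit_per_assignment = rate_ghzd_per_day * days
print(f"Implied credit per assignment: {credit_per_assignment:.0f} GHz-days")
# prints: Implied credit per assignment: 155 GHz-days
```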
Fastest V100 seen recently
Common speeds for me are 3700-3800 GHz-d/d. The one in the screenshot has been hovering around 4000. Granted, that's less than 10% variance based on what I'm running now. Still, the variations in speed are interesting to watch. :smile: [COLOR=Silver](and bigger numbers are fun anyway.)[/COLOR]
[QUOTE=Aramis Wyler;555684]Then I saw these Tesla V100-SXM2-16GB beasts.[/QUOTE]
I've had mostly T4 lately (2 for about 90 minutes) but never a V100....just luck?
[QUOTE=petrw1;556609]I've had mostly T4 lately (2 for about 90 minutes) but never a V100....just luck?[/QUOTE]
I /think/ these are only being given to the Paid Tier accounts. I also have never seen a V100, but often am given T4s.
[QUOTE=chalsall;556634]I /think/ these are only being given to the Paid Tier accounts. I also have never seen a V100, but often am given T4s.[/QUOTE]
Same here. For the last three weeks or so, I have been given mostly T4s (alas, for very short times...) but never a V100. I only use free Colab accounts.
I can usually get 2 T4s on free accounts. They last almost exactly 1.5 hours. Even on paid, I still get offered a P100 occasionally, but I throw them back. They use similar power (250-300 W) to a V100 for much less output. The sweet thing about T4s is they do about 1.5 times the work of a P100 for 70 W.
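The perf-per-watt claim above is easy to quantify. A quick sketch: the 275 W figure is just the midpoint of the quoted 250-300 W P100 range, and T4 output is normalized to 1.5x a P100's, per the post:

```python
# Rough perf-per-watt comparison using the figures in the post:
# a T4 does ~1.5x the TF work of a P100 while drawing ~70 W,
# versus ~250-300 W for the P100 (275 W midpoint assumed here).
t4_work, t4_watts = 1.5, 70       # work normalized to P100 = 1.0
p100_work, p100_watts = 1.0, 275

t4_eff = t4_work / t4_watts
p100_eff = p100_work / p100_watts
print(f"T4 is ~{t4_eff / p100_eff:.1f}x the work per watt of a P100")
```

On those assumptions a T4 comes out close to six times as efficient per watt, which is why throwing back P100s makes sense.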
111957247
I was given exponent 111957247 (75-76) in my Colab. It finished and submitted fine, and I can see the result in the "View Completed" section of gpu72.com.
It wasn't showing up on mersenne.org for my account, so I just gave it some time. When I looked at the exponent, it shows ktony had already completed 75-76 for that exponent on the same day (today, 9/22). No factor was found and it's really no big deal, and I am in absolutely no way accusing ktony of anything; I just found it curious.
[QUOTE=LOBES;557574]I just found it curious.[/QUOTE]
It appears I have some kind of a "race condition" such that every once in a while work is reassigned to Colab TF'ing clients. I have spent literally hours trying to figure out my Stupid Programmer Error on this. The good news is this manifests extremely rarely (less than once every 1,000 assignments). But I'm afraid I have no time to review my code-paths further at the moment.
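For what it's worth, the usual defense against this kind of double-handout race is to make the claim itself a single atomic conditional update, so two clients asking at the same moment can never both be given the same exponent. A toy sketch (not the actual GPU72 code; the table and column names are made up for illustration):

```python
import sqlite3

# Toy illustration: hand out each TF assignment with one atomic
# conditional UPDATE, so two concurrent claimants can never both win.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE work (exponent INTEGER PRIMARY KEY, claimed_by TEXT)")
db.execute("INSERT INTO work VALUES (111957247, NULL)")

def claim(client):
    # rowcount is 1 only for the client whose UPDATE actually matched
    # the still-unclaimed row; everyone else sees 0 and asks again.
    cur = db.execute(
        "UPDATE work SET claimed_by = ? "
        "WHERE exponent = 111957247 AND claimed_by IS NULL",
        (client,),
    )
    db.commit()
    return cur.rowcount == 1

print(claim("first_client"))   # True  -- first claimant wins the row
print(claim("second_client"))  # False -- already claimed, no reassignment
```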
[QUOTE=chalsall;557580]But I'm afraid I have no time to review my code-paths further at the moment.[/QUOTE]
And as I understand it, you could tell us why you have no time to review your code-paths, but then you'd have to kill us. :smile:
[QUOTE=PhilF;557584]And as I understand it, you could tell us why you have no time to review your code-paths, but then you'd have to kill us. :smile:[/QUOTE]
LOL... Thanks for that. :smile: No, I wouldn't need to kill you. But first, there would be a mountain of paperwork before I could even share the meta... :wink:
To LOBES: Regardless of fault, I regret the involuntary poaching. Wasted work is a disappointment, and it slows progress. Thanks for your understanding.
[QUOTE]I can usually get 2 T4s on free accounts. They last almost exactly 1.5 hours.[/QUOTE] I have stopped pursuing GPUs on free accounts. I find that I can sometimes run as many as 8 CPU-only instances, distributed over 4 free accounts. These tend to run for 12 hours, though the occasional instance cuts out in the 3-5 hour range; if I happen to check, I can often restart these. In any case, getting 8 instances of P-1 which run for roughly 12 hours is a tidy bit of work that doesn't hit my electric bill.

I'm still running P-1 on 4 local workers (8 cores total) just because I have the RAM to do it. I have leaned toward having one LLDC worker (2 cores each) on each of the two machines, so that P-1 usually or always gets all the RAM it wants.

I pulled my last discrete GPU weeks back because I was doing so much LLTF on Colab that a 1060 was a pitiful contribution. That 6700K machine is now drawing 180 W running P95 at 4200 MHz. I am really thinking it would be cheaper to run on Colab and stop running 2 machines 24/7. I'd still like to do some LL, though, which leads me back to running P-1 locally just to justify the RAM investment.

On the 8-core box I could see doing P-1 on 4 cores (2 workers, 2 cores each) and running LL, either 1st time or DC, on the other 4. On the 4-core box, 2 cores for LLDC and 2 for P-1 seems right.

All this leaves aside the whole PRP question. I don't have an absolute allegiance to LL, but I haven't followed PRP closely.
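The "is Colab cheaper?" question above is straightforward arithmetic. A sketch, where the $0.13/kWh rate is an assumed placeholder rather than anything from the post:

```python
# Monthly cost of one machine drawing a steady 180 W, 24/7.
# The electricity rate is an assumption -- plug in your own tariff.
watts = 180
rate_per_kwh = 0.13               # assumed rate, not from the post
hours_per_month = 24 * 30

kwh = watts / 1000 * hours_per_month
print(f"{kwh:.1f} kWh/month, ~${kwh * rate_per_kwh:.2f}/month")
# prints: 129.6 kWh/month, ~$16.85/month
```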
Powered by vBulletin® Version 3.8.11
Copyright ©2000 - 2021, Jelsoft Enterprises Ltd.