[QUOTE=Fan Ming;532448]Compiled new version of gpuowl for Google colab.[/QUOTE]
Which commit is that?
[QUOTE=chalsall;532451]OK. Thanks for the detailed report; I'll be able to clean this up tonight.[/QUOTE]
OK. As best as I can tell, you and Storm enjoyed a bit of quality "race condition" time during the outage and shortly afterwards. As far as I can see, you shouldn't have any "hanging assignments" (a bit like "hanging chads," but different). As always, weirdness is where the interesting stuff is...
[QUOTE=kriesel;532500]which commit is that?[/QUOTE]
The last commit on Dec 9 (I forgot to say that I use GMT+8...). [url]https://github.com/preda/gpuowl/commit/1af537800bedfee6eb3ed9fd4f93efdfe99a9ad1[/url]
"Fixed" the annoying always-dirty tag in the gpuowl version string, and also simplified it (based on a script from kriesel). Git will clone the repository to gpuowl-master in Google Drive, which can obviously be changed.
[code]
import os.path
from google.colab import drive

if not os.path.exists('/content/drive/My Drive'):
    drive.mount('/content/drive')
%cd '/content/drive/My Drive/'
!apt-get update
!apt-get install gcc-8 g++-8 libgmp-dev
!update-alternatives --install /usr/bin/gcc gcc /usr/bin/gcc-8 10
!update-alternatives --install /usr/bin/g++ g++ /usr/bin/g++-8 10
!git clone https://github.com/preda/gpuowl.git gpuowl-master
%cd gpuowl-master
!git config core.fileMode false
!make
[/code]
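For anyone following along, once the build finishes, a cell like the one below can kick off the work itself. This is a minimal sketch with some assumptions: that the build above succeeded, and that a worktodo.txt with a PrimeNet assignment has already been placed in the gpuowl-master directory (gpuowl picks up assignments from worktodo.txt in its working directory).
[code]
# Assumes worktodo.txt with an assignment is already in this directory
%cd '/content/drive/My Drive/gpuowl-master'
!./gpuowl
[/code]
Keeping the checkout (and hence the checkpoint files) on Google Drive means progress survives the session resets discussed below.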
Sessions on Colab seem to have reduced from 12 hours to 10 hours now
[QUOTE=bayanne;532532]Sessions on Colab seem to have reduced from 12 hours to 10 hours now[/QUOTE]
I had one run 14 hours yesterday. I do not run it every day, though; that may make a difference in my case.
[QUOTE=bayanne;532532]Sessions on Colab seem to have reduced from 12 hours to 10 hours now[/QUOTE]I saw that yesterday, and the 10 was exact, without the few minutes extra that 12 commonly got. Overnight I got booted after 7 hours, and I can't get a session now.
When I first started this, I got a P100 most of the time. Those days are gone, I believe. Other than a single instance with a P4, it has been K80 for quite a while now.
[QUOTE=storm5510;532587]When I first started this, I got a P100 most of the time. Those days are gone, I believe. Other than a single instance with a P4, it has been K80 for quite a while now.[/QUOTE]
Interesting. I've been getting P4s (although at ~50% capacity) and P100s as often as I get K80s for the last few days. I doubt we'll ever figure out the algorithms Google is using here. We're probably noise in their signals (appreciate the compute, though)...
Today Colab restricted me to 1 session. I had been running 2 for the last couple of months.
[QUOTE=chalsall;532590]Interesting.
I've been getting P4s (although at ~50% capacity) and P100s as often as I get K80s for the last few days. I doubt we'll ever figure out the algorithms Google is using here. We're probably noise in their signals (appreciate the compute, though)...[/QUOTE]And I've been having a hard time getting a K80 on the sessions I'm using to benchmark gpuowl P-1 run time scaling and limits. When I started, a K80 was all I was getting; now it's the exception. It would be great for that if Google made it possible to specify which GPU model was required or preferred, instead of just none, GPU, or TPU.