[QUOTE=LaurV;536946]Did you put your left hand in the toilet bowl or so? Send us the toilet![/QUOTE]
I don't understand the reference, but something has changed (to our advantage). Several people are getting sustained instances again. I just spun up five out of six requests.
[QUOTE=chalsall;536973]I don't understand the reference, but something has changed (to our advantage). Several people are getting sustained instances again. I just spun up five out of six requests.[/QUOTE]
Something has changed indeed. I can no longer get any Skylake CPU sessions. Last night, after trying for a while, I even got the message that no CPU backends are available. :/
[QUOTE=PhilF;536974]Something has changed indeed. I can no longer get any Skylake CPU sessions. Last night, after trying for a while, I even got the message no CPU backends are available. :/[/QUOTE]
Ditto. OTOH, for now, P100s are becoming available somewhat more frequently, so...
It is definitely tracked by account. I use 2 accounts, daily, always from the same machine and browser, but different Google logins. Never had a problem before with CPU availability.
But now one account, the one I was resetting over and over trying to get a Skylake, cannot get a CPU session at all. The other account has no problem getting a CPU session, as long as a Haswell or Broadwell is good enough. EDIT: Now that I have given the first account some time to "cool off", I was able to obtain a CPU session with it.
[QUOTE=chalsall;536973]I don't understand the reference[/QUOTE]
Well... similar to [URL="https://www.urbandictionary.com/define.php?term=step%20in%20shit"]this[/URL], or [URL="https://en.wikipedia.org/wiki/Horseshoe#Superstition"]this[/URL], or the last two sections of [URL="https://dreamastromeanings.com/what-does-it-mean-when-a-bird-poops-on-you/"]this[/URL]... (random links)
Google has updated their Colab FAQ, and they have also introduced a Pro version of Colab: $9.99 a month for up to 24-hour runtimes, more memory, and higher priority on fast GPUs... though they aren't guaranteeing anything.
[url]https://research.google.com/colaboratory/faq.html[/url] [url]https://colab.research.google.com/signup[/url]
[QUOTE=kracker;537025]Google has updated their Colab FAQ, and also they have introduced a Pro version of Colab... $9.99 a month with up to 24 hour runtimes, more memory and higher priority on fast GPU's... though they aren't guaranteeing anything.
[url]https://research.google.com/colaboratory/faq.html[/url] [url]https://colab.research.google.com/signup[/url][/QUOTE] Nice, the last point is also important: "Where is Colab Pro available? For now, Colab Pro is only available in the US."
[QUOTE=kracker;537025]Google has updated their Colab FAQ, and also they have introduced a Pro version of Colab... $9.99 a month with up to 24 hour runtimes, more memory and higher priority on fast GPU's... though they aren't guaranteeing anything.
[/QUOTE] Finally. I've signed up. Let's see how this works out.
[QUOTE=R. Gerbicz;537026]Nice, the last point is also important:
"Where is Colab Pro available? For now, Colab Pro is only available in the US."[/QUOTE] I would be very interested to know when Colab Pro becomes available elsewhere... I also wonder what the limitations are on the number of instances you can run at any one time.
Detecting gpu model or no-gpu
Detecting the gpu model (or the absence of a gpu), and then branching accordingly to TF, P-1 benchmarking, mprime-only, or whatever else, may be possible by adapting the code at the front of [URL]https://colab.research.google.com/notebooks/pro.ipynb#scrollTo=23TOba33L4qf[/URL]:
[CODE]gpu_info = !nvidia-smi
gpu_info = '\n'.join(gpu_info)
if gpu_info.find('failed') >= 0:
  print('Select the Runtime → "Change runtime type" menu to enable a GPU accelerator, ')
  print('and then re-execute this cell.')
else:
  print(gpu_info)[/CODE]
After a quick test, I think something like the following would work:
[CODE]gpu_info = !nvidia-smi
gpu_info = '\n'.join(gpu_info)
if gpu_info.find('failed') >= 0:
  print('Select the Runtime → "Change runtime type" menu to enable a GPU accelerator, ')
  print('and then re-execute this cell.')
else:
  print(gpu_info)
  if gpu_info.find('Tesla T4') >= 0:
    print('code here for Tesla T4 case')
  elif gpu_info.find('Tesla P4') >= 0:
    print('code here for Tesla P4 case')
  elif gpu_info.find('Tesla P100') >= 0:
    print('code here for Tesla P100 case')
  elif gpu_info.find('Tesla K80') >= 0:
    print('code here for Tesla K80 case')
  else:
    print('unexpected gpu model')[/CODE]
(written as an if/elif chain, since Python has no switch/case statement).
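If it helps, the same dispatch can also be sketched as a small, testable plain-Python function. The Tesla model strings are the ones checked above; the returned branch labels ('t4', 'p100', etc.) are my own placeholder names, not anything official:

```python
# Sketch only: map the text printed by `!nvidia-smi` to a branch label.
# The Tesla model names are the ones checked in the snippet above; the
# returned labels are placeholder names.

def gpu_branch(gpu_info: str) -> str:
    """Pick a branch label from nvidia-smi output text."""
    if not gpu_info.strip() or 'failed' in gpu_info:
        return 'cpu-only'  # no GPU runtime was allocated
    for model in ('Tesla T4', 'Tesla P4', 'Tesla P100', 'Tesla K80'):
        if model in gpu_info:
            return model.split()[-1].lower()  # e.g. 'Tesla P100' -> 'p100'
    return 'unknown-gpu'
```

Each label could then select a different folder, worktodo, or tuning file.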
Draft cpu-only and gpu-model specific branching script
I've made a Colab script with multiple branches for cpu-only or various gpu models.

Run it with GPU selected under Runtime → Change runtime type. If you forget, it will remind you. If you get only a cpu, it will ask whether to continue or quit. If you get a gpu, it will branch according to the model, to a separate folder on Google Drive. You can stop the script later and restart it to see if a gpu has become available. The folders for the different branches can contain a different gpuowl work type, worktodo, or config file, etc., or different mfaktc tunes for different gpu models. It also warns if the worktodo file size seems too small. See [URL]https://www.mersenneforum.org/showpost.php?p=537155&postcount=16[/URL] for the script attachment and more info. I've tried it a bit and the easiest bugs were fixed; the rest haven't been found yet. Updates to the base script, if any, will be posted there.
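As an illustration of the worktodo size check described above, here is a minimal sketch; the 40-byte threshold and the function name are my own assumptions, not what the actual attached script uses:

```python
import os

# Assumed threshold: a single assignment line is normally longer than this.
MIN_WORKTODO_BYTES = 40

def worktodo_looks_ok(path: str) -> bool:
    """Return False if the worktodo file is missing or suspiciously small."""
    try:
        return os.path.getsize(path) >= MIN_WORKTODO_BYTES
    except OSError:  # missing or unreadable file counts as a failed check
        return False
```

A script could print a warning and pause for user input when this returns False, rather than silently running with no assignments.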
Powered by vBulletin® Version 3.8.11
Copyright ©2000 - 2021, Jelsoft Enterprises Ltd.