mersenneforum.org

mersenneforum.org (https://www.mersenneforum.org/index.php)
-   Cloud Computing (https://www.mersenneforum.org/forumdisplay.php?f=134)
-   -   Google Diet Colab Notebook (https://www.mersenneforum.org/showthread.php?t=24646)

ATH 2020-01-29 17:05

I'm rarely getting a GPU these days, maybe every other day after many tries, and it's always a T4 now, which is very slow at CUDALucas. The sessions have lasted 6-9 hours lately.

Dylan14 2020-01-30 00:12

And the GPU session ended after 10 hours, which seems to have become a rarity for all of us recently.

kriesel 2020-01-30 00:43

Maybe it's the T4s. I got only ~3:44 on a T4 today, but a Colab Tesla P100 ran from 2020-01-29 13:52:34 to 23:25:32 (UTC).

xx005fs 2020-01-30 02:38

I am consistently getting T4s nowadays on my google accounts, so I have decided to shift from doing PRP to TF on Colab. They last for 10-12 hours using the manual reconnect trick I mentioned before, but Colab is extremely difficult to use nowadays, especially with multiple accounts to manage.

xx005fs 2020-02-02 02:05

The policy for their GPU quota seems to have changed again. Now I have to wait a full 24 hours before I'm assigned 10 hours of run time; otherwise I just get the "failed to get a GPU backend" message. It used to be that every quota reset at 0:00 UTC the next day.

JCoveiro 2020-02-03 16:39

Tesla P100-PCIE-16GB
 
Hi ppl!!

Last night I was running gpuowl on a Tesla-T4 (Google Colab).
But it seems it disconnected this morning. It was running at 6072 us/it.

Now I reconnected again and restarted the job from my drive.
The big surprise came when I saw it was running on a Tesla P100-PCIE-16GB
at 1001 us/it. What an amazing speed!! Cloud Computing FTW!

Btw, thanks a lot for this thread!

kriesel 2020-02-03 17:33

[QUOTE=JCoveiro;536567]Hi ppl!!

Last night I was running gpuowl on a Tesla-T4 (Google Colab).
But it seems it disconnected this morning. It was running at 6072 us/it.

Now I reconnected again and restarted the job from my drive.
The big surprise came when I saw it was running on a Tesla P100-PCIE-16GB
at 1001 us/it. What an amazing speed!! Cloud Computing FTW!

Btw, thanks a lot for this thread![/QUOTE]A T4 or P4 is better used on mfaktc, since they are comparatively much slower at PRP and P-1. The K80, P4, P100, and T4 seen on Colab are described and compared a bit at [URL]https://www.mersenneforum.org/showpost.php?p=533245&postcount=15[/URL]
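For a Colab cell, the advice above (T4/P4 to trial factoring, P100 worth PRP) can be sketched as a small helper. This is only an illustrative heuristic, not part of any GIMPS tool; the function names and return strings are mine:

```python
import subprocess

def colab_gpu_name():
    """Ask nvidia-smi which GPU this Colab session was assigned.
    Returns None when there is no GPU backend."""
    try:
        out = subprocess.run(
            ["nvidia-smi", "--query-gpu=name", "--format=csv,noheader"],
            capture_output=True, text=True, check=True,
        ).stdout.strip()
        return out or None
    except (FileNotFoundError, subprocess.CalledProcessError):
        return None

def suggested_work(gpu_name):
    """Crude mapping from the advice in this thread: T4/P4 are
    comparatively weak at PRP and P-1, so point them at trial
    factoring; a P100 is worth running PRP."""
    if gpu_name is None:
        return "no GPU backend"
    if any(model in gpu_name for model in ("T4", "P4")):
        return "TF (mfaktc)"
    return "PRP (gpuowl)"
```

So `suggested_work("Tesla P100-PCIE-16GB")` points at PRP, while a `"Tesla T4"` session gets sent to mfaktc.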

Dylan14 2020-02-05 16:00

possible symbolic link issues?
 
So recently when running my BOINC script that I use for running various projects on Google Colab, I get the following message while it's configuring everything to run BOINC:

[CODE]/sbin/ldconfig.real: /usr/local/lib/python3.6/dist-packages/ideep4py/lib/libmkldnn.so.0 is not a symbolic link[/CODE]Has anyone else been getting this error or used the package that this link is trying to reference?
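For what it's worth, that ldconfig message is a warning, not a failure: ldconfig expects versioned `.so.N` names to be symlinks to the real library file, and complains when one is a plain file instead. A minimal sketch to check what a given path actually is (the function name is mine):

```python
import os

def describe_so(path):
    """Classify a shared-library path the way ldconfig sees it:
    a proper symlink, a plain file (which triggers the warning),
    or missing entirely."""
    if os.path.islink(path):
        return "symlink -> " + os.readlink(path)
    if os.path.isfile(path):
        return "regular file (ldconfig warns about this)"
    return "missing"
```

Running it on the ideep4py path from the error should report a regular file, which is why ldconfig complains; the library still loads fine either way.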

chalsall 2020-02-06 14:56

Hmmm...
 
So, after a couple of weeks of getting one or two GPU instances per day for one to three hours each (across six front ends), last night I got five T4s for ten hours each. Upon disconnection, I was immediately able to reattach, and got GPUs again.

Bizarre.

chalsall 2020-02-06 23:55

[QUOTE=chalsall;536883]Upon disconnection, I was immediately able to reattach, and got GPUs again.[/QUOTE]

And I'm still getting GPU backends just about every time I ask for one. Almost always lasting ten hours, and almost always full T4s (~10% of the time P100s).

Seven other GPU72_TF users also got a sizable amount of compute today. Fingers crossed this continues...

LaurV 2020-02-07 08:55

Did you put your left hand in the toilet bowl or so?

Send us the toilet!
:shock:


All times are UTC. The time now is 22:51.

Powered by vBulletin® Version 3.8.11
Copyright ©2000 - 2021, Jelsoft Enterprises Ltd.