mersenneforum.org  


Old 2021-02-22, 10:50   #12
bayanne
 
 
"Tony Gott"
Aug 2002
Yell, Shetland, UK

101001100₂ Posts

Still waiting for Colab Pro to be made available for use outside the US ...
Old 2021-02-22, 16:48   #13
danc2
 
Dec 2019

5×7 Posts

Quote:
There are messages that pop up about usage limits.
For sure. The key word in my response was severe: "[no] severe availability issues". See the original post:
Quote:
can be run for a maximum of 12 hours per day without interruption.
This is actually for non-Pro users, so Pro users may have even longer usage limits, depending on the needs/demands of Google. However, I think $9.99/month for intermittent access to up to four high-end GPUs (not to mention decent CPUs), with no ancillary costs on my end (electricity, maintenance, cooling, etc.), is pretty fair, though that is just my opinion.

Quote:
Still waiting for Colab Pro to be made available for use outside the US
Bummer! Have you tried using a VPN to make a Google Account and log into the US endpoint? You may find a workaround that way and/or with a US payment method.

Last fiddled with by danc2 on 2021-02-22 at 17:13
Old 2021-02-22, 17:13   #14
S485122
 
 
"Jacob"
Sep 2006
Brussels, Belgium

2·13·67 Posts

Quote:
Originally Posted by danc2 View Post
...
Bummer! Have you tried using a VPN to make a Google Account and log into the US endpoint? You may find a workaround that way and/or with a US payment method.
Bummer? Not a nice thing to say to a fellow forum user... And in the UK the word has a different meaning: see Bummer on Wiktionary.

Then disparaging someone because he does not want to go against the rules set by a provider is, how to say it... special?

In the meantime, somebody has suggested another explanation for your use of the word. I may have been wrong in my understanding of what you wrote.

Jacob

Last fiddled with by S485122 on 2021-02-22 at 17:39 Reason: another explanation cropped up, thanks Kruoli :-)
Old 2021-02-22, 17:21   #15
kriesel
 
 
"TF79LL86GIMPS96gpu17"
Mar 2017
US midwest

2×3³×107 Posts

I'm always happy to see someone chip in and contribute to development.
The announcement was well crafted. That one errant URL could be fixed in post 1 with the assistance of a kind moderator on request.

Google Colaboratory has at times resorted to requiring ostensibly human image analysis before authorizing a Colab session.
Three-by-three arrays of small and sometimes unclear images, with a requirement to select each image that contains bicycles, or palm trees, or hills, or buses, etc. (one object category per challenge session). Sometimes selected images are replaced with additional ones until no qualifying images remain; sometimes it's only the initial set of 9. And there have sometimes been child windows specifying that the session is for human interactive use, not bots, and requiring click confirmation that yes, it's a human at the keyboard. (I wonder if Colab free use is where Google's coders test their "verify it's a session with a human" algorithms.)

It detects closing of the browser tab (or loss of the internet connection, or of the computer hosting the browser session) and shuts down the Colab VM.
https://www.mersenneforum.org/showpo...&postcount=201

The following caution was posted: "Be careful using more than one Google account. Apparently people on the LCZero project were banned from using Colab because they did that."
https://www.mersenneforum.org/showpo...7&postcount=32 To my knowledge we have not seen such an issue in GIMPS use of Colab.

Gpuowl is reportedly faster than CUDALucas on the same gpu model and exponent task. https://www.mersenneforum.org/showpo...9&postcount=36

Google Drive free capacity is 15 GB, including its trash folder. That is sufficient for PRP-and-proof runs in parallel on CPU and GPU at the current wavefront, if used efficiently.
Note that Google allows multiple free mail, storage, etc. accounts per person, so one's personal email and other cloud storage can be segregated by account, allowing multiple Colab-only accounts to be set up, each with the full free 15 GB. Mprime and Gpuowl clean up after themselves.
Cleaning out the trash https://mersenneforum.org/showpost.p...postcount=1025
"If you'd like to purchase more Drive space, visit Google Drive. Note that purchasing more space on Drive will not increase the amount of disk available on Colab VMs. Subscribing to Colab Pro will."
https://research.google.com/colaboratory/faq.html
Standard plan Google One (100GB) is $20/year; Advanced (200GB) $30/year; Premium (2TB) $100/year. https://one.google.com/about#upgrade
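As a rough sanity check on those capacity numbers, here is a small Python sketch. The ~3.5 GiB-per-first-time-PRP figure is an estimate quoted later in this thread, not an exact value:

```python
# Estimate how many first-time PRP tests (with proof files) fit in a
# free Google Drive at once. The ~3.5 GiB per-test size is an
# approximate figure from this thread, not an exact value.

GIB = 1024 ** 3

def max_parallel_prp_tests(free_bytes: int,
                           per_test_bytes: int = int(3.5 * GIB)) -> int:
    """Number of concurrent first-time PRP tests that fit in free_bytes."""
    return free_bytes // per_test_bytes

free_drive = 15 * GIB  # free-tier Drive, trash emptied
print(max_parallel_prp_tests(free_drive))  # 4
```

So with the trash emptied, roughly four first-time PRP tests can hold proof files on one free account at once, which is consistent with running CPU and GPU tests in parallel.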

Nominal Colab Free session max length is 12 hours CPU-only, 10 hours with a GPU. (TPUs are irrelevant to GIMPS.)
Record longest observed (by me) Colab free session duration >26 hours (with gpu!) https://mersenneforum.org/showpost.p...&postcount=829
Briefest: ~9 minutes https://mersenneforum.org/showpost.p...&postcount=837

Nominal Colab Pro session max length is 24 hours.
Old 2021-02-22, 17:24   #16
kruoli
 
 
"Oliver"
Sep 2017
Porta Westfalica, DE

647 Posts

Quote:
Originally Posted by S485122 View Post
Bummer ? Not a nice thing to say to a fellow forum user...
From what I can recall, I have only heard that expression used when referring to an unfortunate circumstance. As in: "Really unfortunate that Google still has not expanded this to other countries!" I am sure he is not scolding the forum user here.

Edit: Yes, there is also the other meaning; I do not want to deny that. In this case, I took "Bummer" as shorthand for "That's a bummer".

Last fiddled with by kruoli on 2021-02-22 at 17:27 Reason: Additions.
Old 2021-02-22, 18:18   #17
danc2
 
Dec 2019

43₈ Posts

No offense meant to you or anyone, S485122. Kruoli understood my American usage/intention.

Thank you, Kriesel, for your insightful comments. If a moderator can help fix that link, it would be greatly appreciated!
Quote:
Google Colaboratory has resorted at times to requiring ostensibly human image analysis before authorizing a Colab session.
Interesting. I've used Colab (free and Pro) for over 3 months and have not seen this, but I will keep an eye out for it.

Quote:
It detects closing the browser tab.
Yes, true. Historically, it will also shut down if there is no output to the screen. In the README and in the notebooks, we instruct users to keep their tabs open.
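As an illustration only (not code from the notebooks), a cell that keeps producing visible output can be as simple as a timed print loop; the interval and message here are made up:

```python
import itertools
import time
from typing import Optional

def heartbeat(interval_s: float = 60.0, max_beats: Optional[int] = None) -> int:
    """Print a timestamped line every interval_s seconds so the cell
    keeps producing visible output. Returns the number of lines printed."""
    printed = 0
    for n in itertools.count():
        if max_beats is not None and n >= max_beats:
            break
        print(f"[heartbeat {n}] {time.strftime('%H:%M:%S')}", flush=True)
        printed += 1
        time.sleep(interval_s)
    return printed

# In a notebook you would call heartbeat() unbounded; a bounded,
# zero-delay call just demonstrates the output here.
heartbeat(interval_s=0.0, max_beats=3)
```

In practice the GIMPS notebooks produce output naturally (iteration progress lines), which serves the same purpose.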

Quote:
Be careful using more than one Google account.
Fair warning. Thank you.

Quote:
Gpuowl reportedly is faster than CUDALucas on the same gpu model and exponent task.
Indeed. See the Contributing section and subsection General of the repository. Pull requests are welcome.

Quote:
Google Drive free capacity is 15 GB..[sufficient] for PRP&proof runs in parallel on cpu and gpu....purchasing more space on Drive will not increase the amount of disk available on Colab VMs
Yes, you are right that a user can set up a Google account dedicated to GIMPS. But it depends on how many notebooks the account is running. A user may run two GPU backends/notebooks, and I don't know whether there is a notebook limit for CPU notebooks beyond the CPU usage limits. A user could exceed the 15 GiB Drive limit by opening five notebooks that each request PRP assignments (at an average of ~3.5 GiB per first-time test). It is true that users are welcome to purchase more space. We were trying to warn users, but maybe we can rephrase it to sound less like we favor LL tests. I'll talk to Teal about doing so.

I think that Google's comment on Colab VM space is misleading. To clarify, we do not use the Colab VM's disk (/sample_data, or perhaps /, I believe), but rather Google Drive space (/drive), because it is persistent, unlike the Colab VM's disk.

Quote:
Record longest observed (by me) Colab free session duration >26 hours (with gpu!)
Wow! That is amazing!
Thank you again for lending your expertise on this subject.

Last fiddled with by danc2 on 2021-02-22 at 18:21
Old 2021-02-22, 18:55   #18
kriesel
 
 
"TF79LL86GIMPS96gpu17"
Mar 2017
US midwest

2×3³×107 Posts

Since the Google Colab free GPU time allocation per account (~10 hours/day) is consumed twice as fast when running two notebooks on one account, one may as well run one at a time. That also helps with crowding of Drive space, retiring one big set of proof-generation temporary files before reserving (mprime) or creating (gpuowl) the next. For Colab Pro, with multiple GPUs available to one account, I'd probably set it up to branch by model, so that slow-DP models do TF, medium-DP do P-1, and only the fast-DP GPU runs PRP/proof, to conserve space. I have notebooks and drives set up with model-specific paths in Colab free, doing something similar to handle latency differences.
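That branch-by-model idea can be sketched in a few lines of Python. The model-to-class assignments below are illustrative guesses based on relative double-precision throughput, not a definitive benchmark ranking:

```python
# Sketch of routing GIMPS work by GPU model: slow-DP cards do trial
# factoring, medium-DP do P-1, and only fast-DP cards run PRP/proof.
# The classification table is illustrative, not authoritative.

WORK_BY_DP_CLASS = {
    "slow": "TF",          # strong single-precision, weak double-precision
    "medium": "P-1",
    "fast": "PRP/proof",
}

DP_CLASS = {
    "Tesla T4": "slow",
    "Tesla P4": "slow",
    "Tesla K80": "medium",
    "Tesla P100": "fast",
    "Tesla V100": "fast",
}

def work_type_for(gpu_name: str) -> str:
    """Pick a work type for the GPU the session was assigned.
    Unknown models default to TF, the least DP-sensitive task."""
    return WORK_BY_DP_CLASS[DP_CLASS.get(gpu_name, "slow")]

print(work_type_for("Tesla V100"))  # PRP/proof
```

In a real notebook, the model name would come from querying the assigned GPU (e.g. via nvidia-smi) at session start.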

With multiple Colab instances in multiple hosts' browsers, I find it helpful to have a 1:1 mapping between Google account, mprime machine id, GPU app computerid, Google Drive, and notebook instance. It is easy to mess that up during initial setup. A plan and documentation help, initially and during cleanup.
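One minimal way to keep that bookkeeping straight is a small table checked for uniqueness. This is just an illustrative sketch (all accounts, ids, and paths below are made-up placeholders), not anything from the notebooks:

```python
from dataclasses import dataclass, fields
from typing import List

# One record per Colab instance, capturing the 1:1 mapping described
# above. Every field value here is a made-up placeholder.

@dataclass(frozen=True)
class ColabInstance:
    google_account: str      # dedicated Google account
    mprime_machine_id: str   # mprime's ComputerID
    gpu_worker_id: str       # gpuowl/CUDALucas computerid
    drive_path: str          # that account's Drive mount point
    notebook: str            # which notebook copy runs there

def duplicated_fields(fleet: List[ColabInstance]) -> List[str]:
    """Names of fields whose values are not unique across the fleet."""
    bad = []
    for f in fields(ColabInstance):
        values = [getattr(i, f.name) for i in fleet]
        if len(values) != len(set(values)):
            bad.append(f.name)
    return bad

fleet = [
    ColabInstance("gimps01@example.com", "colab01", "colab01-gpu",
                  "/content/drive/colab01", "gpu-notebook-copy-1"),
    ColabInstance("gimps02@example.com", "colab02", "colab02-gpu",
                  "/content/drive/colab02", "gpu-notebook-copy-2"),
]
print(duplicated_fields(fleet))  # [] means the mapping really is 1:1
```

Running the check after any setup change catches the cross-wired-account mistakes described above before they corrupt save files.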

My previous post was based on experience with Google Colab since Oct 2019. Things change.

Last fiddled with by kriesel on 2021-02-22 at 19:14
Old 2021-02-22, 19:19   #19
danc2
 
Dec 2019

5×7 Posts

Quote:
Google Colab free GPU time allocation per account (~10 hours/day) is consumed twice as fast...may as well run one at a time.
The usage limit is per notebook, not per account. See the FAQ. However, one can of course decide to use less of their potential allotment to do PRP tests and help the project out.

Quote:
With multiple Colab instances, on multiple hosts' browsers, I find it helpful to have a 1:1 mapping
Yes, there are definitely a lot of different ways to do it. We actually recommend using Firefox containers. There is also a Chrome equivalent called SessionBox, though we haven't tested it yet.

Quote:
My previous post was based on experience with Google Colab since Oct 2019. Things change.
Understood. I was only hoping to clarify the status of 2020/21. Your insight is appreciated.

Last fiddled with by danc2 on 2021-02-22 at 19:19
Old 2021-02-23, 07:01   #20
LaurV
Romulan Interpreter
 
 
Jun 2011
Thailand

2⁴×13×47 Posts

Testing that right now. It seems to work well, albeit very slowly. I am getting a lot of errors due to deprecated string conversions in cudaLucas, but besides that, it works. "Slow" is because I got a shitty CPU/GPU, not because of the errors. I got a T4 (as you said: Bummer! That card is a waste on LL, and when I run TF from Chris, I almost never get one!), which would need about 70 hours (2 days and 22 hours) for an LL-DC in the 60M range (that's for comparison and the record). Hopefully it will be able to store the checkpoint residues properly on the drive (6 GB free space there) and resume properly when the card vaporizes (it never lasts more than a few hours in this part of the world).

Question: I see you still have the version of mprime which offers ECM work, etc., but that is not accessible in the selection menu. Can you make it so we are able to select it, or input the work type numbers there directly? For example, if I'd like to play with Fermat numbers (yeah, I know, bad example; that's discouraged on the server side... but you get the idea).

Also, you could offer a selection between cudaLucas and mfaktc in case we get a T4 or a P100, etc.; mfaktc would run "out of the box" there.

I named the computer "tsweet" in your honor (because).

Last fiddled with by LaurV on 2021-02-23 at 08:50
Old 2021-02-23, 08:39   #21
LaurV
Romulan Interpreter
 
 
Jun 2011
Thailand

10011000110000₂ Posts

Ok, it kicked me out. I set the CPU work to PRP-CF-DC, which should only take a few hours per test, so I can do them in 2 or 3 puny sessions (puny because they don't seem to last more than 1-3 hours around here).

However, there is another problem: now there is no GPU available for me, and the script won't run "CPU only", and as I said, it is quite a "party" here when I get a GPU. It is even rarer when it is a T4 or P100. Usually it is the P4, which sucks at both LL and TF. I haven't gotten a K80 in a while (still good for LL), and I think they are discontinuing K80s, because K80s have a huge hunger for power. The V100 has never been seen here on this side of the pond; we don't believe it exists, hehe, there are only conspiracies and lies!

So, to be functional, you still have to tickle it! Waiting for it!

Let me choose LL or TF with the GPU if I get a card which I know is better for one or the other, and let me choose "nothing" if I get no GPU. Also, let me select the CPU work I'd like from the whole mprime list, so I can do "shorter" tasks, like P-1, ECM, etc., which I know will finish in the few hours I can keep my hands on the steering wheel. And leave the PRPs that would take me a month to Ben Delo and Curtis C; they can do them faster. If I can't finish a 30-day Colab task (because of improper storage, bad resuming, too much headache and manual work, stupidity, laziness, whatever), then the time and resources would be lost and I won't have helped the project; moreover, I would keep Colab resources busy when they could be put to better use by other people.

Then we talk.

Last fiddled with by LaurV on 2021-02-23 at 09:04
Old 2021-02-23, 16:58   #22
tdulcet
 
 
"Teal Dulcet"
Jun 2018

50₈ Posts

Quote:
Originally Posted by LaurV View Post
Testing that right now. It seems to work well, albeit very slowly. I am getting a lot of errors due to deprecated string conversions in cudaLucas, but besides that, it works. "Slow" is because I got a shitty CPU/GPU, not because of the errors.
Thanks for testing it and for the feedback! Those errors are in the CUDALucas source code, so there is not much we can do about them, and they do not cause any known issues. Our GPU notebook just downloads and builds the latest version of CUDALucas, dynamically making a few changes to fix buffer-overflow errors with the P100 and V100 GPUs.

Quote:
Originally Posted by LaurV View Post
Hopefully it will be able to store the checkpoint residues properly on the drive (6 GB free space there) and resume properly when the card vaporizes (it never lasts more than a few hours in this part of the world).
LL DC tests should take less than 50 MiB of your Google Drive storage. First time PRP tests will take about 3.5 GiB.

Quote:
Originally Posted by LaurV View Post
Question: I see you still have the version of mprime which offers ECM work, etc, but that is not accessible in the selection menu. Can you make it that we be able to select it, or input directly the numbers (work type) there? For example, if I like to play with Fermat numbers (yeah, I know, bad example, that's discouraged from the server side... but you got the idea).
Yes, that would be a trivial change we will consider for the next version of our notebooks.

Quote:
Originally Posted by LaurV View Post
Also, you could offer a selection between cudaLucas and mfaktc in case we get a T4 or a P100, etc., mfaktc would run "from the box" there.
Our notebooks are only designed for primality testing (note the title of this thread). If you want to do trial factoring, I would recommend using the GPU72 notebook. Pull requests are welcome if someone wants to combine our notebooks with the GPU72 notebook.

Quote:
Originally Posted by LaurV View Post
Ok, it kicked me out. I set the CPU work to PRP-CF-DC, which should only take a few hours per test, so I can do them in 2 or 3 puny sessions (puny because they don't seem to last more than 1-3 hours around here).

However, there is another problem: now there is no GPU available for me, and the script won't run "CPU only", and as I said, it is quite a "party" here when I get a GPU.
We have a separate CPU-only notebook for this purpose. If users use our GPU notebook for CPU-only work, they cannot retry to get a GPU backend, which is why we created the separate CPU-only notebook. If you are using the free Colab, we would recommend running both our "GPU and CPU" and "CPU only" notebooks to get the most throughput.

Note that because of how MPrime is set up, users cannot currently change the CPU type of work after first running the notebooks; they can only change the GPU type of work. You can get around this by creating a new copy of the notebook with a different computer number value. Users can currently create up to 10 copies of each notebook, using computer number values of 0-9.

Quote:
Originally Posted by LaurV View Post
If I can't finish a 30-day Colab task (because of improper storage, bad resuming, too much headache and manual work, stupidity, laziness, whatever), then the time and resources would be lost and I won't have helped the project; moreover, I would keep Colab resources busy when they could be put to better use by other people.
Everything should be completely automated. Note that with Colab Pro, first time primality tests on the GPU will only take around 2-3 days.