[QUOTE=axn;527395]How do you connect your google drive in kaggle?[/QUOTE]
No idea. Not even sure it's possible.
[QUOTE=pinhodecarlos;527411]Anyway of not having to babysit every 12 hours?[/QUOTE]
Nope. At least on Colab. Please remember that this was intended for education -- humans in front of consoles. What we're doing with it is a bit "out of scope"; I'm just thrilled they appear to be OK with this use case. I haven't had the time to drill down on it, but there /is/ an [URL="https://github.com/Kaggle/kaggle-api"]API for Kaggle[/URL], which looks like it could be used to automate even the spinning up of instances...
Sorry for the possibly stupid question, but is the authorization code to mount Google Drive the Google password, or something else? Not keen on providing specific passwords to other places, so I didn't try it, not knowing which one is required. Thx.
[QUOTE=EdH;527420]Sorry for the possibly stupid question, but is the authorization code to mount Google Drive the Google Password or something else.[/QUOTE]
There are no stupid questions, only stupid programmers... :wink: When you run "drive.mount('/content/drive')" in a Colab Notebook cell, the script pauses and displays a line like "Please click this link and copy-and-paste the resulting key to allow access". When you click on the link, a new browser window or tab pops up, asking you to confirm that you want to grant access. You are then given a very long "one-time shared secret" to copy and paste back into the Notebook to continue. And so, no... You are not actually giving the Notebook your Google credentials. You ARE, however, giving the instance complete read-write access to your entire Drive. Be VERY VERY careful that you trust any and all Notebooks to which you give access. There is nothing preventing it from deleting all your files, or perhaps worse, uploading them off to some "Blackhat" server somewhere...
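For reference, the mount step described above can be sketched as a small guarded snippet (a minimal illustration only; the helper name "mount_gdrive" is mine, and the import guard just means the snippet does nothing outside a Colab runtime):

```python
def mount_gdrive(mountpoint="/content/drive"):
    """Mount Google Drive when running inside Colab; return True on success."""
    try:
        # google.colab is only importable inside a Colab runtime
        from google.colab import drive
    except ImportError:
        return False  # not running in Colab; nothing to mount
    # This call pauses and prints the authorization link described above
    drive.mount(mountpoint)
    return True
```

Run outside Colab, the function simply returns False instead of raising, which makes the same notebook usable on a plain machine.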
[QUOTE=Dylan14;527399]Indeed it is possible <to run ECM>.[/QUOTE]
Thanks, ran my first few curves last night.
[QUOTE=chalsall;527421]. . .
You ARE, however, giving the instance complete read-write access to your entire Drive. Be VERY VERY careful that you trust any and all Notebooks to which you give access. There is nothing preventing it from deleting all your files, or perhaps worse, uploading them off to some "Blackhat" server somewhere...[/QUOTE]Does this access go away if I share a notebook and/or close the Colab session? Or, is there another avenue to revoke the connection?
[QUOTE=EdH;527433]Does this access go away if I share a notebook and/or close the Colab session? Or, is there another avenue to revoke the connection?[/QUOTE]
I would /presume/ it goes away if you close the Colab session. But to be honest, except at the very beginning of this exercise, I haven't been attaching Drive(s) to any instances. In my mind it doesn't scale. Perhaps others here who do attach can speak to your question with greater authority than I can render.
[QUOTE=chalsall;527434]I would /presume/ it goes away if you close the Colab session. But to be honest, except at the very beginning of this exercise, I haven't been attaching Drive(s) to any instances. In my mind it doesn't scale.
Perhaps others here who do attach can speak to your question with greater authority than I can render.[/QUOTE] Thanks! For now I'm probably not going to attach either. I have been able to compile CADO-NFS in a similar fashion to Dylan14's instructions on page 13 of this thread, and plan to play with some other factoring packages. Indeed, this is fun, but also puzzling on a couple of things:
1: I seem to have a session with 2 cores only, but quite a bit of RAM.
2: My disk stats show I am using 25GB of 46GB.
I will probably be back with questions about sharing if any of my other factoring packages show promise...
[QUOTE=Dylan14;526770]I did try that, but it didn't work as it gave me the same error. However, adding this instead worked:
[CODE]tasks.execpath=/content/cado/cado-nfs/build/bin/[/CODE][/QUOTE] I compiled a session similar to your script and ran it - thanks! Did you compile CADO-NFS telling it to use build/bin/, or did you rename it manually? It looks like you should be able to rename it at compile time by creating a local.sh file with the line: [code] build_tree="${up_path}/build/`hostname`" [/code]changed to your desired name. Is this possibly what you did, or am I reading the docs incorrectly?
[QUOTE=EdH;527436]1: I seem to have a session with 2 cores only, but quite a bit of RAM.
2: My disk stats show I am using 25GB of 46GB.[/QUOTE] Yup. The number of CPUs and amount of RAM can vary depending on whether you've also attached a GPU to the instance. WRT the ~54% disk utilization, there's a bunch of sample data included with each instance. You can blow this away if you need some additional storage.
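The core/RAM/disk numbers discussed above can be checked from inside the notebook itself. Here's a small sketch using only the Python standard library (the /proc/meminfo read assumes a Linux instance, which is what Colab provides):

```python
import os
import shutil

# Number of CPU cores visible to this instance
print("CPUs:", os.cpu_count())

# Total RAM from /proc/meminfo (first line is MemTotal, in KiB)
with open("/proc/meminfo") as f:
    mem_gib = int(f.readline().split()[1]) / 2**20
print(f"RAM: {mem_gib:.1f} GiB")

# Disk usage of the root filesystem
total, used, _free = shutil.disk_usage("/")
print(f"Disk: {used / 2**30:.1f} GB used of {total / 2**30:.1f} GB")
```

Running this in a cell is a quick way to confirm what hardware a fresh session actually handed you, since it can vary from session to session.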
[QUOTE=EdH;527437]I compiled a session similar to your script and ran it - thanks!
Did you compile CADO-NFS telling it to use build/bin/ or did you rename it manually? It looks like you should be able to rename it at compile time by creating a local.sh file with the line: [code] build_tree="${up_path}/build/`hostname`" [/code]changed to your desired name. Is this possibly what you did, or am I reading the docs incorrectly?[/QUOTE] I just renamed the folder manually to build/bin/ so that it would be easier to code.
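For anyone who does want the compile-time route asked about above, the local.sh fragment might look like this (an untested sketch, substituting a fixed name for the `hostname` default quoted earlier -- I haven't verified it against a CADO-NFS build myself):

```shell
# local.sh -- place in the top-level CADO-NFS source directory.
# Replaces the default per-hostname build directory with a fixed name,
# so the binaries land under build/bin/ regardless of the machine's hostname.
build_tree="${up_path}/build/bin"
```

Either way you end up with the same tasks.execpath, so the manual rename and the local.sh override should be interchangeable for this purpose.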