[QUOTE=chalsall;527967]
Or are you instead asking how to get the executable into Kaggle? In that case: [CODE]!wget URL_OF_PACKAGE
!tar -xzvf PACKAGE
!PATH/EXECUTABLE[/CODE] [/QUOTE] Thanks, I needed that. A note for my own benefit: internet access has to be turned on in the Settings section on Kaggle first.
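For anyone following along, the extract step from the quoted snippet can be tried end-to-end with no network access. The archive and file names below are made-up placeholders standing in for PACKAGE, not anything from the quote:

```shell
# Build a tiny gzipped tarball standing in for PACKAGE, then unpack it
# with the same flags as in the quote (x=extract, z=gunzip, v=verbose, f=file).
mkdir -p pkg
printf 'echo hello from the extracted executable\n' > pkg/run.sh
tar -czf package.tar.gz pkg
tar -xzvf package.tar.gz
sh pkg/run.sh
```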
What if someone using Google's hardware actually found a new Mersenne prime? Is there anything in the award rules that mentions anything about who actually owns the hardware that found the prime?
[QUOTE=PhilF;527996]What if someone using Google's hardware actually found a new Mersenne prime? Is there anything in the award rules that mentions anything about who actually owns the hardware that found the prime?[/QUOTE][URL]https://www.mersenne.org/legal/#rules[/URL] says you need written proof. See "Evidence of Authority". I'd think that a publicly available web page offering free computing would suffice if printed out. Mersenne.org board members would need to decide that if it comes up, if not beforehand.
[QUOTE=chalsall;527809]A couple of things you might look into is setting up an NFS connection between your instance(s) and a "public-facing" server you have control of. Or, you could simply "scp" the files out every half hour or so (again, to a public-facing server you control).[/QUOTE]
Because this is a useful metric, I thought I'd share... [CODE]root@38283cd23dd0:~/data# dd bs=1048576 count=10 </dev/urandom > ten_m.dat
10+0 records in
10+0 records out
10485760 bytes (10 MB, 10 MiB) copied, 0.0530785 s, 198 MB/s
root@38283cd23dd0:~/data# scp -P A_PORT ten_m.dat AN_ACCOUNT@ssh.instanceroot.com:~/
AN_ACCOUNT@ssh.instanceroot.com's password:
ten_m.dat                                 100%   10MB   2.4MB/s   00:04
root@38283cd23dd0:~/data# dd bs=1048576 count=1000 </dev/urandom > one_g.dat
1000+0 records in
1000+0 records out
1048576000 bytes (1.0 GB, 1000 MiB) copied, 7.23863 s, 145 MB/s
root@38283cd23dd0:~/data# scp -P 53124 one_g.dat AN_ACCOUNT@ssh.instanceroot.com:~/
AN_ACCOUNT@ssh.instanceroot.com's password:
one_g.dat                                 100% 1024MB   1.3MB/s   13:18
root@38283cd23dd0:~/data# [/CODE] Four seconds for a 10MB file and 1.3 MB/s sustained for a 1GB file. Not too bad... :cool:
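As a sanity check on the dd figures above: dd reports decimal megabytes per second, so each rate is just bytes divided by elapsed seconds. A quick sketch, plugging in the numbers from the transcript:

```python
def throughput_mb_s(n_bytes: int, seconds: float) -> float:
    """Decimal MB/s, matching how dd reports its transfer rate."""
    return n_bytes / seconds / 1_000_000

# Figures taken from the dd output above.
print(round(throughput_mb_s(10_485_760, 0.0530785)))   # 198
print(round(throughput_mb_s(1_048_576_000, 7.23863)))  # 145
```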
Sorry to repost about this issue -- but my main Google account is having serious issues using Colab at this point. I can no longer get a session longer than a few hours, and as of this AM I keep getting repeatedly disconnected after just a couple of minutes. It's not saying no GPU is available; it's allowing me to reconnect, then simply dropping me after 2-3 minutes.
Is there anything to do about this? All I've been doing is running mfaktc. I have a secondary account that has been running for the past 4-5 days, reconnecting every 12 hours, with no issues.
[QUOTE=mnd9;528046]
I have a secondary account that is running for the past 4-5 days[B] reconnecting each 12 hours continuously[/B] with no issues.[/QUOTE] Well, maybe in a couple of days you'll start having trouble with this one as well. It's probably better to give them a break for a week or so; I think they tend to discourage people from using their rigs continuously. In the meantime, try using CPUs instead and see what happens. Do some ECM curves using mprime (if you don't mind connecting your Google Drive).
[QUOTE=chalsall;528008]Four seconds for a 10MB file and 1.3 MB/s sustained for a 1GB file. Not too bad... :cool:[/QUOTE]
Further data, this time from Kaggle:[CODE]root@Kaggle_GPU2:~/data# dd bs=1048576 count=10 </dev/urandom > ten_m.dat
10+0 records in
10+0 records out
10485760 bytes (10 MB, 10 MiB) copied, 0.060723 s, 173 MB/s
root@Kaggle_GPU2:~/data# scp -P SSH_PORT ten_m.dat AN_ACCOUNT@ssh.instanceroot.com:~/
ten_m.dat                                 100%   10MB   9.8MB/s   00:01
root@Kaggle_GPU2:~/data# dd bs=1048576 count=1000 </dev/urandom > one_g.dat
1000+0 records in
1000+0 records out
1048576000 bytes (1.0 GB, 1000 MiB) copied, 6.09981 s, 172 MB/s
root@Kaggle_GPU2:~/data# scp -P SSH_PORT one_g.dat AN_ACCOUNT@ssh.instanceroot.com:~/
one_g.dat                                 100% 1000MB   8.5MB/s   01:57[/CODE] Even better performance. Not really surprising to me, considering the Kaggle environment seems (subjectively) much snappier than Colab's. BTW, just to be clear (sorry, yet another plug), this demonstrates that my [URL="https://mersenneforum.org/showthread.php?t=24840"]Reverse SSH Tunnel Service[/URL] works on Kaggle as well. I could *really* use a couple more beta testers of this, to catch any "edge-cases". I need this to be rock-solid before I start having students try to use it. And, as an enticement... I think I might have figured out a way to offer persistent storage for Colab and Kaggle instances, without needing access to a Google Drive. Early testers of my Tunnel service will get first beta access to the storage... :smile:
[QUOTE=mnd9;528046]Is there anything to do regarding this? All I've been doing is running mfaktc.[/QUOTE]
Your access to this service is entirely at the sufferance of Google. Today my two Google Accounts on my main workstation can't get GPU instances. My virtual self, shivering in a machine room somewhere in the Eastern USA, on the other hand, is having no problems. Just give it eight hours or so, and then try again. It *might* be doing something like I've always dreamed of doing if I had the opportunity to program an elevator control system -- the more someone presses the button, the longer it takes for the elevator to arrive... :wink:
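Purely for fun, that hypothetical button-penalty policy (my own sketch -- nothing Google has documented) might look like:

```python
def penalty_delay(base_seconds: int, presses: int) -> int:
    """Hypothetical policy: each extra button press doubles the wait."""
    return base_seconds * 2 ** max(presses - 1, 0)

print(penalty_delay(30, 1))  # 30: one polite press, normal wait
print(penalty_delay(30, 4))  # 240: three extra presses, eight times the wait
```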
Kaggle
I suggest Kaggle gets its own thread and Kaggle-specific posts from here get moved.
[QUOTE=chalsall;528072]
It *might* be doing something like I've always dreamed of doing if I had the opportunity to program an elevator control system -- the more someone presses the button, the longer it takes for the elevator to arrive... :wink:[/QUOTE] "Soup Nazi" automation?
Another update to the BOINC script. One thing I noticed is that with the old code, the account key is in plain sight while the program is running. This is potentially undesirable (unless you want as many credits as possible from others connecting to the same project with that key). To correct this, I used getpass for the account key, which hides it as it is typed:
[CODE]#@title BOINC test
import os.path
import subprocess
import getpass

# Use apt-get to get boinc
!apt-get update
!apt-get install boinc boinc-client

# cp boinc, boinccmd to working directory
!cp /usr/bin/boinc /content
!cp /usr/bin/boinccmd /content

# create a slots directory if it doesn't exist (otherwise boinc doesn't work)
if not os.path.exists('/content/slots'):
  !mkdir slots

# launch the client and attach to a project as desired
project = input("Enter a URL of a BOINC project: ")
# we hide the account key with getpass for security purposes
acct_key = getpass.getpass("Enter your account key: ")
if not os.path.exists('/content/slots/0'):
  subprocess.run(['boinc', "--attach_project", project, acct_key])
else:
  subprocess.run('boinc')[/CODE]