#34
"Eric"
Jan 2018
USA
2²×5×11 Posts
It turns out the K80 is significantly faster than the T4 (yesterday I didn't manage to get an actual T4 instance to benchmark the speed difference). Generally I get around 4.5 ms/it with the K80, while the T4 is about 6 ms/it after it starts to throttle down. Here are the results:
K80:
Code:
2019-09-08 16:32:02 90396263 OK 27360000 30.27%; 4559 us/sq; ETA 3d 07:49; 3ec593b85e44fb66 (check 1.12s)
2019-09-08 16:35:06 90396263 OK 27400000 30.31%; 4561 us/sq; ETA 3d 07:49; ddc1e8d47986dac7 (check 1.11s)
2019-09-08 16:38:09 90396263 OK 27440000 30.36%; 4561 us/sq; ETA 3d 07:45; d7ad382a0b7037d3 (check 1.11s)
2019-09-08 16:41:13 90396263 OK 27480000 30.40%; 4560 us/sq; ETA 3d 07:42; 30103cf3945858fc (check 1.10s)
2019-09-08 16:44:17 90396263 OK 27520000 30.44%; 4561 us/sq; ETA 3d 07:39; 496945fc83650272 (check 1.11s)

T4:
Code:
2019-09-08 17:04:58 90396473 OK 27625600 30.56%; 5578 us/sq; ETA 4d 01:15; eb92599fdef067db (check 1.29s)
2019-09-08 17:06:24 90396473 OK 27640000 30.58%; 5884 us/sq; ETA 4d 06:35; ec8ff248229a7a0d (check 1.41s)
2019-09-08 17:10:27 90396473 OK 27680000 30.62%; 6038 us/sq; ETA 4d 09:11; 62cd89f65e138ebc (check 1.38s)
2019-09-08 17:14:30 90396473 OK 27720000 30.66%; 6038 us/sq; ETA 4d 09:08; f9c1164a0935332f (check 1.38s)
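As a sanity check, the ETA in each log line follows directly from the remaining squarings and the reported us/sq. A minimal sketch (numbers taken from the last K80 line above; the variable names are mine, not gpuowl's):

```python
# Recompute gpuowl's ETA from a progress line.
# Figures from the last K80 line: exponent 90396263,
# 27,520,000 squarings done, 4561 us per squaring.
exponent = 90396263      # a PRP test needs roughly `exponent` squarings
done = 27_520_000
us_per_sq = 4561

remaining = exponent - done
eta_seconds = remaining * us_per_sq / 1_000_000

days = int(eta_seconds // 86400)
hours = int(eta_seconds % 86400 // 3600)
minutes = int(eta_seconds % 3600 // 60)
print(f"ETA {days}d {hours:02d}:{minutes:02d}")  # ETA 3d 07:39, matching the log
```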
#35
Apr 2019
5·41 Posts |
#36
"Eric"
Jan 2018
USA
334₈ Posts
I personally think the fact that PRP has a reliable error-check algorithm makes it superior to LL. Secondly, gpuowl seems to run faster on Nvidia cards that are bandwidth-starved (for example, my Titan V is significantly faster on gpuowl than on CUDALucas, going from 1.12 ms/it down to 0.83 ms/it with the switch). The Tesla K80 should be no exception, since it has a 1:3 FP64:FP32 rate but relatively low memory bandwidth, and obviously I want maximum throughput. I would try CUDALucas sometime to see if it's faster than gpuowl on the K80.
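For anyone wondering why the PRP error check is considered reliable: gpuowl implements the Gerbicz check, which validates every block of squarings against a running product of checkpoint residues, so a hardware glitch is caught almost immediately. A toy sketch of the underlying identity (tiny illustrative parameters of my choosing; a real test uses a huge exponent and much larger blocks):

```python
# Toy Gerbicz check for a base-3 PRP test modulo N.
# Checkpoints u_i = 3^(2^(i*B)); running product d_n = u_0 * ... * u_n.
# The identity d_n^(2^B) * 3 == d_{n+1} (mod N) must hold unless an
# error corrupted one of the squarings in the block.
N = 2**89 - 1      # small Mersenne modulus for illustration
B = 8              # block size: squarings between checkpoints
blocks = 5

u = 3              # u_0 = 3
d = 3              # d_0 = u_0
for _ in range(blocks):
    d_prev = d
    for _ in range(B):           # B squarings to the next checkpoint
        u = u * u % N
    d = d * u % N                # fold the new checkpoint into the product
    # the check: one modular exponentiation verifies a whole block of squarings
    assert pow(d_prev, 2**B, N) * 3 % N == d, "error detected!"
print("all Gerbicz checks passed")
```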
#37
If I May
"Chris Halsall"
Sep 2002
Barbados
11085₁₀ Posts
Quote:
But... Doing some research on this, I came across another Google offering: Kaggle. I registered (using my Google account), and after authenticating by way of an SMS message I was given a very similar Notebook environment.

I pasted in my beta Notebook and clicked Run, and it happily gave me a Tesla P100-PCIE-16GB producing ~1,200 GHzD/D of TF'ing -- no changes to my code. Clicking "Commit and Run" let me launch two more batch runs, each with another P100!

The interactive session is limited to nine (9) hours, while the batch runs are limited to six (6). They also provide 5 GB of persistent storage per Notebook. Like, wow!

Last fiddled with by chalsall on 2019-09-08 at 22:50 Reason: Smelling mistake.
#38
"Eric"
Jan 2018
USA
2²×5×11 Posts
Quote:
Last fiddled with by xx005fs on 2019-09-08 at 23:23
#39
"Dylan"
Mar 2017
1001010010₂ Posts
Quote:
Well, butter my biscuit, I have created code for this. This took a bit of tinkering, but here it is: Code:
import os.path

# Use apt-get to get BOINC
!apt-get install boinc boinc-client

# Copy boinc and boinccmd to the working directory
!cp /usr/bin/boinc /content
!cp /usr/bin/boinccmd /content

# Create a slots directory if it doesn't exist (otherwise boinc doesn't work)
if not os.path.exists('/content/slots'):
    !mkdir slots

# Launch the client and attach to projects as desired (here I used NFS@home)
if not os.path.exists('/content/slots/0'):
    !boinc --attach_project https://escatter11.fullerton.edu/nfs/ (your account key here)
else:
    !boinc
#40
If I May
"Chris Halsall"
Sep 2002
Barbados
3·5·739 Posts |
Quote:
I have reached out to both Colaboratory and Kaggle, pointing them to this thread and asking whether what we're doing here is considered OK by them, and whether they have any comments or feedback.

I consider this much like running compute on an employer's or client's kit -- best to get explicit permission.
#41
Romulan Interpreter
"name field"
Jun 2011
Thailand
3·23·149 Posts |
grrr... now because of you, they will nerf it...
#42
Sep 2003
3×863 Posts |
FAQ says:
Quote:
#43
If I May
"Chris Halsall"
Sep 2002
Barbados
3·5·739 Posts |
Quote:
I could really use a couple more beta testers. If you have a GPU72 account and a Google account, and are willing to help out, please PM me. It's pretty simple. As always, you get all the credit for the work done on your behalf.
#44
"Yves"
Jul 2017
Belgium
83 Posts |
For the Kaggle users, see https://www.kaggle.com/general/108481.
From now on, GPU usage is limited to 30 hours per week.
Similar Threads
Thread | Thread Starter | Forum | Replies | Last Post
Alternatives to Google Colab | kriesel | Cloud Computing | 11 | 2020-01-14 18:45
Notebook | enzocreti | enzocreti | 0 | 2019-02-15 08:20
Computer Diet causes Machine Check Exception -- need heuristics help | Christenson | Hardware | 32 | 2011-12-25 08:17
Computer diet - Need help | garo | Hardware | 41 | 2011-10-06 04:06
Workunit diet ? | dsouza123 | NFSNET Discussion | 5 | 2004-02-27 00:42