Old 2019-12-19, 22:18   #15
kriesel
 
"TF79LL86GIMPS96gpu17"
Mar 2017
US midwest

GPU models available through Google Colab

The GPU model allocated to a session may be any of the following. There is currently no way to select a model or to indicate a GPU-model preference or requirement. (There are reports of V100s in the paid tier, but I have no data on them from the free tier.)

I think all prices below are used, except Radeon VII.

Code:
Tesla P100 https://www.techpowerup.com/gpu-specs/tesla-p100-pcie-16-gb.c2888
16GB HBM2 732 GB/sec dual-slot 250W FP64 4.763 TFLOPS (1/2) 
1175 GHzD/day TF, 173.4 LL (95M)
$2150 on eBay
indicates 0 of 16280 MiB allocated at Colab notebook launch
 
Tesla P4 https://www.techpowerup.com/gpu-specs/tesla-p4.c2879
8GB 192 GB/sec single-slot 75W FP64 178.2 GFLOPS (1/32)
512 GHzD/day TF, 32.5 LL (95M)
$1900 on eBay
indicates 0 of 7611 MiB allocated at Colab notebook launch

Tesla K80 (note: dual-GPU card; specs below are per card, not per GPU; a free Colab session gets at most one of the two GPUs, not both)
12GBx2, 240.6 GB/sec x2, dual-slot 300W FP64 1371 GFLOPS (1/3)
766.7 GHzD/day TF, 115.1 LL (95M)
$325 on eBay
indicates 0 of 11441 MiB allocated at Colab notebook launch

Tesla T4
16GB 320 GB/sec single-slot 70W FP64 254.4 GFLOPS (1/32)
2467 GHzD/day TF, 59.3 LL (95M)
$1600 on eBay
indicates 0 of 15079 MiB allocated at Colab notebook launch

Tesla V100
16GB HBM2 897 GB/sec mezzanine or dual-slot 250W FP64 7.834 TFLOPS (1/2) https://www.techpowerup.com/gpu-specs/tesla-v100-sxm2-16-gb.c3018
4162 GHzD/day TF, 221 LL (95M)
$2900 on eBay
(never seen one of these in Colab free myself)
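Since there is no way to request a model, the practical first step in a session is to check which of the models above was allocated. A minimal Python sketch of that check follows; the helper names and the substring matching are my own, the spec figures are copied from the table above (memory as MiB free at notebook launch, then TF GHzD/day, then the LL (95M) figure), and the `nvidia-smi` query is a standard invocation that should work in any Colab GPU runtime:

```python
# Sketch, not from the original post. Spec tuples: (MiB free at launch,
# TF GHzD/day, LL (95M) figure), copied from the table above.
import shutil
import subprocess

GPU_SPECS = {
    "Tesla P100": (16280, 1175, 173.4),
    "Tesla P4":   (7611,  512,   32.5),
    "Tesla K80":  (11441, 766.7, 115.1),
    "Tesla T4":   (15079, 2467,  59.3),
    # Tesla V100 omitted: the post reports no free-tier launch MiB figure for it.
}

def identify(name):
    """Match an nvidia-smi product name against the models listed above."""
    for model in GPU_SPECS:
        if model in name:
            return model
    return None

def session_gpu_name():
    """Return the GPU name reported by nvidia-smi, or None if unavailable."""
    if shutil.which("nvidia-smi") is None:
        return None
    out = subprocess.run(
        ["nvidia-smi", "--query-gpu=name", "--format=csv,noheader"],
        capture_output=True, text=True,
    )
    return out.stdout.strip() or None

if __name__ == "__main__":
    name = session_gpu_name()
    print(name, "->", identify(name) if name else "no GPU visible")
```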
Compare to:
Code:
Tesla C2075
6 GB 144 GB/sec dual-slot 247W FP64 515.2 GFLOPS (1/2)
282.2 GHzD/d TF, 22.2 LL (95M)
$80 on eBay

Radeon VII:
16 GB HBM2 1024 GB/sec dual-slot 295W FP64 3.36 TFLOPS (1/4)
1113.6 GHzD/d TF, 280.9 LL (95M); currently the PRP king
$800+ on eBay

RTX2080:
8GB 448 GB/sec dual-slot 215W FP64 314.6 GFLOPS (1/32)
2703 GHzD/d TF, 65 LL (95M)
$500 on eBay
Note: in gpuowl, use -maxAlloc m, where m is the memory limit in megabytes per gpuowl instance, g is the free megabytes on the idle GPU, n is the number of gpuowl instances per GPU, and b is a safety margin of 1000 MB (or more if there are problems at 1000): m <= (g - b)/n.
Larger values may work when running multiple instances per GPU with memlock and -pool in gpuowl v7.
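The -maxAlloc rule in the note above can be sketched as a one-line calculation (function and variable names are mine; the 1000 MB margin and the m <= (g - b)/n bound are from the note):

```python
def gpuowl_max_alloc(free_mib, instances, margin_mib=1000):
    """Per-instance -maxAlloc bound: m <= (g - b) / n, where
    g = free MiB on the idle GPU, b = safety margin (1000 MB,
    or more if problems occur), n = gpuowl instances per GPU."""
    return (free_mib - margin_mib) // instances

# e.g. a Tesla T4 showing 15079 MiB free, one gpuowl instance:
print(gpuowl_max_alloc(15079, 1))  # 14079
```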
Note 2: the above prices are as of the original post date, and have changed considerably since.


Top of this reference thread: https://www.mersenneforum.org/showthread.php?t=24839
Top of reference tree: https://www.mersenneforum.org/showpo...22&postcount=1

Last fiddled with by kriesel on 2021-06-17 at 19:58 Reason: minor edits