Just noticed that I have been allocated a Tesla P100-PCIE-16GB
1,120 GHz-days/day. Nice!
[QUOTE=bayanne;529683]Just noticed that I have been allocated a Tesla P100-PCIE-16GB
1,120 GHz-days/day. Nice![/QUOTE] P100s and K80s are wasted on TF; they are much better suited to LL. :two cents: Incidentally, a P100 can complete a 50M double-check in about 12 hrs!
[QUOTE=bayanne;529683]Just noticed that I have been allocated a Tesla P100-PCIE-16GB
1,120 GHz-days/day. Nice![/QUOTE] Oh, please don't use the P100 for TF; it's not even as fast as a T4 there. Use it for P-1 or PRP instead, much better.
Give me simple instructions to use them for P-1 or PRP, and then I will use them.
It wasn't me who picked the model of Tesla to use :)
For an LL test:
1) Build CUDALucas from source or use someone's prebuilt executable. Source is available at [url]https://sourceforge.net/p/cudalucas/code/HEAD/tree/trunk/[/url]. Change the makefile to use [C]--generate-code arch=compute_60,code=sm_60[/C] (instead of 35).
2) Run cufftbench and threadbench.
3) Create a worktodo file with a manual assignment from mersenne.org.
4) ????
5) Profit

I'm assuming you know how to use your Google Drive to host the files?
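Step 1 above can be sketched as a Colab shell cell. This is only a sketch under assumptions: the checkout URL in the comments is inferred from the SourceForge link, the [C]Makefile.demo[/C] file and its flag line are stand-ins for illustration, and the real build needs the instance's network access and CUDA toolkit.

```shell
# Real sequence in a Colab cell would look like (prefix lines with ! in the
# notebook; svn URL inferred from the SourceForge link above, so verify it):
#   svn checkout https://svn.code.sf.net/p/cudalucas/code/ cudalucas
#   cd cudalucas/trunk && <edit Makefile as below> && make
# Here we only demonstrate the arch retarget on a stand-in Makefile line,
# switching the default sm_35 to sm_60 (the P100 is compute capability 6.0):
printf 'NVCCFLAGS = --generate-code arch=compute_35,code=sm_35\n' > Makefile.demo
sed -i 's/compute_35,code=sm_35/compute_60,code=sm_60/' Makefile.demo
cat Makefile.demo
# -> NVCCFLAGS = --generate-code arch=compute_60,code=sm_60
```

After a successful make, the benchmarking in step 2 generates the tuning files CUDALucas reads before you point it at your worktodo file.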
[QUOTE=axn;529690]For an LL test:
1) Build CUDALucas from source or use someone's prebuilt executable. Source is available at [url]https://sourceforge.net/p/cudalucas/code/HEAD/tree/trunk/[/url]. Change the makefile to use [C]--generate-code arch=compute_60,code=sm_60[/C] (instead of 35).
2) Run cufftbench and threadbench.
3) Create a worktodo file with a manual assignment from mersenne.org.
4) ????
5) Profit

I'm assuming you know how to use your Google Drive to host the files?[/QUOTE]
I am using a Mac, not Linux or Windows.
I am running a 76-to-77-bit TF assignment, which will take about 3 hrs 30 mins and will accrue about 153 GHz-days. Running an LL test on a 50M exponent would accrue about 92 GHz-days in about 12 hours. I will stick with TF.
[QUOTE=bayanne;529691]I am using a Mac not Linux or Windows[/QUOTE]
You should do the build directly in the Colab instance. Just upload all the source files to your Google Drive, connect to it in the Colab notebook, and run make. That way it will link against the correct runtime and libraries as well.
[QUOTE=axn;529692]You should do the build directly in the colab instance. Just upload all the source files to your google drive. Connect to it in the colab notebook and make. That way it will link with the correct runtime and libraries as well.[/QUOTE]
Sorry, I don't class this as 'simple'.
[QUOTE=bayanne;529693]Sorry I don't class this as 'simple'[/QUOTE]
Understood.
[QUOTE=bayanne;529693]Sorry I don't class this as 'simple'[/QUOTE]
:iws: #meeptoo
GPU model
So far here, I am running two instances as often as I can get the necessary backend: Tesla K80 almost always, P100 once, T4 not yet. As Murphy would have it, the lone P100 occurrence landed on the mfaktc instance, not the gpuowl instance.