mersenneforum.org (https://www.mersenneforum.org/index.php)
-   Cloud Computing (https://www.mersenneforum.org/forumdisplay.php?f=134)
-   -   Google Diet Colab Notebook (https://www.mersenneforum.org/showthread.php?t=24646)

bayanne 2019-11-05 06:53

Just noticed that I have been allocated a Tesla P100-PCIE-16GB
1120 GHz-days/day

Nice!

axn 2019-11-05 07:46

[QUOTE=bayanne;529683]Just noticed that I have been allocated a Tesla P100-PCIE-16GB
1120 GHz-days/day

Nice![/QUOTE]

P100s and K80s are wasted in TF. They are much better suited to LL. :two cents:

Incidentally, a P100 can complete a 50m DC in about 12 hrs!

xx005fs 2019-11-05 08:03

[QUOTE=bayanne;529683]Just noticed that I have been allocated a Tesla P100-PCIE-16GB
1120 GHz-days/day

Nice![/QUOTE]

Oh please don't use the P100 for TF, it's not even as fast as a T4. Use them for P-1 or PRP, much better.

bayanne 2019-11-05 09:08

Give me simple instructions to use them in P-1 or PRP, then I will use them.

It was not me that picked the model of Tesla to use :)

axn 2019-11-05 09:48

For an LL test:

1) Build CUDALucas from source or use someone's prebuilt executable.
Source available at [url]https://sourceforge.net/p/cudalucas/code/HEAD/tree/trunk/[/url]
Change the makefile to use [C]--generate-code arch=compute_60,code=sm_60[/C] (instead of 35; the P100 is compute capability 6.0)
2) Run cufftbench and threadbench
3) Create a worktodo with a manual assignment from mersenne.org
4) ????
5) Profit

I'm assuming you know how to use your google drive to host the files?
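In a Colab cell, those steps might look roughly like the sketch below. This is an illustration only: the Drive folder name is hypothetical, the sed edit assumes the stock makefile's sm_35 line, and the exact benchmark arguments vary by CUDALucas version, so check `CUDALucas -h` before relying on them.

```shell
# Work inside a Drive folder so the build and save files survive session resets
# (the folder name "cudalucas" is just an example)
cd /content/drive/MyDrive/cudalucas

# 1) Build for the P100: compute capability 6.0, hence sm_60 instead of the default sm_35
sed -i 's/compute_35,code=sm_35/compute_60,code=sm_60/' Makefile
make

# 2) Benchmark FFT sizes and thread counts for this GPU
#    (argument forms are assumptions -- see CUDALucas -h for the exact syntax)
./CUDALucas -cufftbench 1024 8192 5
./CUDALucas -threadbench 1024 8192 5 0

# 3) Paste a manual assignment from mersenne.org into worktodo.txt, then run
./CUDALucas
```

Building in the instance itself (rather than uploading a binary) also guarantees the executable links against the CUDA runtime Colab actually provides.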

bayanne 2019-11-05 09:52

[QUOTE=axn;529690]For an LL test:

1) Build CUDALucas from source or use someone's prebuilt executable.
Source available at [url]https://sourceforge.net/p/cudalucas/code/HEAD/tree/trunk/[/url]
Change the makefile to use [C]--generate-code arch=compute_60,code=sm_60[/C] (instead of 35; the P100 is compute capability 6.0)
2) Run cufftbench and threadbench
3) Create a worktodo with a manual assignment from mersenne.org
4) ????
5) Profit

I'm assuming you know how to use your google drive to host the files?[/QUOTE]

I am using a Mac, not Linux or Windows

I am running TF on an exponent from 76 to 77 bits, which will take about 3 hrs 30 mins and accrue about 153 GHz-days. Running a 50M DC would accrue about 92 GHz-days in about 12 hours.
I will stick with TF
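As a sanity check on those credit figures, the implied GHz-days-per-day rates (plain arithmetic on the numbers quoted above) are:

```python
# Credit throughput implied by the figures in the post above
tf_rate = 153 / 3.5 * 24   # 76->77 bit TF: GHz-days of credit per day
dc_rate = 92 / 12 * 24     # 50M DC: GHz-days of credit per day
print(round(tf_rate), round(dc_rate))  # → 1049 184
```

So TF pays roughly 5-6x more credit per day on this card, which is the trade-off being weighed here; the earlier replies argue that the P100's strong double-precision hardware is nonetheless better spent on LL/PRP work.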

axn 2019-11-05 09:55

[QUOTE=bayanne;529691]I am using a Mac, not Linux or Windows[/QUOTE]

You should do the build directly in the colab instance. Just upload all the source files to your google drive. Connect to it in the colab notebook and make. That way it will link with the correct runtime and libraries as well.

bayanne 2019-11-05 09:56

[QUOTE=axn;529692]You should do the build directly in the colab instance. Just upload all the source files to your google drive. Connect to it in the colab notebook and make. That way it will link with the correct runtime and libraries as well.[/QUOTE]

Sorry, I don't class this as 'simple'

axn 2019-11-05 10:02

[QUOTE=bayanne;529693]Sorry, I don't class this as 'simple'[/QUOTE]

Understood.

Uncwilly 2019-11-05 14:56

[QUOTE=bayanne;529693]Sorry, I don't class this as 'simple'[/QUOTE]
:iws:
#meeptoo

kriesel 2019-11-05 15:34

GPU model

So far here, running 2 instances as often as I can get the necessary backend: Tesla K80 almost always, P100 once, T4 not yet. As Murphy would have it, the lone P100 occurrence landed on the mfaktc instance, not the gpuowl instance.

