mersenneforum.org (https://www.mersenneforum.org/index.php)
-   Cloud Computing (https://www.mersenneforum.org/forumdisplay.php?f=134)
-   -   Google Diet Colab Notebook (https://www.mersenneforum.org/showthread.php?t=24646)

Fan Ming 2020-06-30 03:49

[QUOTE=moebius;549377]Thanks I missed that, I will try that later. I let the DC go through before colab throws me out.


2020-06-29 19:35:51 Tesla T4-0 53537243 OK 2000000 (jacobi == -1)
[/QUOTE]

Tesla T4 is mainly used for trial factoring (it's really fast at that). It's a bit of a waste to use it for LL/PRP -- it's not much faster than a K80/P4. The fastest GPU for LL/PRP is the P100, so that's the one mainly used for LL/PRP. In fact, the speed of the K80/P4 is also okay.

BTW, try the -nospin option, otherwise the output gets excessive on Colab.
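For reference, a typical gpuowl launch with the flag mentioned above might look like this (the exponent is a placeholder, not one from this thread):

[CODE]# run a PRP test without the spinning progress indicator,
# which otherwise floods Colab's captured cell output
./gpuowl -prp 57XXXXXX -nospin[/CODE]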

moebius 2020-06-30 06:41

[QUOTE=Fan Ming;549398]Tesla T4 is mainly used for trial factoring (it's really fast at that). It's a bit of a waste to use it for LL/PRP -- it's not much faster than a K80/P4. The fastest GPU for LL/PRP is the P100, so that's the one mainly used for LL/PRP. In fact, the speed of the K80/P4 is also okay.
[/QUOTE]

Colab ended my runtime at 5:00 AM and deleted my gpuowl savefiles. However, I had saved them to my PC shortly beforehand, so Colab couldn't get me. We continue with a Tesla K80.

[SIZE="1"]2020-06-30 06:16:48 Tesla K80-0 OpenCL compilation in 0.01 s
2020-06-30 06:16:48 Tesla K80-0 53537243 LL 10500000 loaded: daf6d21a9cf17d2b
2020-06-30 06:20:14 Tesla K80-0 53537243 LL 10600000 19.80%; 2064 us/it; ETA 1d 00:37; 8d6c9cba23eac3a2
2020-06-30 06:23:42 Tesla K80-0 53537243 LL 10700000 19.99%; 2074 us/it; ETA 1d 00:41; 6b9075e74a24a356
2020-06-30 06:27:09 Tesla K80-0 53537243 LL 10800000 20.17%; 2075 us/it; ETA 1d 00:38; 335defcdc4c59d52
2020-06-30 06:30:37 Tesla K80-0 53537243 LL 10900000 20.36%; 2075 us/it; ETA 1d 00:34; ee962f85f121fae0
2020-06-30 06:34:04 Tesla K80-0 53537243 LL 11000000 20.55%; 2075 us/it; ETA 1d 00:31; b6a6e068b4a5fbb9
2020-06-30 06:37:32 Tesla K80-0 53537243 LL 11100000 20.73%; 2075 us/it; ETA 1d 00:28; dfbd758a5964c750
2020-06-30 06:37:32 Tesla K80-0 53537243 OK 11000000 (jacobi == -1)[/SIZE]

axn 2020-06-30 14:43

A wild EPYC appears
 
[CODE]Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 2
On-line CPU(s) list: 0,1
Thread(s) per core: 2
Core(s) per socket: 1
Socket(s): 1
NUMA node(s): 1
Vendor ID: AuthenticAMD
CPU family: 23
Model: 49
Model name: AMD EPYC 7B12
Stepping: 0
CPU MHz: 2250.000
BogoMIPS: 4500.00
Hypervisor vendor: KVM
Virtualization type: full
L1d cache: 32K
L1i cache: 32K
L2 cache: 512K
L3 cache: 16384K
NUMA node0 CPU(s): 0,1
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology nonstop_tsc cpuid extd_apicid tsc_known_freq pni pclmulqdq ssse3 fma cx16 sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand hypervisor lahf_lm cmp_legacy cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw topoext ssbd ibrs ibpb stibp vmmcall fsgsbase tsc_adjust bmi1 avx2 smep bmi2 rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 clzero xsaveerptr arat npt nrip_save umip rdpid
[/CODE]
Freshly caught EPYC.

Appears to be about half as fast as Xeons :-(

PhilF 2020-06-30 15:08

[QUOTE=axn;549441]Freshly caught EPYC.

Appears to be about half as fast as Xeons :-([/QUOTE]

So that one you throw back if you catch one. Thanks for the heads up.

chalsall 2020-06-30 16:03

[QUOTE=PhilF;549442]So that one you throw back if you catch one. Thanks for the heads up.[/QUOTE]

LOL... This really is a bit like fishing and is a function of the type of work being done. (As opposed to phishing, which is an entirely different space...)

kriesel 2020-06-30 19:01

[QUOTE=moebius;549403]Colab ended my runtime at 5:00 AM and deleted my gpuowl savefiles. However, I had saved this on my PC shortly beforehand so that colab couldn't get me.[/QUOTE]Have you considered using google drive as your working directory? Automatic save from one session to the next.
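kriesel's suggestion can be sketched as a short Colab cell (the gpuowl-master path is an assumption mirroring a typical setup, not something prescribed by Colab; adjust it to your own layout):

[CODE]from google.colab import drive
import os.path

# Mount Google Drive once per session; files written under
# /content/drive/My Drive survive runtime resets, unlike
# the ephemeral /content directory.
if not os.path.exists('/content/drive/My Drive'):
    drive.mount('/content/drive')

# Work directly inside the Drive folder so gpuowl writes its
# checkpoints there instead of under /content.
%cd '/content/drive/My Drive/gpuowl-master/'[/CODE]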

moebius 2020-06-30 23:41

[QUOTE=kriesel;549463]Have you considered using google drive as your working directory? Automatic save from one session to the next.[/QUOTE]

Sure, I run gpuowl in /usr/local/bin. The savefiles are apparently written to a subdirectory of the /content directory. You first have to find out which directory Google Colab defines as the [B]working directory[/B], apparently /content/drive. It also appears that Colab occasionally drops the connection at runtime so that the file browser can't be used, which can be worked around simply by opening the page in a new tab.
In any case, the program is still running, even if Google Inc. thinks it can attract paying customers with its :poop: restrictions on its second-class commercial hardware.


[SIZE="1"]2020-06-30 23:16:25 Tesla K80-0 53537243 OK 38500000 (jacobi == -1)
2020-06-30 23:19:45 Tesla K80-0 53537243 LL 38700000 72.29%; 1999 us/it; ETA 0d 08:14; 3ae8c4a0dce1f3c9
2020-06-30 23:23:05 Tesla K80-0 53537243 LL 38800000 72.47%; 2000 us/it; ETA 0d 08:11; a2843afc77614d8e
2020-06-30 23:26:25 Tesla K80-0 53537243 LL 38900000 72.66%; 1999 us/it; ETA 0d 08:08; f08fc566607b0c59[/SIZE]

moebius 2020-07-01 08:59

[QUOTE=kriesel;549463]Have you considered using google drive as your working directory? Automatic save from one session to the next.[/QUOTE]

Sorry for the foul language in my last post. I tried to modify the Python program so that it runs from Google Drive. However, the savefile folders are still created in the /content/ directory and are deleted after the runtime ends anyway. I am not very familiar with Python and Colab, only a bit with C/C++ and Linux. Do you have a hint for me?


[CODE]import os.path
from google.colab import drive
if not os.path.exists('/content/drive/My Drive'):
    drive.mount('/content/drive')
%cd '/content/drive/My Drive/gpuowl-master/'
!chmod 755 '/content/drive/My Drive/gpuowl-master/gpuowl.exe'
!cd '.' && /content/drive/My\ Drive/gpuowl-master/gpuowl.exe -prp 333XXXXXX[/CODE]

Oh yes, and this time I was assigned a Tesla P100.

[SIZE="1"]2020-07-01 07:35:07 Tesla P100-PCIE-16GB-0 OpenCL compilation in 0.01 s
2020-07-01 07:35:09 Tesla P100-PCIE-16GB-0 333XXXXXX OK 52300000 loaded: blockSize 400, 160953a7a67244b6
2020-07-01 07:35:14 Tesla P100-PCIE-16GB-0 333XXXXXX OK 52300800 15.67%; 3430 us/it; ETA 11d 04:06; 7b9b2b946e50140d (check 1.79s)
2020-07-01 07:40:56 Tesla P100-PCIE-16GB-0 333XXXXXX OK 52400000 15.70%; 3432 us/it; ETA 11d 04:09; e77842f74a72a9a5 (check 1.79s)
2020-07-01 07:46:41 Tesla P100-PCIE-16GB-0 333XXXXXX OK 52500000 15.73%; 3431 us/it; ETA 11d 04:01; 0c2480f5c0259489 (check 1.79s)[/SIZE]

bayanne 2020-07-02 06:08

Completed exponents are not being reported from Colab ...

Can you advise?

moebius 2020-07-02 09:13

I have now logged in with a different Google account and, lo and behold, the save, log, and result files are created where they belong, in [SIZE="1"]/content/drive/My Drive/gpuowl-master/[/SIZE]. I hope they stay there persistently.
Maybe something has been changed in the Colab configuration.

bayanne 2020-07-02 10:51

[QUOTE=bayanne;549592]Completed exponents are not being reported from Colab ...

Can you advise?[/QUOTE]

The P-1 factoring exponents are being accepted, but not the TF ones.


All times are UTC. The time now is 22:30.

Powered by vBulletin® Version 3.8.11
Copyright ©2000 - 2021, Jelsoft Enterprises Ltd.