[QUOTE=LaurV;528271]Well, we should try to "come up with" running cudaLucas on it.:razz:
K80 is a [SIZE=6][COLOR=Red][B]waste[/B][/COLOR][/SIZE] if used for TF. This card is flying like a rocket at LL.[/QUOTE] Well, [U][I][B][COLOR=DarkOrange]leaving it idle[/COLOR] [/B][/I][/U]is a waste. Mfaktc kicks out about 400 GHzD/day on a K80. EACH Colab K80. But it's my understanding CUDALucas on Colab has been done. And even better, so has gpuowl. Just not by some of us. Yet. See post 29 for gpuowl, post 178 for CUDALucas.
[QUOTE=LaurV;528283]here the pain is to store and retrieve the checkpoint files[/QUOTE]
Use the Drive, Luke.
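"Use the Drive" in practice: after mounting Google Drive in Colab (drive.mount('/content/drive')), periodically copy the worker's checkpoint files from the scratch VM into a Drive folder so they survive the VM being reclaimed. A minimal sketch, assuming hypothetical directory names and a hypothetical ".ckpt" suffix (neither gpuowl nor CUDALucas mandates these exact names):

```python
import os
import shutil

def sync_checkpoints(work_dir, drive_dir, suffix=".ckpt"):
    """Copy every checkpoint file from the scratch VM into persistent storage."""
    os.makedirs(drive_dir, exist_ok=True)
    copied = []
    for name in sorted(os.listdir(work_dir)):
        if name.endswith(suffix):
            # copy2 preserves timestamps, so the newest checkpoint stays identifiable
            shutil.copy2(os.path.join(work_dir, name),
                         os.path.join(drive_dir, name))
            copied.append(name)
    return copied
```

Call it every few minutes from whatever loop drives the worker; on a fresh VM, copy in the opposite direction before starting, and the test resumes where the last session died.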
[QUOTE=axn;528275]BTDT. Got 3 DCs out of it. Major pain; Colab gets a conniption if you run it for long, and then you don't get GPU instance for a while.
Pretty fast, though. I estimated that if you run it full time, you could get about 60 GhzDay/day, which is pretty much in line with [URL]https://www.mersenne.ca/cudalucas.php[/URL] [/QUOTE]meh. Code it as mprime in the foreground via primenet, gpuowl as a subprocess. If the subprocess fails, oh well, try again next time around, and meanwhile mprime makes a little progress. Plus hey, it's free, except for a few clicks & copy/paste every 12 hours plus delta. (Cue chalsall: "Never send a human to do a machine's job." Who's up for scripting the restarts too, by something like Winbatch?) It's all "Just for Fun" (tm). When it stops being fun, do something else for a while. [URL]https://primes.utm.edu/bios/page.php?lastname=Woltman[/URL]
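Scripting the restarts doesn't need Winbatch; a few lines of Python will do. A hedged sketch of the idea: keep relaunching the worker whenever it exits, which is roughly what a Colab session needs when the GPU instance comes and goes. The gpuowl command line shown is a placeholder, not the real invocation for any particular setup.

```python
import subprocess
import time

def supervise(cmd, max_restarts=3, pause=5):
    """Run cmd; if it exits nonzero (crash, VM reset), wait and relaunch."""
    for attempt in range(max_restarts):
        rc = subprocess.run(cmd).returncode
        if rc == 0:          # clean exit: work finished, stop supervising
            return rc
        time.sleep(pause)    # brief backoff before the next relaunch
    return rc

# supervise(["./gpuowl", "-use", "ORIG_X2"])  # hypothetical invocation
```

On Colab the same loop can live in a notebook cell; the part no script fixes is re-authenticating when Google takes the instance away.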
[QUOTE=kriesel;528285]Well, [U][I][B][COLOR=DarkOrange]leaving it idle[/COLOR] [/B][/I][/U]is a waste.
Mfaktc kicks out about 400 GHzD/day on a K80. EACH Colab K80. But it's my understanding CUDALucas on Colab has been done. And even better, so has gpuowl. Just not by some of us. Yet. See post 29 for gpuowl, post 178 for CUDALucas.[/QUOTE] It’s not a waste, it’s avoided energy, which is good for Greta Thunberg.
Does anyone have a compiled version of gpuowl that works on Colab and/or Kaggle?
Has anyone tested whether it is faster than CUDALucas?
[QUOTE=kriesel;528290]Who's up for scripting the restarts too, by something like Winbatch?)[/QUOTE]
I haven't had the cycles, but has anyone explored the [URL="https://github.com/Kaggle/kaggle-api"]Kaggle API[/URL] yet?
[QUOTE=chalsall;528310]I haven't had the cycles, but has anyone explored the [URL="https://github.com/Kaggle/kaggle-api"]Kaggle API[/URL] yet?[/QUOTE]
Not yet, but it is on my TODO list. Launching 10 batches and harvesting their results twice a day is time-consuming.
[QUOTE=kriesel;528290]Code it as mprime in the foreground via primenet, gpuowl as a subprocess.[/QUOTE]Oops, wrong terminology. It's background and foreground.
[QUOTE=ATH;528301]Does anyone have a compiled version of gpuowl that works on Colab and/or Kaggle?
Has anyone tested whether it is faster than CUDALucas?[/QUOTE]Haven't done it myself on Colab yet, or anything at all in Kaggle, but Mihai does his gpuowl development on Linux, so the makefile should work well. Git clone, make, then optionally create a gpuowl config.txt. Then copy over from the Colab VM to a Google Drive folder, and (re)use like other Colab GPU apps. Direct GPU testing here on Windows has shown gpuowl is usually [U]slightly[/U] faster than CUDALucas on the same GTX10xx GPU.
[QUOTE=ATH;528301]Does anyone have a compiled version of gpuowl that works on Colab and/or Kaggle?
Has anyone tested whether it is faster than CUDALucas?[/QUOTE] I succeeded in compiling gpuowl on Colab after solving many compilation errors. Attached is the compiled executable. Steps for using this executable on Google Colab (skip if you already know):
1. Create a folder on Google Drive named "gpuowl-master".
2. Upload the attached executable "gpuowl.exe" to this folder.
3. Run this ipynb code (don't forget to turn the GPU accelerator on):
[CODE]import os.path
from google.colab import drive
if not os.path.exists('/content/drive/My Drive'):
    drive.mount('/content/drive')
%cd '/content/drive/My Drive/gpuowl-master/'
!cp 'gpuowl.exe' /usr/local/bin/
!chmod 755 '/usr/local/bin/gpuowl.exe'
!/usr/local/bin/gpuowl.exe -use ORIG_X2[/CODE]
It seems to work well; here is the output (stopped manually once I saw it was running):
[CODE]/content/drive/My Drive/gpuowl-master
2019-10-19 16:31:38 gpuowl
2019-10-19 16:31:38 Note: no config.txt file found
2019-10-19 16:31:38 config: -use ORIG_X2
2019-10-19 16:31:38 77936867 FFT 4608K: Width 256x4, Height 64x4, Middle 9; 16.52 bits/word
2019-10-19 16:31:38 OpenCL args "-DEXP=77936867u -DWIDTH=1024u -DSMALL_HEIGHT=256u -DMIDDLE=9u -DWEIGHT_STEP=0x1.65cdc45f71f4cp+0 -DIWEIGHT_STEP=0x1.6e52dd530031p-1 -DWEIGHT_BIGSTEP=0x1.306fe0a31b715p+0 -DIWEIGHT_BIGSTEP=0x1.ae89f995ad3adp-1 -DORIG_X2=1 -I. -cl-fast-relaxed-math -cl-std=CL2.0"
2019-10-19 16:31:40
2019-10-19 16:31:40 OpenCL compilation in 1453 ms
2019-10-19 16:31:50 77936867 OK  1000 0.00%; 3929 us/sq; ETA 3d 13:04; 9711fce020e74461 (check 2.15s)
2019-10-19 16:32:39 Stopping, please wait..
2019-10-19 16:32:41 77936867 OK 13500 0.02%; 3961 us/sq; ETA 3d 13:44; 350e9c68bedf46b6 (check 2.18s)
2019-10-19 16:32:41 Exiting because "stop requested"
2019-10-19 16:32:41 Bye[/CODE]
[QUOTE=ATH;528301]Does anyone have a compiled version of gpuowl that works on Colab and/or Kaggle?
Has anyone tested whether it is faster than CUDALucas?[/QUOTE] Any gpuowl executable compiled on Linux against the ROCm or Nvidia driver (the latter I haven't tested) works just fine with Colab. Since I personally had a Linux system with ROCm, compilation was as simple as invoking make in the terminal. I simply popped the executable and a worktodo.txt file into a folder on Google Drive, and now it's crunching happily on those Nvidia GPUs. I am having a lot of trouble finding CUDALucas Linux executables, nor could I find the source code. It would be great to test the speed of CUDALucas on those K80s against gpuowl's performance.
Powered by vBulletin® Version 3.8.11
Copyright ©2000 - 2021, Jelsoft Enterprises Ltd.