mersenneforum.org  

mersenneforum.org > Great Internet Mersenne Prime Search > Hardware > Cloud Computing

2019-10-18, 10:21   #364
kriesel ("TF79LL86GIMPS96gpu17", Mar 2017, US midwest)

Quote:
Originally Posted by LaurV
Well, we should try to "come up with" running cudaLucas on it.
K80 is a waste if used for TF. This card is flying like a rocket at LL.
Well, leaving it idle is a waste.
Mfaktc kicks out about 400 GHzD/day on a K80. EACH Colab K80.
But it's my understanding that CUDALucas on Colab has been done.
And even better, so has gpuowl.
Just not by some of us.
Yet.
See post 29 for gpuowl, 178 for cudalucas.
2019-10-18, 10:47   #365
axn (Jun 2003)

Quote:
Originally Posted by LaurV
here the pain is to store and retrieve the checkpoint files
Use the drive, Luke
2019-10-18, 12:35   #366
kriesel ("TF79LL86GIMPS96gpu17", Mar 2017, US midwest)

Quote:
Originally Posted by axn
BTDT. Got 3 DCs out of it. Major pain; Colab gets a conniption if you run it for long, and then you don't get a GPU instance for a while.
Pretty fast, though. I estimated that if you run it full time, you could get about 60 GHzD/day, which is pretty much in line with https://www.mersenne.ca/cudalucas.php
meh. Code it as mprime in the foreground via primenet, with gpuowl as a subprocess. If the subprocess fails, oh well, try again next time around; meanwhile mprime makes a little progress. Plus, hey, it's free, except for a few clicks & copy/paste every 12 hours plus delta. (Cue chalsall: "Never send a human to do a machine's job." Who's up for scripting the restarts too, with something like Winbatch?)
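A restart wrapper along those lines could be scripted in Python rather than Winbatch; this is just a sketch, and the gpuowl command shown at the bottom is a hypothetical placeholder:

```python
import subprocess
import time

def keep_running(cmd, max_restarts=10, delay=5):
    """Relaunch cmd each time it dies, up to max_restarts launches.

    Returns the number of launches performed. A zero exit status is
    treated as a deliberate stop and ends the loop early.
    """
    launches = 0
    while launches < max_restarts:
        launches += 1
        result = subprocess.run(cmd)
        if result.returncode == 0:
            break              # clean exit: don't relaunch
        time.sleep(delay)      # brief pause before trying again
    return launches

# Hypothetical invocation; adjust path and options to your setup:
# keep_running(["/usr/local/bin/gpuowl.exe", "-use", "ORIG_X2"])
```

On Colab this would still be subject to the session limits, so it only automates restarts within one session.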

It's all "Just for Fun" (tm). When it stops being fun, do something else for a while.
https://primes.utm.edu/bios/page.php?lastname=Woltman

Last fiddled with by axn on 2019-10-18 at 17:50 Reason: fix quote
2019-10-18, 12:45   #367
pinhodecarlos ("Carlos Pinho", Oct 2011, Milton Keynes, UK)

Quote:
Originally Posted by kriesel
Well, leaving it idle is a waste.
Mfaktc kicks out about 400 GHzD/day on a K80. EACH Colab K80.
But it's my understanding that CUDALucas on Colab has been done.
And even better, so has gpuowl.
Just not by some of us.
Yet.
See post 29 for gpuowl, 178 for cudalucas.
It’s not a waste, it’s avoided energy, which is good for Greta Thunberg.
2019-10-18, 16:38   #368
ATH (Einyen, Dec 2003, Denmark)

Does anyone have a compiled version of gpuowl that works on Colab and/or Kaggle?

Has anyone tested whether it is faster than CUDALucas?
2019-10-18, 17:36   #369
chalsall ("Chris Halsall", Sep 2002, Barbados)

Quote:
Originally Posted by kriesel
Who's up for scripting the restarts too, with something like Winbatch?)
I haven't had the cycles, but has anyone explored the Kaggle API yet?
2019-10-18, 17:54   #370
axn (Jun 2003)

Quote:
Originally Posted by chalsall
I haven't had the cycles, but has anyone explored the Kaggle API yet?
Not yet, but it's on my TODO list. Launching 10 batches and harvesting their results twice a day is time-consuming.
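Even just generating the per-batch `kaggle` CLI calls would take some of the tedium out of that; a sketch in Python, where the user name and kernel slugs (`batch01`, ...) are hypothetical:

```python
def harvest_commands(user, slugs, outdir="results"):
    """Build the `kaggle kernels` CLI calls to poll each batch kernel
    and download its output into a per-batch directory."""
    cmds = []
    for slug in slugs:
        ref = f"{user}/{slug}"                 # kernel reference
        cmds.append(["kaggle", "kernels", "status", ref])
        cmds.append(["kaggle", "kernels", "output", ref,
                     "-p", f"{outdir}/{slug}"])
    return cmds

# Run them with subprocess once the kaggle CLI and API token are set up:
# import subprocess
# for cmd in harvest_commands("someuser", [f"batch{i:02d}" for i in range(1, 11)]):
#     subprocess.run(cmd, check=False)
```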
2019-10-18, 18:01   #371
kriesel ("TF79LL86GIMPS96gpu17", Mar 2017, US midwest)

Quote:
Originally Posted by kriesel
Code it as mprime in the foreground via primenet, gpuowl as a subprocess.
Oops, wrong terminology. It's background and foreground.
2019-10-18, 18:07   #372
kriesel ("TF79LL86GIMPS96gpu17", Mar 2017, US midwest)

Quote:
Originally Posted by ATH
Does anyone have a compiled version of gpuowl that works on Colab and/or Kaggle?

Has anyone tested whether it is faster than CUDALucas?
Haven't done it myself on Colab yet, or anything at all on Kaggle, but Mihai does his gpuowl development on Linux, so the makefile should work well. Git clone, make, then optionally create a gpuowl config.txt. Then copy over from the Colab VM to a Google Drive folder, and (re)use like other Colab GPU apps.
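Spelled out as a hypothetical Colab cell (assuming Mihai Preda's gpuowl repository on GitHub; the Drive path is illustrative):

```shell
# Build gpuowl from source on the Colab VM, then stash the binary on
# Drive so later sessions can reuse it without rebuilding.
git clone https://github.com/preda/gpuowl.git
cd gpuowl
make
mkdir -p "/content/drive/My Drive/gpuowl-master"
cp gpuowl "/content/drive/My Drive/gpuowl-master/"
```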

Direct GPU testing here on Windows has shown gpuowl is usually slightly faster than CUDALucas on the same GTX 10xx GPU.

Last fiddled with by kriesel on 2019-10-18 at 18:11
2019-10-19, 16:47   #373
Fan Ming (Oct 2019)

Quote:
Originally Posted by ATH
Does anyone have a compiled version of gpuowl that works on Colab and/or Kaggle?

Has anyone tested whether it is faster than CUDALucas?
I succeeded in compiling gpuowl on Colab after solving many compilation errors.
Attached is the compiled executable.
Steps for using this executable on Google Colab (skip if you already know):
1. Create a folder on your Google Drive named "gpuowl-master".
2. Upload the attached executable "gpuowl.exe" to this folder.
3. Use this ipynb code (don't forget to turn on GPU acceleration):
Code:
import os.path
from google.colab import drive

# Mount Google Drive if it isn't mounted already.
if not os.path.exists('/content/drive/My Drive'):
  drive.mount('/content/drive')

%cd '/content/drive/My Drive/gpuowl-master/'

# Copy the executable onto the PATH and mark it executable.
!cp 'gpuowl.exe' /usr/local/bin/
!chmod 755 '/usr/local/bin/gpuowl.exe'

!/usr/local/bin/gpuowl.exe -use ORIG_X2
It seems to work well; here is the output (manually stopped after I saw it running):
Code:
/content/drive/My Drive/gpuowl-master
2019-10-19 16:31:38 gpuowl 
2019-10-19 16:31:38 Note: no config.txt file found
2019-10-19 16:31:38 config: -use ORIG_X2 
2019-10-19 16:31:38 77936867 FFT 4608K: Width 256x4, Height 64x4, Middle 9; 16.52 bits/word
2019-10-19 16:31:38 OpenCL args "-DEXP=77936867u -DWIDTH=1024u -DSMALL_HEIGHT=256u -DMIDDLE=9u -DWEIGHT_STEP=0x1.65cdc45f71f4cp+0 -DIWEIGHT_STEP=0x1.6e52dd530031p-1 -DWEIGHT_BIGSTEP=0x1.306fe0a31b715p+0 -DIWEIGHT_BIGSTEP=0x1.ae89f995ad3adp-1 -DORIG_X2=1  -I. -cl-fast-relaxed-math -cl-std=CL2.0"
2019-10-19 16:31:40 

2019-10-19 16:31:40 OpenCL compilation in 1453 ms
2019-10-19 16:31:50 77936867 OK     1000   0.00%; 3929 us/sq; ETA 3d 13:04; 9711fce020e74461 (check 2.15s)
2019-10-19 16:32:39 Stopping, please wait..
2019-10-19 16:32:41 77936867 OK    13500   0.02%; 3961 us/sq; ETA 3d 13:44; 350e9c68bedf46b6 (check 2.18s)
2019-10-19 16:32:41 Exiting because "stop requested"
2019-10-19 16:32:41 Bye
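As a sanity check, the per-squaring time in that log is consistent with the printed ETA; a PRP test needs roughly one modular squaring per bit of the exponent:

```python
# Numbers taken from the gpuowl log above.
exponent = 77_936_867          # one modular squaring per exponent bit
us_per_squaring = 3929         # microseconds, from the "us/sq" field

total_seconds = exponent * us_per_squaring / 1_000_000
days = total_seconds / 86_400
print(f"about {days:.2f} days")   # close to the logged "ETA 3d 13:04"
```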

Last fiddled with by Uncwilly on 2019-10-19 at 19:20
2019-10-19, 17:33   #374
xx005fs ("Eric", Jan 2018, USA)

Quote:
Originally Posted by ATH
Does anyone have a compiled version of gpuowl that works on Colab and/or Kaggle?

Has anyone tested whether it is faster than CUDALucas?
Any gpuowl executable compiled on Linux against the ROCm or Nvidia driver (the latter I haven't tested) works just fine with Colab. Since I personally had a Linux system with ROCm, compilation was as simple as invoking make in the terminal. I simply popped the executable and a worktodo.txt file into a folder on Google Drive, and then it was crunching happily on those Nvidia GPUs.

I am having a lot of trouble finding CUDALucas Linux executables, nor could I find the source code. It would be great to test the speed of CUDALucas on those K80s to compare with gpuowl's performance.

Last fiddled with by xx005fs on 2019-10-19 at 17:35