mersenneforum.org How to set up for running gpuOwl under Ubuntu (and other Linux) with OpenCL

2020-06-10, 23:33   #1
ewmayer
2ω=0

Sep 2002
República de California

2·5·1,163 Posts
How to set up for running gpuOwl under Ubuntu (and other Linux) with OpenCL

Moderator Note: Post #1 of this thread is intended to provide, step by step, everything needed by a user wanting to do what the thread title states, starting with the procedure for creating a Ubuntu boot-image USB. Later comments are tasked with noting specific (and hopefully small) differences needed for e.g. other Linux distros and specific GPU models, and may be folded into the OP as warranted. Post #1 will be continually maintained and updated so as to stay current.

Thanks to Xyzzy for handholding me through the boot-image procedure, M344587487 for the original version of the gpuowl-setup recipe and the Radeon VII settings-tweak shell script, and all the various forumites (preda, paulunderwood, Prime95, etc) who helped the OP with this stuff when he purchased his first GPU, a Radeon VII, early in 2020. I have only tried the recipe out on one other GPU model, a Radeon 540 in an Intel NUC; there I successfully built gpuowl but was unable to run it due to an issue of OpenCL not recognizing that GPU model. So feedback regarding whether it works - or how to make it work - with other GPUs and Linux distros is needed, and welcome.

Creating a Ubuntu boot-image USB: If you already have such a boot-image USB, you can skip to the next section. Note in the following all mount/umount/fdisk commands except of the informational kind must be done as root or using the 'sudo' prefix command.

Technical note: Both cp and dd do a faithful byte-copy of a file, thus e.g. md5/sha1 will agree between original and copy. But dd copies to address-offset 0 on the target filesystem, because that is where bootloaders expect a boot image to start. And dd copies a file as a single contiguous block, whereas cp copies to wherever it finds a good spot, and uses filesystem magic to link noncontiguous fragments into what looks like a single file to the outside world.
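The byte-copy claim is easy to check: since dd writes the image starting at offset 0, hashing the first ISO-sized chunk of the target should reproduce the ISO's checksum. A runnable sketch using scratch files standing in for the real ISO and device (swap in your actual .iso path and /dev/sdX, as root, for a real check):

```shell
ISO=$(mktemp)                     # stand-in for the downloaded .iso
DEV=$(mktemp)                     # stand-in for /dev/sdX
head -c 1048576 /dev/urandom >"$ISO"
dd if="$ISO" of="$DEV" bs=64K status=none    # the "burn"
BYTES=$(stat -c %s "$ISO")        # the device is larger than the ISO,
head -c "$BYTES" "$DEV" | md5sum  # so hash only the ISO-sized prefix
md5sum "$ISO"                     # the two checksums should match
rm -f "$ISO" "$DEV"
```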

0. Go to the list of currently-supported Ubuntu releases and download the .iso file of the one you want. In my most-recent case I grabbed the 19.10 "64-bit PC (AMD64) desktop image" .iso file, and my notes will use that as an example;

1. Insert a USB stick into an existing Linux or macOS system. Many Linux distros will auto-mount USB storage media, but for boot-disk creation we must make sure it is *not* mounted. To see the mount point, use the Linux lsblk command. E.g. on my 2015-vintage Intel NUC the USB was auto-mounted as /dev/sdb1, with mount point /media/ewmayer, an ls of which showed a files-containing directory; 'umount /dev/sdb1 /media/ewmayer' left 'ls -l /media/ewmayer' showing just . and .., with no more directory entry. You need to be careful to specify both the block device (/dev/sd*) and the specific mount point of the USB, since it is common to have multiple filesystems sharing the same block device. I'll replace my 'sdb' with a generic 'sdX' and let users properly fill in the 'X'.

2. Clear the usb stick - note this is slow and linear-time in the size of the storage medium, so it pays to use the smallest USB needed to store the ISO file. The trailing bs= option overrides the default blocksize-to-write, 512 bytes, with a much larger 1MB, which should speed things significantly:

sudo dd if=/dev/zero of=/dev/sdX bs=1M

The completion message looks scary but is simply an expected 'hit end of fs' message (note: if your system hangs for, say, more than a minute after printing the "No space left on device" message, you may need to ctrl-c it). Your numbers will be different, but in my case I saw this:

failed to open 'dev/sdb': No space left on device
31116289+0 records in
31116289+0 records out
15931539456 bytes (16 GB) copied, 3842.03 s, 4.1 MB/s [using newer 16GB USB, needed just 1566 s, 10.2 MB/s]
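As an aside, GNU dd (coreutils 8.24 and later) also accepts status=progress, which prints a running byte count during the long zero-fill instead of staying silent until the end. A scratch-file sketch of the flag (on the real stick, use of=/dev/sdX and run via sudo):

```shell
OUT=$(mktemp)
dd if=/dev/zero of="$OUT" bs=1M count=4 status=progress
stat -c %s "$OUT"    # prints 4194304: the 4 MiB just written
rm -f "$OUT"
```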

3. Use dd to copy the .iso file. As dd is a low-level utility, no re-mount of the stick filesystem is needed or wanted; my example again assumes the USB's device node is /dev/sdX, with the user supplying the 'X':

sudo dd if=[Full path to ISO file, no wildcarding permitted] of=/dev/sdX bs=1M oflag=sync

On completion, 'sudo fdisk -l /dev/sdX' shows /dev/sdX1 as bootable (the * under 'Boot') and 'Empty'. In my case it also showed a nonbootable partition at /dev/sdb2, which we can ignore:
Code:
Device     Boot    Start     End Sectors  Size Id Type
/dev/sdb1  *           0 4812191 4812192  2.3G  0 Empty
/dev/sdb2        4073124 4081059    7936  3.9M ef EFI (FAT-12/16/32)
Oddly, in the above the start of sdb2 lies inside the sdb1 range, but that appears to be ignorable. I've used the same boot-USB to install Ubuntu on multiple devices, without any problems.
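For the curious, the overlap is easy to confirm from the fdisk numbers in the table above:

```shell
# sdb1 spans sectors [0, 4812191]; sdb2 starts at 4073124, inside that range.
SDB1_END=4812191
SDB2_START=4073124
[ "$SDB2_START" -le "$SDB1_END" ] && echo "sdb2 starts inside sdb1"
```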

In my resulting files-view window the previous contents of the USB had vanished and been replaced by 'Ubuntu 19.10 amd64', which showed 10 dirs - [boot,casper,dists,EFI,install,isolinux,pics,pool,preseed,ubuntu] - and 2 files, md5sum.txt [34.8 kB] and README.diskdefines [225 bytes].

4. After copying the .iso, the USB may or may not (this is OS-dependent) end up mounted on /dev/sdX1. To be sure, unmount the filesystem with 'sudo umount /dev/sdX1'. (If it was not left so mounted, you'll simply get a "umount: /dev/sdX1: not mounted" error message.) Remove the stick from the system used to burn the .iso and, after doing any needed file backups of the target system, insert the stick into that system, reboot and, at the appropriate prompt, press <f1> to enter the Boot Options menu.

5. Fiddle the boot order in the target system BIOS to put the USB at #1 (note this may not be needed, so feel free to first try starting from here): then shut down, insert the bootable USB, power up, and use the up/down-arrow keys to scroll through the resulting boot-options menu, which includes items like "try without installing" and "install alongside existing OS installation". I chose "install now". Next it detected an existing Debian install and asked if I wanted to keep it ... this was on a mere 30GB SSD, too cramped for 2 installs, so I chose Ubuntu-only. 5 mins later: done, restarting ... "Please remove the installation medium, then press ENTER:".

6. If you fiddled the boot order in the BIOS in the preceding step, the next time you reboot, use the BIOS to move the hard drive back to #1 boot option.

Installing and setting up for gpuowl running:

o 'sudo passwd root' to set root pwd [I make same as user-pwd on my I-am-sole-user systems]
o sudo apt update
o sudo apt install -y gcc gdb python libgmp-dev git ssh openssh-server clinfo libncurses5 libnuma-dev
[some of these like gcc/gdb should already be installed as part of the OS install; optional nice-to-haves include the multitail and screen packages]
o sudo update-grub
o echo 'deb [arch=amd64] http://repo.radeon.com/rocm/apt/debian/ xenial main' | sudo tee /etc/apt/sources.list.d/rocm.list
o sudo apt update && sudo apt install rocm-dev
o Add yourself to the video group. There are 2 options for doing this:
1. The AMD ROCm installation guide suggests using the command 'sudo usermod -a -G video $LOGNAME'.
2. Should [1] fail for some reason, add yourself manually:
echo 'SUBSYSTEM=="kfd", KERNEL=="kfd", TAG+="uaccess", GROUP="video"' | sudo tee /etc/udev/rules.d/70-kfd.rules
o reboot
o git clone https://github.com/preda/gpuowl && cd gpuowl && make
['clone' only on initial setup - subsequent updates can use 'git pull' from within the existing gpuowl-dir: 'cd ~/gpuowl && git pull https://github.com/preda/gpuowl && make']

Queueing up work and reporting results:

o Read through the README.md file for basic background on running the code and various command-line options. To queue up GIMPS work, from within the gpuowl executable directory, run './tools/primenet.py -u [primenet uid] -p [primenet pwd] -w [your preferred worktype] --tasks [number of assignments to fetch] &'. This periodically runs an automated python work-management script to grab new work and report any results generated since the last such run of the script. On my R7 I generally choose '-w PRP --tasks 10'; since --tasks does not differentiate based on task type, if my current worktodo has, say, 5 p-1 jobs queued up, the work-fetch will only grab 5 new PRP assignments. I do weekly results-checkins/new-work-fetches, and even running 2 jobs per card as suggested below for the R7, each PRP assignment completes in under 40 hours, thus I want at least 5 PRP assignments queued up at all times.

o Note that for PRP and LL-test assignments needing some prior p-1 factoring, the program will automatically split the original PRP or LL assignment into 2, inserting a p-1 one ("PFactor=...") before the PRP/LL one. Thus an original worktodo.txt file consisting of 10 new PRP assignments might end up with as many as 20 assignments, consisting of 10 such PFactor/PRP pairs.
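The --tasks accounting described above, in miniature (numbers from the example in the text: 10 tasks requested, 5 assignments of any type already queued, so only 5 new PRP assignments get fetched):

```shell
TASKS=10    # value passed to --tasks
QUEUED=5    # assignments (of any type) still in worktodo.txt
echo $(( TASKS - QUEUED ))   # prints 5: new assignments actually fetched
```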
o Once the worktodo.txt file has been created and populated with 1 or more assignments, start the program: 'sudo ./gpuowl' should be all that is needed for most users. That will run 1 instance in the terminal in "live progress display" mode; to run in silent background mode or to manage more than one instance from a terminal session, prepend 'nohup' (this diverts all the ensuing screen output to the nohup.out file) and append ' &' to the program name. To target a specific device on a multi-GPU system, use the '-d [device id]' flag, with numeric device id taken from the output of the /opt/rocm/bin/rocm-smi command.

o Use -maxAlloc to avoid out-of-memory with multiple jobs per card: If you run multiple gpuowl instances per card, as suggested in general for both performance and should-one-job-crash reasons, you need to take care to add '-maxAlloc [(0.9)*(Card memory in MB)/(#instances)]' to your program-invocation command line. That limits the program instances to using at most 90% of the card HBM in total; without it, if your multiple jobs happen to find themselves in the memory-hungry stage 2 of p-1 factoring at the same time - since OpenCL does not provide a reliable "how much HBM remains available" functionality - they will combine to allocate more memory than is on the card, causing them to swap out and slow to a crawl. The default amount gpuowl uses per job (around 90% of what is available on the card in question) is well into the "diminishing returns" part of the stage 2 memory-vs-speed equation for typical modern cards having multiple gigabytes of HBM, so limiting the mem-alloc thusly should not incur a noticeable performance penalty, especially compared to the nearly-infinite performance penalty resulting from the above-described out-of-memory state.

o Another good reason to run 2 instances per card - even on cards where this does not give a total-throughput boost - is fault insurance.
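The -maxAlloc formula above fits in one line of shell arithmetic; CARD_MB below assumes a 16 GB Radeon VII and 2 instances (the OP rounds the result up to 7500 in his own R7 command lines):

```shell
CARD_MB=16384     # Radeon VII: 16 GB of HBM2
INSTANCES=2
echo $(( CARD_MB * 90 / 100 / INSTANCES ))   # prints 7372 (MB per instance)
```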
For example, shortly after midnight last night one of the 2 jobs I had running on the R7 in my Haswell system coredumped with this obscure internal-fault message:
Code:
double free or corruption (!prev)
Aborted (core dumped)
No problem - run #2 continued merrily on its way; the only hit to total-throughput was the single-digit-percentage one resulting from switching from 2-job to 1-job mode on this card. As soon as I saw what had happened on checking the status of my runs this morning, I restarted the aborted job with no problems. Had I been running just 1 job, a whole night's computing would have been lost.

Radeon VII specific:

o On the R7, to maximize throughput you want to run 2 instances per card - in my experience this gives a roughly 6-8% total-throughput boost. I find the easiest way to do this is to create 2 subdirs under the gpuowl-dir, say run0 and run1 for card 0, cd into each and use '../tools/primenet.py [above options] &' to queue up work, and '../gpuowl [-flags] -maxAlloc 7500 &' to start a run. If managing work remotely I precede each of the executable invocations with 'nohup', and use 'multitail -N 2 ~/gpuowl/run*/*log' to view the latest progress of my various runs.

o To maximize throughput per watt and keep card temperatures reasonable, you'll want to manually adjust each card's SCLK and MCLK settings - on my single-card system I get best FLOPS/Watt while avoiding huge-slowdown-levels of downclocking via the following bash script, which must be executed as root:
Code:
#!/bin/bash
# EWM: This is a basic single-GPU setup script ... customize to suit:
if [ "$EUID" -ne 0 ]; then
  echo "Radeon VII init script needs to be executed as root" && exit
fi

#Allow manual control
echo "manual" >/sys/class/drm/card0/device/power_dpm_force_performance_level
#Undervolt by setting max voltage
#               V Set this to 50mV less than the max stock voltage of your card (which varies from card to card), then optionally tune it down
echo "vc 2 1801 1010" >/sys/class/drm/card0/device/pp_od_clk_voltage
#Overclock mclk to 1200
echo "m 1 1200" >/sys/class/drm/card0/device/pp_od_clk_voltage
#Push a dummy sclk change for the undervolt to stick
echo "s 1 1801" >/sys/class/drm/card0/device/pp_od_clk_voltage
#Push everything to the card
echo "c" >/sys/class/drm/card0/device/pp_od_clk_voltage
#Put card into desired performance level
/opt/rocm/bin/rocm-smi --setsclk 3 --setfan 120
Setting SCLK = 3 rather than 4 saves ~50W with a modest ~6% timing hit; going to SCLK = 2 saves another 50W but incurs a further ~15% timing hit. If you find overclocking MCLK to 1200 is unstable (gives 'EE' error-line outputs and possibly causes the run to halt), try a lower 1150 - I've found that to be the maximum safe setting based on what works on all 4 of my R7s. You'll want to use rocm-smi to monitor the temp of your various cards and adjust the settings as needed.

o On my 3-R7 system, I use the elaborated setup script copied in the attachment to this post. Note the inline comments re. sclk and fan settings, and the actual job-start commands at end of the file, which put 2 jobs on each card. After a system reboot, I only need to do a single 'sudo bash *sh' to be up and running.

o Mihai Preda comments on running on multiple R7s:
"I find running gpuowl with -uid <16-hex-char id> much more useful than running with -d <position> .
This way the identity of the card is preserved even when swapping it around the PCIe slots.
And the script tools/device.py can be used to convert the UID to the -d "position" for rocm-smi.

Troubleshooting: To be filled in. Sample items include:

o If you installed an OpenCL-supporting GPU on a system which previously had an nVidia one, you may need to remove the nVidia drivers like so... . Such a previous-card install may also have left one or more /sys/class/drm/card* entries, which if they exist mean that the card-settings-init script above needs to have its 'card0' entries fiddled to replace 0 with the most-recently-added (= largest) index in the list of /sys/class/drm/card* entries.
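The "largest card index" selection can be scripted; shown here on sample names so the pipeline is self-contained - point the same pipeline at /sys/class/drm/card* on the affected system:

```shell
# Extract trailing digits from each card entry, sort numerically,
# keep the largest - that's the index to substitute for 'card0'.
printf '%s\n' /sys/class/drm/card0 /sys/class/drm/card1 |
  grep -o '[0-9]*$' | sort -n | tail -n 1    # prints 1
```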

o If you get a files-owned-by-root error on an attempted work-fetch using primenet.py, do 'sudo chown -R [uid]:[gid] *' and manually append the downloaded (but not written to the worktodo.txt file) assignments to worktodo.txt .

o If using 'screen' and working remotely, once gpuowl is up and running, detach screen (ctrl-a --> d) prior to logout.

o For subsequent ROCm-updates, use the following sequence:

sudo apt autoremove rocm-dev

[George adds: "I don't think these 2 are required, but I don't see how they'd hurt:
echo 'deb [arch=amd64] http://repo.radeon.com/rocm/apt/debian/ xenial main' | sudo tee /etc/apt/sources.list.d/rocm.list
]
sudo apt update
sudo apt install libncurses5 clinfo rocm-dev
[reboot]

Last fiddled with by ewmayer on 2021-02-14 at 22:59 Reason: Add 'oflag=sync' to .iso-creation [thanks, Mike]; add AMD-rec'd way to add self to video group [thanks, George]

2020-06-11, 00:21   #2
Xyzzy

"Mike"
Aug 2002

2²×13×157 Posts

Quote:
 Originally Posted by ewmayer
 sudo dd if=/dev/zero of=/dev/sdX
We only do this when we get a new USB stick. We usually use badblocks -s -t random -v -w /dev/sdX because it verifies that each memory location is valid. Most of the time you can get away with not checking the stick but we feel it is cheap insurance.
Quote:
 Originally Posted by ewmayer
 sudo dd if=[Full path to ISO file, no wildcarding permitted] of=/dev/sdX
You can greatly speed up this process by adding bs=1M to the end. Or 8M. Or something like that. By default it does the work in 512 byte chunks so there is a lot of overhead.
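Mike's point in numbers - the 512-byte default means roughly 2048 times more write calls than bs=1M for the same data:

```shell
# Writes needed to copy 16 MiB at each block size.
BYTES=$(( 16 * 1024 * 1024 ))
echo $(( BYTES / 512 ))       # prints 32768 (default 512-byte blocks)
echo $(( BYTES / 1048576 ))   # prints 16    (bs=1M)
```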

Edit1: Here is a link for people who want to create the boot USB stick on a Windows computer.

https://ubuntu.com/tutorials/tutoria...ows#1-overview

Edit2: lsblk is a much easier way to see attached block devices. It also shows you if and where the device is mounted.
Code:
$ lsblk
NAME        MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
nvme0n1     259:0    0 953.9G  0 disk
└─nvme0n1p1 259:1    0 953.9G  0 part /

<<< INSERT USB STICK >>>

Code:
$ lsblk
NAME        MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sda           8:0    1  28.9G  0 disk
└─sda1        8:1    1  28.9G  0 part /run/media/m/stuff
nvme0n1     259:0    0 953.9G  0 disk
└─nvme0n1p1 259:1    0 953.9G  0 part /

$ umount /dev/sda1

$ lsblk
NAME        MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sda           8:0    1  28.9G  0 disk
└─sda1        8:1    1  28.9G  0 part
nvme0n1     259:0    0 953.9G  0 disk
└─nvme0n1p1 259:1    0 953.9G  0 part /

2020-08-11, 03:45   #3
Prime95
P90 years forever!

Aug 2002
Yeehaw, FL

3×11×227 Posts

AMD rocm installation guide suggests this command to add yourself to the video group:

sudo usermod -a -G video $LOGNAME

2020-10-20, 03:50   #4
DrobinsonPE

Aug 2020

2³·11 Posts

I have a computer, Ryzen 3200G, that I got the windows version of GPUOWL v6.11-364 running on. See here: https://www.mersenneforum.org/showpo...postcount=2471. I removed the windows hard drive and reinstalled the linux hard drive, upgraded to Linux Mint 20 and followed the instructions above for "Installing and setting up for gpuowl running". Everything went well until the last line:

git clone https://github.com/preda/gpuowl && cd gpuowl && make

It spit out a lot of text and ended with
Code:
g++ -o gpuowl Pm1Plan.o util.o B1Accumulator.o Memlock.o log.o GmpUtil.o Worktodo.o common.o main.o Gpu.o clwrap.o Task.o checkpoint.o timeutil.o Args.o state.o Signal.o FFTConfig.o AllocTrac.o gpuowl-wrap.o sha3.o md5.o -lstdc++fs -lOpenCL -lgmp -pthread -L/opt/rocm-3.3.0/opencl/lib/x86_64 -L/opt/rocm-3.1.0/opencl/lib/x86_64 -L/opt/rocm/opencl/lib/x86_64 -L/opt/amdgpu-pro/lib/x86_64-linux-gnu -L.
/usr/bin/ld: cannot find -lOpenCL
collect2: error: ld returned 1 exit status
make: *** [Makefile:19: gpuowl] Error 1
drobinson@3200G:~/gpuowl$
From what I can tell, the make did not happen. Can someone please give me some advice on what to do next? This is the same thing that happened the first time I tried about 6 months ago, and the reason why I installed Windows on the computer in an attempt to see if it was a hardware incompatibility issue. I prefer Linux over Windows, so I am trying to get both GPUOWL and mfakto to work on Linux. I have a feeling that it is something I am doing wrong, because every time I try to learn a new program I always find all of the PEBCAK issues before I get the program working.
2020-10-20, 05:56   #5
phillipsjk

Nov 2019

2×5×7 Posts

Quote:
 Originally Posted by DrobinsonPE
 git clone https://github.com/preda/gpuowl && cd gpuowl && make ... From what I can tell, the make did not happen. Can someone please give me some advice on what to do next?

If you are going to be compiling instead of installing binaries, you often need to install development packages, which essentially just include the needed headers. Often named libraryname-dev. So if you have the opencl package installed, you may need the opencl-dev package as well.

OK, I checked: http://packages.linuxmint.com/search.php?release=any&section=any&keyword=opencl does not have opencl packages for AMD cards. What the instructions tell you to do is install the "rocm-dev" packages from a foreign repository. You should check that the version of Mint you are using is compatible with the version of Ubuntu expected by the repo.
Code:
# echo 'deb [arch=amd64] http://repo.radeon.com/rocm/apt/debian/ xenial main' | sudo tee /etc/apt/sources.list.d/rocm.list
# sudo apt update && sudo apt install rocm-dev
I believe the first line simply creates a file /etc/apt/sources.list.d/rocm.list with the contents 'deb [arch=amd64] http://repo.radeon.com/rocm/apt/debian/ xenial main'. You may want to examine your sources.list.d directory to see if the format is consistent.
The 'xenial' codename may be incorrect, causing the entry to be ignored ('main' should then possibly pull it in; may not work if you are on 'testing'). The second line tells 'apt' to update the package lists, then install rocm-dev. Did that complete properly?

Grr: just checked the apt(8) manpage on my devuan installation, looks like it got GNU-fied :P
Quote:
 Originally Posted by apt
 Much like apt itself, its manpage is intended as an end user interface and as such only mentions the most used commands and options, partly to not duplicate information in multiple places and partly to avoid overwhelming readers with a cornucopia of options and details.
Apparently checking the installation status of a package is not a common command :P My pet peeve with the GNU project is removing all of the man pages to encourage the use of the info command instead. The info command is complicated enough that I have to look up how to use it every time ("info info"), then I forget what I was looking up in the first place! (BSD has good man pages, but possibly less bleeding-edge hardware support.)

Edit: if the linking step is failing, maybe you need to install 'rocm' as well:
Code:
# sudo apt-get install rocm
Last fiddled with by phillipsjk on 2020-10-20 at 05:59 Reason: install rocm suggestion added.

2020-10-20, 07:03   #6
M344587487
"Composite as Heck"
Oct 2017

3·5·53 Posts

Quote:
 Originally Posted by DrobinsonPE
 ... From what I can tell, the make did not happen. Can someone please give me some advice on what to do next? ...
The latest ROCm changed the OpenCL library path; until the gpuowl repo accounts for the change you'll need to edit the LIBPATH line in the Makefile to include -L/opt/rocm/opencl/lib

2020-10-20, 07:53   #7
kruoli
"Oliver"
Sep 2017
Porta Westfalica, DE

2×13×19 Posts

Quote:
 Originally Posted by phillipsjk
 The info command is complicated enough that I have to look up how to use it every time "info info", then I forget what I was looking up in the first place!
Argh!
"info info" is broken on my distribution. Bad sign. Apparently that's been known since 18.04 but still not fixed for me? Anyways, I'm drifting off.

2020-10-21, 02:01   #8
DrobinsonPE

Aug 2020

2³·11 Posts

Quote:
 Originally Posted by M344587487
 The latest ROCm changed the OpenCL library path, until the gpuowl repo accounts for the change you'll need to edit the LIBPATH line in the Makefile to include -L/opt/rocm/opencl/lib
Thank you! Editing the Makefile worked. I now have a compiled gpuowl v7.0-54-g8aadeed-dirty and I can get an output from gpuowl -h. Unfortunately, I now need to figure out why it is not finding my igpu.
Code:
drobinson@3200G:~/gpuowl$ /home/drobinson/gpuowl/gpuowl
2020-10-20 18:38:21 device 0, unique id ''
2020-10-20 18:38:21 Exception gpu_error: DEVICE_NOT_FOUND clGetDeviceIDs(platforms[i], kind, 64, devices, &n) at clwrap.cpp:77 getDeviceIDs
2020-10-20 18:38:21 Bye
clinfo also does not seem to find the igpu.

Code:
drobinson@3200G:~$ clinfo
Number of platforms                  1
  Platform Name                      AMD Accelerated Parallel Processing
  Platform Vendor                    Advanced Micro Devices, Inc.
  Platform Version                   OpenCL 2.0 AMD-APP (3186.0)
  Platform Profile                   FULL_PROFILE
  Platform Extensions                cl_khr_icd cl_amd_event_callback
  Platform Extensions function suffix  AMD

  Platform Name                      AMD Accelerated Parallel Processing
Number of devices                    0
I am making progress but still have a few puzzles to figure out. Also, I was looking for GPUOWL v6.11 because it looks like v7.0 is still experimental, so I need to figure out how to clone an earlier version. The edit to the Makefile needs to be added to the first post so that the next person to follow the instructions knows what to do.

2020-10-21, 08:32   #9
M344587487
"Composite as Heck"
Oct 2017

795₁₀ Posts

Quote:
 Originally Posted by DrobinsonPE
 ...I am making progress but still have a few puzzles to figure out. Also, I was looking for GPUOWL v6.11 because it looks like v7.0 is still experimental so I need to figure out how to clone an earlier version.
"git switch v6" to switch to the v6 branch, "git switch master" to go back to the currently-v7 branch. Until the Makefile edit is in the repo you'll need to do "git checkout -f" to drop the manual edit when pulling updates to master, then re-edit the Makefile.

Quote:
 Originally Posted by DrobinsonPE
 ... The edit to the makefile needs to be added to the first post so that the next person to follow the instructions knows what to do.
I'll see if I can dust off a github account and make a pull request so it's no longer an issue.

APU support is lacking with ROCm, but you can get it to work with the OpenCL part we need. To do that from the upstream install you should have from following the first post, try:
Code:
sudo apt autoremove rocm-dev
sudo apt install rocm-dkms
sudo usermod -a -G video $LOGNAME
sudo usermod -a -G render $LOGNAME
sudo reboot
This converts your ROCm install from using upstream to using AMD's latest release; the difference is that upstream relies on drivers that have made it into the kernel, and I'm assuming Mint 20 is using kernel 5.4, which is probably too early for decent APU support as it's the most recent development. The usermod lines may not be necessary, but they also won't hurt.

As a future note, if AMD ever releases a new GPU worth a damn for compute we're going to need to use AMD's drivers for a while for ease. It's the bridge between a new card needing recent developments and ROCm only playing nice with older LTS kernels. All that means, roughly speaking, is installing rocm-dkms instead of rocm-dev, with potentially a few twiddly bits like the usermod lines. OP's install guide is based on the following link with upstream substituted for AMD's drivers; for an AMD driver install just follow this standard guide: https://rocmdocs.amd.com/en/latest/I...de.html#ubuntu

Last fiddled with by M344587487 on 2020-10-21 at 09:08 Reason: A typo in the apt install line, not pretty

2020-10-21, 10:18   #10
preda
"Mihai Preda"
Apr 2015

10101001111₂ Posts

Quote:
 Originally Posted by DrobinsonPE
 The edit to the makefile needs to be added to the first post so that the next person to follow the instructions knows what to do.
No big deal, that libpath was already added to Makefile. (in master only)

2020-11-07, 18:48   #11
DrobinsonPE

Aug 2020

2³·11 Posts

After months of trying, I finally got GPUOWL working on Linux. The instructions above did not work with multiple tries on multiple systems. This possibly is because I am learning as I go and keep making mistakes in the installation. I did learn a lot about how to install GPUOWL, including how to use git and what programs needed to be installed on the computer to install ROCm and make GPUOWL from the instructions above.
Here is what worked: install ROCm following the instructions here: https://rocmdocs.amd.com/en/latest/I...ion-Guide.html This included using the following commands:

o sudo apt update
o sudo apt dist-upgrade
o sudo apt install libnuma-dev
o sudo reboot
o wget -q -O - https://repo.radeon.com/rocm/rocm.gpg.key | sudo apt-key add -
o echo 'deb [arch=amd64] https://repo.radeon.com/rocm/apt/debian/ xenial main' | sudo tee /etc/apt/sources.list.d/rocm.list
o sudo apt update
o sudo apt install rocm-dkms && sudo reboot
o sudo usermod -a -G video $LOGNAME
o sudo usermod -a -G render $LOGNAME
o sudo reboot

Check the installation worked with the following:

o /opt/rocm/bin/rocminfo
o /opt/rocm/opencl/bin/clinfo

Install GPUOWL following the instructions here: https://github.com/preda/gpuowl This included using the following commands:

o sudo apt install git
o sudo apt install libgmp-dev
o sudo apt install gcc
o git clone https://github.com/preda/gpuowl && cd gpuowl && make

Make a worktodo.txt (in Linux, do not forget to add the .txt to the end of the file name) and add an assignment. I still have not figured out what to put in a config.txt yet - future homework.

Computer: ASRock Deskmini A300W, AMD A8-9600, 16GB DDR4, SSD
Programs: Linux Mint 20, ROCm v3.9, GPUOWL v7.2-16-g1a50f11

This is what it has displayed so far. It has not displayed any progress yet, so I am not sure it is actually working. I will post in the "gpuOwL: an OpenCL program for Mersenne primality testing" thread if it starts showing progress.
Code:
drobinson@A8-9600:~/gpuowl$ /home/drobinson/gpuowl/gpuowl
2020-11-07 09:19:41 GpuOwl VERSION v7.2-16-g1a50f11
2020-11-07 09:19:41 GpuOwl VERSION v7.2-16-g1a50f11
2020-11-07 09:19:41 Note: not found 'config.txt'
2020-11-07 09:19:41 device 0, unique id ''
2020-11-07 09:19:41 gfx801-0 100406741 FFT: 5.50M 1K:11:256 (17.41 bpw)
2020-11-07 09:19:41 gfx801-0 100406741 OpenCL args "-DEXP=100406741u -DWIDTH=1024u -DSMALL_HEIGHT=256u -DMIDDLE=11u -DAMDGPU=1 -DWEIGHT_STEP_MINUS_1=0.5051841309934193 -DIWEIGHT_STEP_MINUS_1=-0.33562945595234156 -DIWEIGHTS={0,-0.33562945595234156,-0.11722356040363666,-0.41350933655290917,-0.22070575769356826,-0.48225986026566819,-0.31205740337878252,-0.0859024056184058,-0.39270048390804446,-0.19305618018821558,-0.46389029541574911,-0.2876490077922636,-0.053469967508113697,-0.37115332735591766,-0.16442558794578249,-0.4448689732712372,} -cl-std=CL2.0 -cl-finite-math-only "

Last fiddled with by DrobinsonPE on 2020-11-07 at 19:05 Reason: grammar



Permission is granted to copy, distribute and/or modify this document under the terms of the GNU Free Documentation License, Version 1.2 or any later version published by the Free Software Foundation.
A copy of the license is included in the FAQ.