2020-02-05, 07:01   #78
ewmayer

OK, first gpuowl issue: my Haswell system has always been notoriously unstable; I get the Linux equivalent of a BSOD roughly twice a week, with no overclocking either. A quick before-bed check found it had crashed sometime in the last few hours. On reboot, restarting my Mlucas job on the CPU was no problem, but trying to restart gpuowl (from within the run0 dir I created inside the main gpuowl dir) hits this (file list shown at the end):
Code:
ewmayer@ewmayer-haswell:~/gpuowl/run0$ ../gpuowl
2020-02-04 22:37:23 gpuowl v6.11-142-gf54af2e
2020-02-04 22:37:23 Note: not found 'config.txt'
2020-02-04 22:37:23 device 0, unique id ''
2020-02-04 22:37:24 gfx906+sram-ecc-0 103984877 FFT 5632K: Width 256x4, Height 64x4, Middle 11; 18.03 bits/word
2020-02-04 22:37:25 gfx906+sram-ecc-0 OpenCL args "-DEXP=103984877u -DWIDTH=1024u -DSMALL_HEIGHT=256u -DMIDDLE=11u -DWEIGHT_STEP=0x1.f54acc23489eep+0 -DIWEIGHT_STEP=0x1.0577e0c0e09e4p-1 -DWEIGHT_BIGSTEP=0x1.ae89f995ad3adp+0 -DIWEIGHT_BIGSTEP=0x1.306fe0a31b715p-1 -DAMDGPU=1  -I. -cl-fast-relaxed-math -cl-std=CL2.0"
1 warning generated.
2020-02-04 22:37:29 gfx906+sram-ecc-0 warning: argument unused during compilation: '-I .'

2020-02-04 22:37:29 gfx906+sram-ecc-0 OpenCL compilation in 3.90 s
2020-02-04 22:37:29 gfx906+sram-ecc-0 '/home/ewmayer/gpuowl/run0/103984877/103984877.owl' invalid
2020-02-04 22:37:30 gfx906+sram-ecc-0 103984877 OK 35000000 loaded: blockSize 400, 2c0ebcb44118e8be
2020-02-04 22:37:31 gfx906+sram-ecc-0 Can't open '/home/ewmayer/gpuowl/run0/103984877/103984877-new.owl' (mode 'wb')
2020-02-04 22:37:31 gfx906+sram-ecc-0 Exception NSt10filesystem7__cxx1116filesystem_errorE: filesystem error: can't open file: Success [/home/ewmayer/gpuowl/run0/103984877/103984877-new.owl]
2020-02-04 22:37:31 gfx906+sram-ecc-0 Bye

ewmayer@ewmayer-haswell:~/gpuowl/run0$ ll
total 80
drwxr-xr-x 3 ewmayer ewmayer  4096 Feb  4 14:41 ./
drwxr-xr-x 8 ewmayer ewmayer  4096 Feb  3 15:40 ../
drwxr-xr-x 2 root    root     4096 Feb  4 22:28 103984877/
-rw-r--r-- 1 ewmayer ewmayer 45684 Feb  4 22:37 gpuowl.log
-rw-r--r-- 1 ewmayer ewmayer   301 Feb  4 14:44 results.txt
-rw-r--r-- 1 root    root      181 Feb  4 14:41 worktodo.txt
-rw-r--r-- 1 root    root      244 Feb  4 13:58 worktodo.txt-bak

ewmayer@ewmayer-haswell:~/gpuowl/run0$ ll 103984877/
total 128216
drwxr-xr-x 2 root    root        4096 Feb  4 22:28 ./
drwxr-xr-x 3 ewmayer ewmayer     4096 Feb  4 14:41 ../
-rw-r--r-- 1 root    root    12998165 Feb  4 22:26 103984877-old.owl
-rw-r--r-- 1 root    root    12998155 Feb  4 14:17 103984877-old.p1.owl
-rw-r--r-- 1 root    root    46137398 Feb  4 14:38 103984877-old.p2.owl
-rw-r--r-- 1 root    root           0 Feb  4 22:28 103984877.owl
-rw-r--r-- 1 root    root    12998155 Feb  4 14:18 103984877.p1.owl
-rw-r--r-- 1 root    root    46137398 Feb  4 14:40 103984877.p2.owl
I notice the 0-sized .owl file is the primary backup, and there is no -new.owl file. But there is a -old.owl file last updated 2 mins before the .owl one, so I copied that to the .owl one and restarted ... no joy:
Code:
ewmayer@ewmayer-haswell:~/gpuowl/run0$ ../gpuowl
2020-02-04 22:52:21 gpuowl v6.11-142-gf54af2e
2020-02-04 22:52:21 Note: not found 'config.txt'
2020-02-04 22:52:21 device 0, unique id ''
2020-02-04 22:52:21 gfx906+sram-ecc-0 103984877 FFT 5632K: Width 256x4, Height 64x4, Middle 11; 18.03 bits/word
2020-02-04 22:52:22 gfx906+sram-ecc-0 OpenCL args "-DEXP=103984877u -DWIDTH=1024u -DSMALL_HEIGHT=256u -DMIDDLE=11u -DWEIGHT_STEP=0x1.f54acc23489eep+0 -DIWEIGHT_STEP=0x1.0577e0c0e09e4p-1 -DWEIGHT_BIGSTEP=0x1.ae89f995ad3adp+0 -DIWEIGHT_BIGSTEP=0x1.306fe0a31b715p-1 -DAMDGPU=1  -I. -cl-fast-relaxed-math -cl-std=CL2.0"
1 warning generated.
2020-02-04 22:52:26 gfx906+sram-ecc-0 warning: argument unused during compilation: '-I .'

2020-02-04 22:52:26 gfx906+sram-ecc-0 OpenCL compilation in 3.80 s
2020-02-04 22:52:26 gfx906+sram-ecc-0 103984877 OK 35000000 loaded: blockSize 400, 2c0ebcb44118e8be
2020-02-04 22:52:27 gfx906+sram-ecc-0 Can't open '/home/ewmayer/gpuowl/run0/103984877/103984877-new.owl' (mode 'wb')
2020-02-04 22:52:27 gfx906+sram-ecc-0 Exception NSt10filesystem7__cxx1116filesystem_errorE: filesystem error: can't open file: Success [/home/ewmayer/gpuowl/run0/103984877/103984877-new.owl]
2020-02-04 22:52:27 gfx906+sram-ecc-0 Bye
So I then copied the same -old.owl file to the -new.owl one ... still no joy:
Code:
ewmayer@ewmayer-haswell:~/gpuowl/run0$ ../gpuowl
2020-02-04 22:53:31 gpuowl v6.11-142-gf54af2e
2020-02-04 22:53:31 Note: not found 'config.txt'
2020-02-04 22:53:31 device 0, unique id ''
2020-02-04 22:53:32 gfx906+sram-ecc-0 103984877 FFT 5632K: Width 256x4, Height 64x4, Middle 11; 18.03 bits/word
2020-02-04 22:53:33 gfx906+sram-ecc-0 OpenCL args "-DEXP=103984877u -DWIDTH=1024u -DSMALL_HEIGHT=256u -DMIDDLE=11u -DWEIGHT_STEP=0x1.f54acc23489eep+0 -DIWEIGHT_STEP=0x1.0577e0c0e09e4p-1 -DWEIGHT_BIGSTEP=0x1.ae89f995ad3adp+0 -DIWEIGHT_BIGSTEP=0x1.306fe0a31b715p-1 -DAMDGPU=1  -I. -cl-fast-relaxed-math -cl-std=CL2.0"
1 warning generated.
2020-02-04 22:53:36 gfx906+sram-ecc-0 warning: argument unused during compilation: '-I .'

2020-02-04 22:53:36 gfx906+sram-ecc-0 OpenCL compilation in 3.76 s
2020-02-04 22:53:37 gfx906+sram-ecc-0 103984877 OK 35000000 loaded: blockSize 400, 2c0ebcb44118e8be
2020-02-04 22:53:38 gfx906+sram-ecc-0 Can't open '/home/ewmayer/gpuowl/run0/103984877/103984877-new.owl' (mode 'wb')
2020-02-04 22:53:38 gfx906+sram-ecc-0 Exception NSt10filesystem7__cxx1116filesystem_errorE: filesystem error: can't open file: Success [/home/ewmayer/gpuowl/run0/103984877/103984877-new.owl]
2020-02-04 22:53:38 gfx906+sram-ecc-0 Bye
Help!

In the meantime I simply deleted the current entry from worktodo.txt and restarted gpuowl on the next one.

2020-02-05, 07:08   #79
kriesel

Quote:
Originally Posted by ewmayer View Post
Seeing those actual per-iter times on what was until an hour ago an aged, clunky 6-y.o. Haswell system is something else, that's for sure. Thanks, Mihai, for such a great program! It was nice to be able to upgrade the aforementioned aging system this way, got a lot of added-throughput bang for my hardware-purchase $
Gpuowl is certainly great for the price, and Mihai is due a lot of thanks. Let's also remember the contributions to its speed from Prime95, NVIDIA compatibility from Fan Ming, and documentation and other contributions from SELROC and others.
Welcome to the GPU side. Old hardware with hefty power supplies increasingly looks like a home just begging for a fast GPU.

2020-02-05, 07:55   #80
preda ("Mihai Preda")

That error message is saying that gpuowl intends to *create* the file <n>-new.owl (to write a new checkpoint to it), and of course it's a fatal error if it can't do so. Why can't it create the file? Maybe the disk is full, maybe the permissions on the folder are wrong, maybe something else. Can you manually write to that path, as the same user that runs gpuowl?

Quote:
Originally Posted by ewmayer View Post
2020-02-04 22:37:31 gfx906+sram-ecc-0 Can't open '/home/ewmayer/gpuowl/run0/103984877/103984877-new.owl' (mode 'wb')
It seems the owner of the folder /home/ewmayer/gpuowl/run0/103984877/ is root.
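A quick way to check, as a sketch (the directory is the one from the log above):
Code:
# confirm ownership, permissions and free space for the run directory
ls -ld ~/gpuowl/run0/103984877
df -h ~/gpuowl/run0
# try to create a file there as the same user that runs gpuowl
touch ~/gpuowl/run0/103984877/write-test && rm ~/gpuowl/run0/103984877/write-test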

Last fiddled with by preda on 2020-02-05 at 07:58

2020-02-05, 08:01   #81
preda ("Mihai Preda")

Quote:
Originally Posted by kriesel View Post
I think you don't. Gpuowl prints to both gpuowl.log and to console. On Windows the console output is not redirectable in my experience. Just dedicate a (virtual) terminal to it and move on.
On Linux you can redirect output to a file, or to /dev/null:

./gpuowl options > /dev/null

Or, nohup will also redirect output to a file (nohup.out by default) and keep the background process running after the shell closes:

nohup ./gpuowl options &
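
A slight variation on the same idea, as a sketch (run.log is just an example filename; "options" stands for whatever arguments you already pass): redirect stderr as well and append, so output survives the shell closing and accumulates across restarts:

Code:
nohup ./gpuowl options >> run.log 2>&1 &
tail -f run.log     # or tail -f gpuowl.log, which gpuowl writes on its own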

2020-02-05, 08:10   #82
preda ("Mihai Preda")

I think the max sclk level is 7, and that is also the default. The card can't run on that sclk for any length of time without overheating, so it thermally throttles *a lot* until it cools down, after which it speeds up again, and so on, in an inefficient see-saw pattern.

While running PRP you could proceed to memory-overclock tuning; usually 1150 is safe, and it can go up to 1180 or 1200. In general you want at least 24 h without errors as validation.

I usually run at sclk 3 or lower, but never more than 4.
Code:
GPU  VDD    SCLK  MCLK  Mem-used  Mem-busy  PWR   FAN   Temp      PCIeErr
0    762mV  1243  1181  0.43GB    36%       129W  2004  64/79/72  2
1    781mV  1252  1161  0.43GB    37%       136W  1803  65/77/71  0
2    737mV  1251  1181  0.80GB    36%       124W  1805  63/76/71  0
The above values correspond to a bit under sclk 3 (between 2 and 3). I get 800us/it at 5M FFT. The total system power at the plug is 580W.
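(As a back-of-the-envelope check on those figures: 800 us/it works out to 86,400 s / 0.0008 s ≈ 108 million iterations per day per card, i.e. on the order of one ~100M-iteration PRP test per card per day.)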

Quote:
Originally Posted by ewmayer View Post
Thanks - nice and simple. In the meantime I upped the fan setting to 150, then tried --setsclk with setting 3,4,5 - looks like 5 is the default, is that right?
Code:
--setsclk 5: 757 us/iter, temp = 70C, watts = 400 [~120 of those are baseline, including an ongoing 4-thread Mlucas job on the CPU]
--setsclk 4: 792 us/iter, temp = 65C, watts = 350
--setsclk 3: 848 us/iter, temp = 63C, watts = 300
So without fiddling the clocking, simply upping fanspeed to 150 dropped the temp from 80C to 70C. Downclocking cuts the wattage nicely, but it's hard to see what the effect on runtime is because the job I started is in p-1 stage 2. I'll update with effect of the above setting on per-iteration times once the job gets into PRP-test mode. [Edit: added per-iter to above table.]

Based on the results, I'll use '--setsclk 4' for now. Preda, can I expect any total-throughput boost from running 2 jobs per Matt's instructions, at the same settings?

Last fiddled with by preda on 2020-02-05 at 08:15

2020-02-05, 08:26   #83
kriesel

Quote:
Originally Posted by preda View Post
on Linux you could redirect output to a file, or to /dev/null

./gpuowl options > /dev/null

Or, nohup will also redirect output to a file and keep the background process running after shell close:

nohup ./gpuowl options &
Attempts to redirect with append (>>) on Google Colab, which runs Linux VMs, did not work for background tasks, which would have allowed monitoring the VM with top in the foreground.
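
One workaround that sidesteps shell redirection entirely, as a sketch (relying on the fact noted above that gpuowl also writes gpuowl.log itself):
Code:
nohup ./gpuowl options > /dev/null 2>&1 &    # background the run, discard console output
top                                          # monitor the VM in the foreground
# in another cell or terminal: tail -f gpuowl.log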

Last fiddled with by kriesel on 2020-02-05 at 08:27

2020-02-05, 20:38   #84
ewmayer

Quote:
Originally Posted by preda View Post
That error message wants to say that gpuowl intends to *create* the file <n>-new.owl (to write to it a new checkpoint), and of course it's a fatal error if it can't do so. Why it can't create the file? maybe disk full, maybe wrong rights on the folder, maybe something else? Can you manually write to that path? as the same user as gpuowl?

It seems the owner of the folder /home/ewmayer/gpuowl/run0/103984877/ is root.
In my post-crash trying-to-restart flailings, I was able to create the <n>.owl copy of the <n>-old.owl file (as the former showed empty) using "sudo cp", and similarly for try #2, "sudo cp <n>-old.owl <n>-new.owl" ... so 'sudo cp' allowed the file copy, but left the ownership as root ... weird. Still learning the various subtle differences between using sudo and doing stuff as root.

So I woke up this a.m. to suspiciously quiet fan noise from the system ... no crash, just the 'backup run' of the next assignment in the worktodo file having quit because p-1 stage 2 found a factor. And I'd neglected to add more assignments to pad the worktodo file. Grr.

Anyhow, as root, I restored the 103984877 files dir to its post-system-crash state (a valid-looking <n>-old.owl file, an empty <n>.owl file, and no <n>-new.owl file), then chown'ed the ownership to me-as-regular-user, restored the worktodo entry, and restarted ... still the same error trying to create <n>-new.owl. But then I saw that I'd forgotten to change the group of the files in question from root to me (i.e. my 'chown ewmayer *' should've been 'chown ewmayer:ewmayer *'), so I used 'sudo chgrp ewmayer *' (equivalent to 'chown :ewmayer *') to do that, and now the restart is successful. Thanks for the help.
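
For the record, the whole ownership fix can be done in one shot (a sketch using the same paths as above), which also avoids touching the savefiles with sudo at all:
Code:
sudo chown -R ewmayer:ewmayer ~/gpuowl/run0    # fix owner and group on the run dir and savefiles
ls -lR ~/gpuowl/run0                           # verify nothing is left owned by root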

Quote:
Originally Posted by preda View Post
I think the max sclk is 7, that being the default too. The card can't run for any amount of time on that sclk though due to overheating, thus thermally throttles *a lot* until it cools down, after which it speeds up again etc in an inefficient see-saw pattern.
Yes, I noticed that last night during my post-crash restart of the backup assignment - wall wattage (again, 120W of which is baseline with Mlucas on the CPU) started at a whopping 450W; --setsclk 5 lowered that to 400W, --setsclk 4 to 350W.

Quote:
While running PRP you could proceed to memory overclock tuning, usually 1150 is safe and can go up to 1180 or 1200. In general you want at least 24h without errors as validation.

I usually run at sclk 3 or lower, but never more than 4.
On my R7, --showmclkrange shows a valid range of 808 MHz to 2200 MHz, and arg-less rocm-smi shows a default memory clock of 1001 MHz ... to upclock that, should I use --setmclk [level], or should I use --setmlevel MCLKLEVEL MCLK MVOLT (if the latter, lmk what 3 arg values I should use)?

Quote:
The above values correspond to a bit under sclk 3 (between 2 and 3). I get 800us/it at 5M FFT. The total system power at the plug is 580W.
That's a very nicely low wattage for 3 R7s plus system background. What temperature range do your cards run at? In your experience, what is the maximum safe temp for stable running?

Last fiddled with by ewmayer on 2020-02-05 at 20:41

2020-02-05, 20:53   #85
M344587487

IIRC the default temperature target maintained by the variable fan speed is 95 C, and the "oh dear" territory is 105 C. I suggest anything lower than 90 C; it depends how much tolerance you have for noise and for wear and tear on the fans.
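
For keeping an eye on it, a minimal sketch (rocm-smi's summary output includes temperature; exact columns vary with ROCm version):
Code:
watch -n 5 /opt/rocm/bin/rocm-smi    # refresh the card summary, including temperatures, every 5 seconds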

2020-02-05, 21:30   #86
ewmayer

Quote:
Originally Posted by M344587487 View Post
IIRC the default temp target maintained by variable fan speed is 95 C and the "oh dear" territory is 105 C, I suggest anything lower than 90 C, depends how much tolerance you have for noise and wear and tear on the fans.
Currently getting a very manageable 70C with the fan level override set at 120 ... interestingly, the ATX case here is so old/beat-up that it has no working case fans anymore, just the CPU fan and the R7's fan array. I found that simply leaving off the case side panel (the one you remove to access the mobo) allows for good convective airflow: cooler room air enters the case through the open side and, once warmed, can easily escape through the upper-back and top-panel case-fan openings, as well as the top of the open side. The 2 fans at the front of the case were never connected, so I have the option of pulling those and replacing the aforementioned pair of defunct fans with them, but so far it hasn't proved necessary: your best case ventilation is that removed side panel. Plus it allows one to see the kewl red LEDs spelling out RADEON on the side of the R7 ... at night, it looks like the Vegas strip in there now. :)

Oh, Matt - do you agree with Preda's comment that single-job running with appropriately tuned fan and memclock settings now gives total throughput similar to the 2-job running your script sets up for? And would it be worthwhile updating your setup-guide post to reflect some of the issues I hit with my setup under Ubuntu 19.10? Specifically:

o Recent versions of GpuOwl need libgmp-dev to be installed;

o I needed to manually remove a bunch of nVidia package crud to get the system to properly recognize the R7;

o ROCm 3.0 breaks OpenCL, so if that is the current version shipping with one's distro, it needs to be reverted to 2.10 (or perhaps fiddle the pkg-install notes to get the latter from the start);

o If single-job running can now be done at more or less the same total throughput as 2-job, that part of the setup guide can be simplified.

Last fiddled with by ewmayer on 2020-02-05 at 22:10

2020-02-06, 10:09   #87
M344587487

Quote:
Originally Posted by ewmayer View Post
...
Oh, Matt - do you agree with Preda's comment that single-job running with appropriately tuned fan and memclock settings now gives total throughput similar to the 2-job running your script sets up for?
...
I trust preda that a single job is now optimal; my info is outdated and gpuowl has been worked on heavily.


Quote:
Originally Posted by ewmayer View Post
...
And would it be worthwhile updating your setup-guide post to reflect some of the issues I hit with my setup under Ubuntu 19.10? Specifically:

o Recent versions of GpuOwl need libgmp-dev to be installed;

o I needed to manually removed a bunch of nVidia package crud to get the system to properly recognize the R7;

o ROCm 3.0 breaks OpenCL, so if that is the current version shipping with one's distro, it needs to be reverted to 2.10 (or perhaps fiddle the pkg-install notes to get the latter from the start);

o If single-job running can now be done at more or less the same total throughput as 2-job, that part of the setup guide can be simplified.
It would, that was never intended to be a robust guide but I will make it one. I was planning to wait until Ubuntu 20.04 was released and ROCm had rebased to it but I can do a small update now.


  • Install Ubuntu 19.10
  • Update if you've never updated before to shake off any gremlins:
    Code:
    sudo apt update && sudo apt upgrade
  • If an nvidia card is present remove it and uninstall nvidia drivers (AMD cards do not play nice with nvidia cards):
    Code:
    sudo apt remove --purge '^nvidia-.*' && sudo apt install ubuntu-desktop
  • Expose AMD GPU tuning in the kernel:
    • Add tuning flag to grub:
      Code:
      Edit /etc/default/grub to add amdgpu.ppfeaturemask=0xffffffff to GRUB_CMDLINE_LINUX_DEFAULT
    • Push changes:
      Code:
      sudo update-grub
  • Install required libs including GMP:
    Code:
    sudo apt install libnuma-dev libgmp-dev
  • Add ROCm 2.10 repository to your sources list:
    • Add ROCm GPG key for signed packages:
      Code:
      wget -qO - http://repo.radeon.com/rocm/apt/debian/rocm.gpg.key | sudo apt-key add -
    • Add 2.10 repo to sources (at time of writing there's a problem with the current latest version, 3.0):
      Code:
      echo 'deb [arch=amd64] http://repo.radeon.com/rocm/apt/2.10.0/ xenial main' | sudo tee /etc/apt/sources.list.d/rocm.list
  • Install ROCm using the upstream drivers, add current user to video group so that they can access the GPU and reboot:
    Code:
    sudo apt update && sudo apt install rocm-dev && echo 'SUBSYSTEM=="kfd", KERNEL=="kfd", TAG+="uaccess", GROUP="video"' | sudo tee /etc/udev/rules.d/70-kfd.rules && sudo shutdown -r now
  • At this point the GPU and ROCm should be installed and working. The following commands should show information about the card and the environment:
    Code:
    /opt/rocm/bin/rocm-smi
    /opt/rocm/opencl/bin/x86_64/clinfo
    /opt/rocm/opencl/bin/x86_64/rocminfo
    lspci
  • Download and build gpuowl:
    Code:
    git clone https://github.com/preda/gpuowl && cd gpuowl && make
  • Run gpuowl with no options to make sure it detects the card. It should also show the card's unique id
  • Start a PRP test to make sure it works, CTRL-C to cancel out
  • Setup is done. Now all you need to do is create a script you run on every reboot to tune the settings of the card (a rough sketch of such a script follows after this list). Bonus points if you make it a cron job. This is where my knowledge is outdated and I'll save researching it until Ubuntu 20.04 is viable:
    • Perhaps the unique id can be used to robustly and easily identify the card for tuning instead of groping around /sys?
    • At the very least have the card underclock for efficiency. Something along the lines of "rocm-smi --setsclk 3" using the unique id somehow as identifier
    • Memory overclock. Has this changed? I'm sure the old method still works but newer methods exist that may be more user friendly
    • Undervolt. Instead of the hacky "tweak max voltage on curve" there is a new way to be able to set the voltage on a per sclk/P-state basis. It may apply only to kernel 5.5+
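A minimal sketch of such a tuning script, under the assumption that rocm-smi's device-selection (-d) and fan flags work as on current ROCm; the values are examples to adjust, not recommendations, and depending on permissions it may need to run as root (e.g. from a root crontab "@reboot" entry):
Code:
#!/bin/bash
# re-apply card tuning after each reboot; example values only
SMI=/opt/rocm/bin/rocm-smi
$SMI -d 0 --setsclk 3      # underclock the core for efficiency (sclk levels run 0-7 on the Radeon VII)
$SMI -d 0 --setfan 150     # manual fan-speed override, as discussed earlier in the thread
$SMI                       # print the resulting state as a sanity check
A memory overclock (--setmclk / --setmlevel, discussed later in the thread) could be appended once it has run ~24 h error-free.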

2020-02-06, 11:12   #88
preda ("Mihai Preda")

Quote:
Originally Posted by ewmayer View Post
On my R7, --showmclkrange shows a valid range of 808MHz - 2200MHz, and arg-less rocm-smi shows a default memory clocking of 1001Mhz ... to upclock that should I use --setmclk [level], or should I use --setmlevel MCLKLEVEL MCLK MVOLT (if the latter, lmk what 3 arg values I should use)?
I don't have much experience setting the mem frequency with rocm-smi; I was not aware of --setmlevel. In particular, when overclocking the mem I was setting only the frequency, not the voltage. (I don't know whether the mem voltage is different from the "sclk" voltage, and if so, how to read the current mem voltage.)

Anyway, maybe you could try something like:
--setmlevel 1 1150
and see if that has an effect on performance (expected: increase in perf) and on power (expected: small increase in power).
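
As a sketch of how to check whether it took effect (using the rocm-smi path from the setup guide earlier in the thread):
Code:
/opt/rocm/bin/rocm-smi     # confirm MCLK now reports the new value, and watch the power draw
tail -f gpuowl.log         # compare us/iter before and after the change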

My GPUs usually run at under 85C. I think the max safe temperature is 102-105. Anyway, in the region above 100 the GPU throttles, so I would try to keep it under 97 to avoid thermal throttling. (The values above are for the "junction" temperature, which is the highest of the three: edge, junction, mem.) The default fan curve keeps the GPU too hot, so I set a higher manual fan speed.