2019-09-08, 18:47   #36
xx005fs
 
"Eric"
Jan 2018
USA

2²×53 Posts

Quote:
Originally Posted by hansl
Just curious why gpuowl and not CUDALucas? I haven't tried either yet so I hardly know anything about them, but I guess I would assume that CUDA would generally beat OpenCL on nvidia GPUs?
I personally think PRP is superior to LL because it has a reliable error-check algorithm (the Gerbicz check). Secondly, gpuowl tends to run faster on Nvidia cards that are bandwidth-starved: for example, my Titan V is significantly faster on gpuowl than on CUDALucas, going from 1.12 ms/it down to 0.83 ms/it with the switch, roughly a 35% gain in throughput. The Tesla K80 should be no exception, since it has strong double precision (FP64 at one third of the FP32 rate) but relatively low memory bandwidth, so the same should apply, and obviously I want maximum throughput. I might still try CUDALucas sometime to see whether it's faster than gpuowl on the K80.
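
For context, the PRP error check mentioned above is the Gerbicz check. Here is a minimal Python sketch of the idea, with a toy exponent and block size (p=89, B=8) chosen purely for illustration; real clients like gpuowl verify only every so many blocks and roll back to a saved checkpoint on a mismatch, rather than checking after every block as done here.

Code:
# Toy base-3 PRP squaring loop on N = 2^p - 1 with a Gerbicz error check.
# Parameters are illustrative only, not what production software uses.

def prp_with_gerbicz(p=89, B=8, blocks=16):
    """Return True if every periodic Gerbicz check passes."""
    N = (1 << p) - 1
    x = 3          # current residue: 3^(2^i) mod N
    d = 3          # running product of block-boundary residues (checksum)
    d_prev = d
    for _ in range(blocks):
        for _ in range(B):          # one block of B squarings
            x = (x * x) % N
        d = (d * x) % N             # fold the block-boundary residue into the checksum
        # Independent recomputation: the new checksum must equal
        # 3 * (previous checksum)^(2^B) mod N.
        if d != (3 * pow(d_prev, 1 << B, N)) % N:
            return False            # error detected: a real client would redo the block
        d_prev = d
    return True

print(prp_with_gerbicz())   # expect True on an error-free run

The point is that the checksum is obtained in two independent ways, so an error injected anywhere in the squaring loop is detected with overwhelming probability; LL has no comparably cheap and robust check.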