In Amazon's description, the G3 instances are aimed at actual graphics workloads, such as 3D rendering, video encoding, and virtual reality. They use NVIDIA Tesla M60 GPUs.
For general-purpose GPU computation Amazon provides the P2 instances, which use NVIDIA K80 GPUs. Those instances were introduced last September; here's an old thread. I would imagine the difference is that the P2 instances are designed not to produce any computational errors, whereas the G3 instances would tolerate pixel errors in their graphics output. The C4 instances (CPU only) have ECC memory, and I'm not aware of any bad LL residue ever being produced by them (out of several thousand), so I expect the P2s would be similar, whereas the G3s presumably might have a nonzero error rate. I don't know for sure.

Unfortunately, the P2s are not even close to being cost-effective for LL testing, probably due to the ongoing heavy demand from things like machine learning applications. The G3s are currently a lot cheaper than the P2s, although their spot prices may go up in the coming weeks as AWS customers migrate their apps to the new instance type.

Note that the g3.4xlarge comes with one GPU and eight CPU cores ("16 vCPUs" means 16 hyperthreads), and you can run mprime on the CPUs in parallel with CUDALucas (or mfaktc) on the GPU. However, these CPUs run at a lower clock speed than the ones used in the C4 (compute-optimized) instances.

I think the most cost-effective option for LL testing on the cloud is still running multiple instances of the c4.large type. At current spot prices of about 1.6 cents an hour, you can run eight single-core c4.large instances for about 12.8 cents an hour (in the us-east-2 region). On the other hand, running one eight-cores-plus-GPU g3.4xlarge will cost around 17 cents an hour. The difference, about 4.2 cents an hour, represents the cost of running a single CUDALucas on the GPU, and for that price you could instead run about two and a half additional c4.large instances. I don't know how fast CUDALucas is on the M60, though.
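The arithmetic above can be sketched as a quick back-of-the-envelope calculation. The prices are the spot prices quoted in this post (us-east-2, July 2017); they fluctuate, so treat the numbers as illustrative only:

```python
# Back-of-the-envelope cost comparison, using the spot prices quoted above
# (us-east-2, July 2017; spot prices fluctuate).

C4_LARGE_PER_HOUR = 0.016    # one single-core c4.large, ~1.6 cents/hour
G3_4XLARGE_PER_HOUR = 0.17   # one g3.4xlarge (8 cores + 1 M60 GPU), ~17 cents/hour

# Eight c4.large instances give eight independent LL-testing cores.
eight_c4 = 8 * C4_LARGE_PER_HOUR

# The premium paid for the GPU on the g3.4xlarge, over eight CPU cores.
gpu_premium = G3_4XLARGE_PER_HOUR - eight_c4

# How many extra c4.large instances that premium would buy instead.
extra_c4 = gpu_premium / C4_LARGE_PER_HOUR

print(f"8x c4.large:              ${eight_c4:.3f}/hour")
print(f"GPU premium on g3.4xlarge: ${gpu_premium:.3f}/hour")
print(f"Equivalent extra c4.large instances: {extra_c4:.1f}")
```

So the single GPU effectively costs as much as roughly 2.6 additional single-core instances, which is the trade-off the rest of this post weighs.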
The comparison is actually considerably more unfavorable for the G3, because:

i) running mprime on an N-core (virtual) machine is always less efficient than running N copies of mprime on N one-core (virtual) machines spread across N different physical servers;
ii) the CPUs on the G3 instances have a significantly lower clock rate than the ones used in the C4 instances, so "eight CPU cores" vs. "eight CPU cores" is not an apples-to-apples comparison;
iii) the spot price of the G3 instances may rise in the coming weeks and months as AWS customers start using them.

TL;DR: probably still best to stick to home server farms if you want to use GPU-based programs for LL testing.

Last fiddled with by GP2 on 2017-07-16 at 19:56
Similar Threads

| Thread | Thread Starter | Forum | Replies | Last Post |
|--------|----------------|-------|---------|-----------|
| New Broadwell-EX Xeons | ATH | Hardware | 3 | 2017-02-28 01:18 |
| Issue with Broadwell-E and mprime? | Akujik | Information & Answers | 14 | 2016-08-05 09:16 |
| Broadwell Processor | firejuggler | Hardware | 57 | 2015-05-23 01:22 |
| Broadwell new instructions | tha | Hardware | 6 | 2014-07-18 00:08 |
| Intel formally announces Penryn processors | rx7350 | Hardware | 0 | 2008-01-08 15:35 |