Old 2020-07-03, 13:13   #2
tServo

Originally Posted by M344587487

Page 89:

What do people make of this new x86 extension for prime hunting? Separate from AVX-512, it adds "AI-specific" matrix operations. Accelerating int8 and bf16 matrix operations doesn't look too promising; if it were useful, someone probably would have written a program to take advantage of Nvidia's tensor-core hardware by now, but I know jack.
Nvidia has had these in CUDA since the Volta microarchitecture (2017), which introduced the first generation of tensor cores.
The Ampere microarchitecture introduces their third generation.
They mostly appeal to folks doing AI, since they greatly reduce training time and reduce latency when getting answers from a trained network.

Intel, once again, is left scrambling to play catch-up.