2020-07-03, 13:13   #2
tServo ("Marv")
Joined: May 2009
Location: near the Tannhäuser Gate
5×109 Posts
Quote:
Originally Posted by M344587487 View Post
https://fuse.wikichip.org/news/3600/...pphire-rapids/


https://en.wikichip.org/wiki/x86/amx#Instructions


Page 89:


https://software.intel.com/content/w...reference.html


What do people make of this new x86 extension for prime hunting? Separate from AVX-512, it adds "AI-specific" matrix operations. Accelerating int8 and bf16 matrix operations doesn't look too promising: if it were useful for this, someone probably would have already written a program to take advantage of Nvidia's tensor-core hardware, but I know jack.
Nvidia has had these in CUDA since the Volta microarchitecture (2017), and the Ampere microarchitecture introduces the third generation of tensor cores.
They mostly appeal to people doing AI, since they greatly reduce training time and inference latency when querying a trained network.
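The quoted doubt about bf16 can be made concrete: bf16 keeps float32's 8-bit exponent but carries only an 8-bit significand, so it cannot even represent every integer above 256, let alone the wide exact products that FFT-based multiplication of Mersenne-sized numbers depends on. A minimal Python sketch (the `to_bf16` helper is hypothetical; it truncates the float32 bit pattern rather than rounding to nearest even, which is enough to show the effect):

```python
import struct

def to_bf16(x: float) -> float:
    """Round a float to bfloat16 precision by keeping only the top 16 bits
    of its float32 representation (truncation; real hardware rounds to
    nearest even, but the representable values are the same)."""
    (bits,) = struct.unpack('<I', struct.pack('<f', x))
    return struct.unpack('<f', struct.pack('<I', bits & 0xFFFF0000))[0]

# bf16 has an 8-bit significand (7 stored bits plus the implicit 1),
# so consecutive integers stop being representable above 256:
print(to_bf16(256.0))  # 256.0 -- still exact
print(to_bf16(257.0))  # 256.0 -- already rounded away
```

That loss of exactness is why low-precision matrix engines are a poor fit for the error-free convolutions prime-hunting software needs, whereas fp64 (or careful error-bounded fp32) works.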

Intel, once again, is left scrambling to play catch-up.