


Training a deep learning model involves operations over tensors. A tensor is a multi-dimensional array of numbers. For several years, GPUs were used for these linear algebra calculations. That’s because graphics chips are built to efficiently process matrix operations.
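To make that concrete, here is a minimal sketch in NumPy (my own illustration; no code is discussed in the episode) of what a tensor is and the kind of matrix operation that dominates training:

```python
import numpy as np

# A tensor is a multi-dimensional array of numbers.
vector = np.array([1.0, 2.0, 3.0])         # rank 1, shape (3,)
matrix = np.arange(6.0).reshape(2, 3)      # rank 2, shape (2, 3)
batch = np.random.rand(32, 2, 3)           # rank 3, e.g. a batch of matrices

# Training is dominated by operations like this dense-layer forward pass:
# a matrix multiplication plus a bias, the workload GPUs parallelize well.
weights = np.random.rand(3, 4)
bias = np.random.rand(4)
activations = matrix @ weights + bias      # shape (2, 4)
print(activations.shape)
```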
Tensor processing consists of linear algebra operations that are similar in some ways to graphics processing, but not identical. Deep learning workloads therefore do not run as efficiently on conventional GPUs as they would on specialized chips built for deep learning.
In order to train deep learning models faster, new hardware needs to be designed with tensor processing in mind.
Xin Wang is a data scientist with the artificial intelligence products group at Intel. He joins today’s show to discuss deep learning hardware and Flexpoint, a numerical format that reduces the space tensors take up on a chip. Xin presented his work at NIPS, the Neural Information Processing Systems conference, and we talked about what he saw at NIPS that excited him. Full disclosure: Intel, where Xin works, is a sponsor of Software Engineering Daily.
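As background for that discussion: the Flexpoint paper Xin co-authored stores an entire tensor as fixed-point integer mantissas that share a single exponent, instead of giving every value its own exponent the way float32 does. The sketch below is my own simplified illustration of that shared-exponent idea, not Intel’s implementation; the published format (flex16+5) also predicts and adjusts the exponent across training iterations.

```python
import numpy as np

def to_flex(tensor, mantissa_bits=16):
    """Encode a float tensor as integer mantissas plus ONE shared exponent.

    Simplified sketch of the Flexpoint idea (assumes a nonzero tensor):
    values inside a deep learning tensor tend to share a magnitude range,
    so one exponent per tensor is far cheaper than one per value.
    """
    max_val = np.abs(tensor).max()
    # Smallest shared exponent that keeps the largest mantissa in range.
    exponent = int(np.floor(np.log2(max_val))) + 1 - (mantissa_bits - 1)
    scale = 2.0 ** exponent
    # Real Flexpoint stores these in 16 bits; int32 keeps the demo simple.
    return np.round(tensor / scale).astype(np.int32), exponent

def from_flex(mantissas, exponent):
    """Decode: value = mantissa * 2**exponent."""
    return mantissas.astype(np.float64) * 2.0 ** exponent

x = np.random.randn(4, 4).astype(np.float32)
m, e = to_flex(x)
print("shared exponent:", e)
print("max round-trip error:", np.abs(from_flex(m, e) - x).max())
```

Because arithmetic on the mantissas is plain integer arithmetic, hardware can spend its silicon on wide integer units rather than per-value floating-point logic, which is where the space savings come from.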
The post Deep Learning Hardware with Xin Wang appeared first on Software Engineering Daily.
