

Large-scale AI models that enable next-generation applications like natural language processing and autonomous systems require intensive training and immense power. The monetary and environmental costs are enormous.
This is where analog deep learning comes into play. The concept behind it is to develop a new type of hardware that can accelerate the training of neural networks, achieving a cheaper, more efficient, and more sustainable way to move forward with AI applications.
Murat Onen, a postdoctoral researcher in the Department of Electrical Engineering and Computer Science at MIT, explains.
Tune in to explore:
Press play for the full conversation.
Episode also available on Apple Podcasts: http://apple.co/30PvU9C
By Richard Jacobs (4.2, 494 ratings)