Super Data Science: ML & AI Podcast with Jon Krohn

691: A.I. Accelerators: Hardware Specialized for Deep Learning

06.27.2023 - By Jon Krohn


GPUs vs. CPUs, chip design, and the importance of chips in AI research: This highly technical episode is for anyone who wants to learn what goes into chip development and how to break into the competitive industry of accelerator design. With advice from expert guest Ron Diamant, Senior Principal Engineer at AWS, you'll get a breakdown of the need-to-know technical terms, what chip engineers must consider during the design phase, and what the future holds for processing hardware.

This episode is brought to you by Posit, the open-source data science company (https://posit.co), by the AWS Insiders Podcast (https://pod.link/1608453414), and by https://WithFeeling.ai, the company bringing humanity into AI. Interested in sponsoring a SuperDataScience Podcast episode? Visit JonKrohn.com/podcast for sponsorship information.

In this episode you will learn:

• What CPUs and GPUs are [05:29]

• The differences between accelerators used for deep learning [14:31]

• Trainium and Inferentia: AWS's A.I. Accelerators [22:10]

• Whether model optimizations will lead to lower demand for the hardware that processes them [43:14]

• How a chip designer goes about production [48:34]

• Breaking down the technical terminology of chips: accelerator interconnect, dynamic execution, and collective communications [55:29]

• The importance of AWS Neuron, the software development kit for Trainium and Inferentia [1:15:42]

• How Ron got his foot in the door with chip design [1:26:40]

Additional materials: www.superdatascience.com/691

More episodes from Super Data Science: ML & AI Podcast with Jon Krohn