On this episode, we’re joined by Andrew Feldman, Founder and CEO of Cerebras Systems. Andrew and the Cerebras team are responsible for building the largest-ever computer chip and the fastest AI-specific processor in the industry.
We discuss:
- The advantages of using large chips for AI work.
- Cerebras Systems’ process for building chips optimized for AI.
- Why traditional GPUs aren’t the optimal machines for AI work.
- Why efficiently distributing computing resources is a significant challenge for AI work.
- How much faster Cerebras Systems’ machines are than other processors on the market.
- Reasons why some ML-specific chip companies fail and what Cerebras does differently.
- Unique challenges for chip makers and hardware companies.
- Cooling and heat-transfer techniques for Cerebras machines.
- How Cerebras approaches building chips that will fit the needs of customers for years to come.
- Why the strategic vision for what data to collect for ML needs more discussion.
Resources:
Andrew Feldman - https://www.linkedin.com/in/andrewdfeldman/
Cerebras Systems - https://www.linkedin.com/company/cerebras-systems/
Cerebras Systems | Website - https://www.cerebras.net/
Thanks for listening to the Gradient Dissent podcast, brought to you by Weights & Biases. If you enjoyed this episode, please leave a review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.
#OCR #DeepLearning #AI #Modeling #ML
By Lukas Biewald