
In this episode, Françoise von Trapp speaks with Chee Ping Lee of Lam Research about the critical role of high bandwidth memory (HBM) in generative AI, emphasizing its high bandwidth and compact design.
HBM has received a lot of attention as one of the first technologies to implement 2.5D and 3D stacking. Lee explains how HBM uses advanced packaging technologies such as TSVs and microbumps to achieve high memory capacity and performance, and how Lam Research's etch and deposition solutions help enable these processes.
Listen to learn details about:
Contact Chee Ping Lee on LinkedIn
Learn more about why HBM is a critical enabler for generative AI in this blog post.
Lam Research
Support the show
By Françoise von Trapp
