


In this must-listen episode, we dive deep into Broadcom’s groundbreaking Tomahawk 6 — the world’s first 102.4 Tbps Ethernet switch chip, purpose-built to unleash the next generation of AI, machine learning, and high-performance computing (HPC) networks.
Join Kamran Naqvi, Chief Network Architect at Broadcom, as he breaks down why AI is fundamentally a distributed compute problem and how Ethernet — not InfiniBand — is emerging as the dominant fabric for scaling to 100,000+ GPU clusters and beyond.
From congestion control innovations to cognitive routing and co-packaged optics, this episode unveils how Broadcom is redefining the modern networking stack for massive AI workloads.
🔍 What You’ll Learn in This Episode
Why AI, ML, and HPC demand ultra-efficient, high-bandwidth networking
Real-world challenges: flow entropy, tail latency, and RDMA limitations
Packet spraying vs. cognitive routing vs. global load balancing
How Tomahawk 6 enables 512-GPU scale-up & 100K-GPU scale-out with 2-tier Ethernet fabrics
Breakthroughs in power efficiency and congestion control
What the Ultra Ethernet Consortium (UEC) 1.0 specification means for the industry
Ethernet vs. InfiniBand: performance benchmarks, power savings, and scalability
A forward look at Ethernet architectures for 1M+ GPU clusters
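To make the load-balancing bullet above concrete: classic flow-hash ECMP pins every packet of a flow to one uplink, so a few elephant flows (low flow entropy) can overload a single link, while per-packet spraying spreads packets evenly at the cost of in-order delivery. The sketch below is purely illustrative and not Broadcom's implementation; the link count, flow names, and CRC-based hash are assumptions for the demo.

```python
import zlib
from collections import Counter

LINKS = 4  # number of uplinks in an illustrative 2-tier fabric

def ecmp_flow_hash(flows, pkts_per_flow):
    """Flow-hash ECMP: every packet of a flow lands on the same uplink.
    With few large flows, one link can absorb a disproportionate load."""
    load = Counter()
    for flow_id in flows:
        link = zlib.crc32(flow_id.encode()) % LINKS  # deterministic flow hash
        load[link] += pkts_per_flow
    return load

def packet_spray(flows, pkts_per_flow):
    """Per-packet spraying: each packet takes the next uplink round-robin,
    balancing load evenly but delivering packets out of order."""
    load = Counter()
    nxt = 0
    for _ in flows:
        for _ in range(pkts_per_flow):
            load[nxt % LINKS] += 1
            nxt += 1
    return load

# Two elephant flows (hypothetical names): ECMP may stack them on one or
# two links, while spraying distributes their packets across all four.
flows = ["gpu0->gpu7", "gpu1->gpu6"]
print("ECMP:", dict(ecmp_flow_hash(flows, 1000)))
print("Spray:", dict(packet_spray(flows, 1000)))
```

Cognitive routing and global load balancing, as discussed in the episode, go a step further by steering traffic based on observed congestion rather than blind hashing or round-robin.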
🎯 Who Should Listen?
Network architects, AI/ML engineers, datacenter strategists, cloud infrastructure leaders, and anyone shaping the future of high-speed networking.
📬 Need Support or Want to Learn More?
We’re here for you: [email protected]
🔗 Follow STORDIS for More AI & Networking Insights
🌐 Visit us: www.stordis.com
💻 Blog: https://stordis.com/blog/
📱 Facebook: https://www.facebook.com/people/STORDIS-GmbH/100057058555819/
📸 Instagram: https://www.instagram.com/stordis_open_networking/
👥 LinkedIn: https://www.linkedin.com/company/stordis/
🐦 X: https://twitter.com/STORDIS_GmbH/
By STORDIS GmbH