Plaintext

Google's Custom Inference Chip: Is Nvidia's Growth Story Over?

Google is in active talks with Marvell to co-develop a custom inference chip — a direct challenge to the revenue stream Nvidia has been selling Wall Street as its next growth engine. Custom inference silicon could be 30–40% more efficient than Nvidia GPUs at hyperscaler scale, translating to billions of dollars in annual savings for a single company. After this episode, you'll understand why the AI compute market is splitting into two distinct worlds, what that means for Nvidia's valuation, and which semiconductor players stand to win or lose as inference workloads go custom.
─────────────────────────────
LINKS & RESOURCES
▶ Amazon — AI & Finance Books: https://www.amazon.com/s?k=ai+finance+investing&tag=plaintext05-20
Books on AI, investing, and the future of money
▶ Amazon — Best Budget Microphones: https://www.amazon.com/s?k=budget+podcast+microphone&tag=plaintext05-20
Gear to level up your audio setup
(Some links above are affiliate links. We may earn a small commission at no cost to you.)
This episode was produced using AI-generated voices and scripting.

By Plaintext