

In this conversation, Jay Goldberg and Austin Lyons delve into the emergence of DeepSeek, an AI lab that has gained attention for its innovative models and unique approach to AI development. They discuss the origins of DeepSeek, its self-funded nature, and the implications of its advancements in the context of geopolitical constraints. The conversation highlights the lab's offerings, including its reasoning models and mixture-of-experts architecture, and explores how DeepSeek has managed to innovate despite hardware limitations. They also examine DeepSeek's contributions to scaling AI models, its improvements in training and inference efficiency, the implications of these innovations for market dynamics, and the importance of hardware optimization. DeepSeek has demonstrated that there are still many avenues for enhancing AI capabilities, despite the prevailing belief that scaling has plateaued. Ultimately, they conclude that while DeepSeek is making significant strides, it does not pose a direct threat to established players like OpenAI.
By Ben Bajarin and Jay Goldberg
4.9 · 3434 ratings
