
Large Language Models might sound smart, but can they predict what happens when a cat sees a cucumber? In this episode, host Emily Laird throws LLMs into the philosophical ring with World Models, AI systems that learn from watching, poking, and pushing stuff around (kind of like toddlers). Meta’s Yann LeCun isn’t impressed by chatbots, and honestly, he might have a point. We break down why real intelligence might need both brains and brawn—or at least a good sense of gravity.
Connect with Emily Laird on LinkedIn