Large language models (LLMs) are becoming increasingly impressive at creating human-like text and answering questions, but whether they can understand the meaning of the words they generate is a hotly debated issue. A big challenge is that LLMs are black boxes; they can make predictions and decisions based on the order of words, but they cannot communicate the reasons for doing so.
Ellie Pavlick at Brown University is building models that could help us understand how LLMs process language compared with humans. In this episode of The Joy of Why, Pavlick discusses what we know and don't know about LLM language processing, how their processes differ from humans', and how understanding LLMs better could also help us better appreciate our own capacity for knowledge and creativity.
By Steven Strogatz, Janna Levin and Quanta Magazine · 4.9 (495 ratings)