


Pre-trained language models already contain vast knowledge—the challenge is producing the reasoning needed to handle ambiguous, multi-step tasks. Cognitive scientist Dr. Danielle Perszyk sits down with Amazon AI researcher Meiqi Sun to explore the shift from simple action execution to high-reasoning agents.
Drawing parallels to human cognitive development, they discuss how reinforcement learning enables models to generate and refine their own chains of thought rather than relying on rigid, human-written templates. Together, they unpack why teaching agents to reason requires the freedom to explore, struggle, and self-correct.
By Amazon AGI Lab