The TWIML AI Podcast (formerly This Week in Machine Learning & Artificial Intelligence)

Training Data Locality and Chain-of-Thought Reasoning in LLMs with Ben Prystawski - #673

02.26.2024 - By Sam Charrington

Today we’re joined by Ben Prystawski, a PhD student in the Department of Psychology at Stanford University working at the intersection of cognitive science and machine learning. Our conversation centers on Ben’s paper, “Why think step by step? Reasoning emerges from the locality of experience,” which he presented at NeurIPS 2023. We start out exploring basic questions about LLM reasoning, including whether it exists, how we can define it, and how techniques like chain-of-thought prompting appear to strengthen it. We then dig into the details of Ben’s paper, which aims to explain why thinking step by step is effective and demonstrates that the local structure of the training data is the key property that makes it work.
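For context on the technique discussed in the episode, here is a minimal sketch of what chain-of-thought prompting looks like in practice. The `query_llm` function is a hypothetical placeholder for whatever LLM client you use; the point of the example is only the prompt structure, asking the model to reason through intermediate steps before answering.

```python
# Minimal illustration of direct vs. chain-of-thought prompting.
# `query_llm` is a hypothetical stand-in for any LLM completion API.

def query_llm(prompt: str) -> str:
    """Placeholder for a call to an LLM; replace with your provider's client."""
    raise NotImplementedError("Wire this up to your LLM API of choice.")

question = "A train travels 60 miles in 1.5 hours. What is its average speed?"

# Direct prompting: ask for the answer immediately.
direct_prompt = f"Q: {question}\nA:"

# Chain-of-thought prompting: invite the model to produce intermediate
# reasoning steps (e.g., "60 miles / 1.5 hours = 40 mph") before the answer.
cot_prompt = f"Q: {question}\nA: Let's think step by step."
```

Why the second prompt tends to work better, and what property of the training data makes those intermediate steps useful, is the question Ben’s paper takes up.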

The complete show notes for this episode can be found at twimlai.com/go/673.
