The TWIML AI Podcast (formerly This Week in Machine Learning & Artificial Intelligence)

What’s Next in LLM Reasoning? with Roland Memisevic - #646

09.11.2023 - By Sam Charrington


Today we’re joined by Roland Memisevic, a senior director at Qualcomm AI Research. In our conversation with Roland, we discuss the significance of language in humanlike AI systems, along with the advantages and limitations of autoregressive models like Transformers in building them. We cover the current and future role of recurrence in LLM reasoning and the importance of improving grounding in AI, including the potential of developing a sense of self in agents. Along the way, we discuss Fitness Ally, a fitness coach built on a visually grounded large language model, which has served as a platform for Roland’s research into neural reasoning, as well as recent work exploring visual grounding for large language models and state-augmented architectures for AI agents.

The complete show notes for this episode can be found at twimlai.com/go/646.
