CERIAS Weekly Security Seminar - Purdue University

Abulhair Saparov, Can/Will LLMs Learn to Reason?



Reasoning, the process of drawing conclusions from prior knowledge, is a hallmark of intelligence. Large language models, and more recently large reasoning models, have demonstrated impressive results on many reasoning-intensive benchmarks. Careful studies over the past few years have revealed that LLMs may exhibit some reasoning behavior, and larger models tend to perform better on reasoning tasks. However, even the largest current models still struggle with various kinds of reasoning problems. In this talk, we will try to address the question: Are the observed reasoning limitations of LLMs fundamental in nature? Or will they be resolved by further increasing the size and training data of these models, or by better techniques for training them? I will describe recent work that tackles this question from several different angles. The answer will help us better understand the risks posed by future LLMs as vast resources continue to be invested in their development.

About the speaker: Abulhair Saparov is an Assistant Professor of Computer Science at Purdue University. His research focuses on applications of statistical machine learning to natural language processing, natural language understanding, and reasoning. His recent work closely examines the reasoning capacity of large language models, identifying fundamental limitations and developing new methods and tools to address or work around those limitations. He has also explored the use of symbolic and neurosymbolic methods to both understand and improve the reasoning capabilities of AI models. He is also broadly interested in other applications of statistical machine learning, such as to the natural sciences.


By CERIAS <[email protected]>

4.1

7 ratings