(Brief pitch for a general audience, based on a 5-minute talk I gave.)
Let's talk about Reinforcement Learning (RL) agents as a possible path to Artificial General Intelligence (AGI)
My research focuses on “RL agents”, broadly construed. These were big in the 2010s, when they made the news for learning to play Atari games and Go at a superhuman level. Then LLMs came along in the 2020s, and everyone kinda forgot that RL agents existed. But I’m part of a small group of researchers who still think that the field will pivot back to RL agents one of these days. (Others in this category include Yann LeCun and Rich Sutton & David Silver.)
Why do I think that? Well, LLMs are very impressive, but we don’t have AGI yet, at least not as I use the term. Humans can found and run companies; LLMs can’t. If you want a human to drive a car, you take an off-the-shelf human brain, the same human brain that was designed 100,000 years before cars existed, give it minimal instructions and a week to mess around, and now they’re driving the car. If you want an AI to drive a car, it’s … not that.
[...]
---
Outline:
(00:15) Let's talk about Reinforcement Learning (RL) agents as a possible path to Artificial General Intelligence (AGI)
(02:17) Reward functions in RL
(04:23) Reward functions in neuroscience
(05:25) We need a (far more robust) field of reward function design
(06:06) Oh man, are we dropping this ball
(07:30) Reward Function Design: Neuroscience research directions
(08:14) Reward Function Design: AI research directions
(08:46) Bigger picture
---