AXRP - the AI X-risk Research Podcast

2 - Learning Human Biases with Rohin Shah


One approach to creating useful AI systems is to watch humans doing a task, infer what they're trying to do, and then try to do that well. The simplest way to infer what the humans are trying to do is to assume there's one goal that they share, and that they're optimally achieving the goal. This has the problem that humans aren't actually optimal at achieving the goals they pursue. We could instead code in the exact way in which humans behave suboptimally, except that we don't know that either. In this episode, I talk with Rohin Shah about his paper about learning the ways in which humans are suboptimal at the same time as learning what goals they pursue: why it's hard, how he tried to do it, how well he did, and why it matters.
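
As a purely illustrative aside, here is a small Python sketch of the kind of problem the episode discusses. It is not the paper's method, code, or experimental setup: the bias used here (a simulated human who takes a context-dependent "default" option with some probability), the grid search, and all names and numbers are assumptions made up for illustration. The point it shows is that inferring a reward while assuming the human is merely noisily rational bakes their systematic bias into the reward, whereas fitting even a one-parameter bias model jointly with the reward recovers something closer to the truth.

# A minimal toy sketch, not the paper's method, code, or experimental setup.
# A simulated human picks among three options but, with probability TRUE_BIAS,
# just takes a context-dependent default. We then infer the reward two ways:
# (a) assuming the human is noisily rational with no systematic bias, and
# (b) fitting the bias parameter jointly with the reward.
import itertools
import numpy as np

rng = np.random.default_rng(0)

TRUE_REWARD = np.array([0.0, 0.5, 1.0])  # shared reward over three options
TRUE_BIAS = 0.5                          # prob. of just taking the default
DEFAULTS = [0, 1]                        # default option in contexts A and B

def choice_probs(reward, bias, default):
    """Mixture of 'follow the default' and Boltzmann-rational choice."""
    logits = reward - reward.max()
    rational = np.exp(logits) / np.exp(logits).sum()
    follow_default = np.zeros(3)
    follow_default[default] = 1.0
    return bias * follow_default + (1.0 - bias) * rational

# Simulate the biased human's choices in both contexts.
counts = np.array([
    np.bincount(
        rng.choice(3, size=1000, p=choice_probs(TRUE_REWARD, TRUE_BIAS, d)),
        minlength=3,
    )
    for d in DEFAULTS
])

def neg_log_lik(reward, bias):
    return -sum(
        (counts[i] * np.log(choice_probs(reward, bias, d))).sum()
        for i, d in enumerate(DEFAULTS)
    )

# Candidate rewards on a coarse grid (option 0 pinned at 0 to fix the offset).
reward_grid = [np.array([0.0, r1, r2])
               for r1, r2 in itertools.product(np.linspace(-1.0, 2.0, 31), repeat=2)]

# (a) Assume the human is noisily rational and unbiased.
reward_no_bias = min(reward_grid, key=lambda r: neg_log_lik(r, 0.0))

# (b) Learn the bias jointly with the reward.
reward_joint, bias_joint = min(
    itertools.product(reward_grid, np.linspace(0.0, 0.9, 10)),
    key=lambda rb: neg_log_lik(*rb),
)

print("true reward:      ", TRUE_REWARD, " true bias:", TRUE_BIAS)
print("assuming no bias: ", reward_no_bias)
print("learning the bias:", reward_joint, " fitted bias:", round(float(bias_joint), 2))

(With only a single context, the bias and reward in this toy would not be separately identifiable, a small echo of why learning biases, rather than assuming them, is hard in general.)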

Link to the paper - On the Feasibility of Learning, Rather than Assuming, Human Biases for Reward Inference: arxiv.org/abs/1906.09624

Link to the transcript: axrp.net/episode/2020/12/11/episode-2-learning-human-biases-rohin-shah.html

The Alignment Newsletter: rohinshah.com/alignment-newsletter

Rohin's contributions to the AI alignment forum: alignmentforum.org/users/rohinmshah

Rohin's website: rohinshah.com


AXRP - the AI X-risk Research Podcast, by Daniel Filan

4.4 (9 ratings)

