80,000 Hours Podcast

If digital minds could suffer, how would we ever know? (Article)



“I want everyone to understand that I am, in fact, a person.” Those words were produced by the AI model LaMDA as a reply to Blake Lemoine in 2022. Based on the Google engineer’s interactions with the model as it was under development, Lemoine became convinced it was sentient and worthy of moral consideration — and decided to tell the world.

Few experts in machine learning, philosophy of mind, or other relevant fields have agreed. And for our part at 80,000 Hours, we don’t think it’s very likely that large language models like LaMDA are sentient in a significant way — that is, we don’t think they can have good or bad experiences.

But we think you can’t dismiss the issue of the moral status of digital minds, regardless of your beliefs about the question. There are major errors we could make in at least two directions:

  • We may create many, many AI systems in the future. If these systems are sentient, or otherwise have moral status, it would be important for humanity to consider their welfare and interests.
  • It’s possible the AI systems we create can’t or won’t have moral status. If so, it could be a huge mistake to worry about the welfare of digital minds, and doing so might even contribute to an AI-related catastrophe.

And we’re currently unprepared to face this challenge. We don’t have good methods for assessing the moral status of AI systems. We don’t know what to do if millions of people or more believe, like Lemoine, that the chatbots they talk to have internal experiences and feelings of their own. We don’t know if efforts to control AI may lead to extreme suffering.

We believe this is a pressing world problem. It’s hard to know what to do about it or how good the opportunities to work on it are likely to be. But there are some promising approaches. We propose building a field of research to understand digital minds, so we’ll be better able to navigate these potentially massive issues if and when they arise.

This article narration by the author (Cody Fenwick) explains in more detail why we think this is a pressing problem, what we think can be done about it, and how you might pursue this work in your career. We also discuss a series of possible objections to thinking this is a pressing world problem.

You can read the full article, Understanding the moral status of digital minds, on the 80,000 Hours website.

Chapters:

  • Introduction (00:00:00)
  • Understanding the moral status of digital minds (00:00:58)
  • Summary (00:03:31)
  • Our overall view (00:04:22)
  • Why might understanding the moral status of digital minds be an especially pressing problem? (00:05:59)
  • Clearing up common misconceptions (00:12:16)
  • Creating digital minds could go very badly - or very well (00:14:13)
  • Dangers for digital minds (00:14:41)
  • Dangers for humans (00:16:13)
  • Other dangers (00:17:42)
  • Things could also go well (00:18:32)
  • We don't know how to assess the moral status of AI systems (00:19:49)
  • There are many possible characteristics that give rise to moral status: Consciousness, sentience, agency, and personhood (00:21:39)
  • Many plausible theories of consciousness could include digital minds (00:24:16)
  • The strongest case for the possibility of sentient digital minds: whole brain emulation (00:28:55)
  • We can't rely on what AI systems tell us about themselves: Behavioural tests, theory-based analysis, animal analogue comparisons, brain-AI interfacing (00:32:00)
  • The scale of this issue might be enormous (00:36:08)
  • Work on this problem is neglected but seems tractable: Impact-guided research, technical approaches, and policy approaches (00:43:35)
  • Summing up so far (00:52:22)
  • Arguments against the moral status of digital minds as a pressing problem (00:53:25)
  • Two key cruxes (00:53:31)
  • Maybe this problem is intractable (00:54:16)
  • Maybe this issue will be solved by default (00:58:19)
  • Isn't risk from AI more important than the risks to AIs? (01:00:45)
  • Maybe current AI progress will stall (01:02:36)
  • Isn't this just too crazy? (01:03:54)
  • What can you do to help? (01:05:10)
  • Important considerations if you work on this problem (01:13:00)

80,000 Hours Podcast, by Rob, Luisa, and the 80,000 Hours team

4.8 (280 ratings)

