This past year, we’ve witnessed considerable progress in the development of artificial intelligence, from the release of image generators like DALL-E 2 to chatbots like ChatGPT and Cicero to a flurry of self-driving cars. So this week, we’re revisiting some of our favorite conversations about the rise of A.I. and what it means for the world.
Brian Christian’s “The Alignment Problem” is the best book on the key technical and moral questions of A.I. that I’ve read. At its center is the term from which the book gets its name. “Alignment problem” originated in economics as a way to describe the fact that the systems and incentives we create often fail to align with our goals. And that’s a central worry with A.I., too: that we will create something to help us that will instead harm us, in part because we didn’t understand how it really worked or what we had actually asked it to do.
So this conversation, originally recorded in June 2021, is about the various alignment problems associated with A.I. We discuss what machine learning is and how it works, how governments and corporations are using it right now, what it has taught us about human learning, the ethics of how humans should treat sentient robots, the all-important question of how A.I. developers plan to make profits, what kinds of regulatory structures are possible when we’re dealing with algorithms we don’t really understand, the way A.I. reflects and then supercharges the inequities that exist in our society, the saddest Super Mario Bros. game I’ve ever heard of, why the problem of automation isn’t so much job loss as dignity loss, and much more.
Mentioned:
“Human-level control through deep reinforcement learning”
“Some Moral and Technical Consequences of Automation” by Norbert Wiener
Recommendations:
"What to Expect When You're Expecting Robots" by Julie Shah and Laura Major
"Finite and Infinite Games" by James P. Carse
"How to Do Nothing" by Jenny Odell
Thoughts? Email us at [email protected]. Guest suggestions? Fill out this form.
You can find transcripts (posted midday) and more episodes of “The Ezra Klein Show” at nytimes.com/ezra-klein-podcast, and you can find Ezra on Twitter @ezraklein. Book recommendations from all our guests are listed at https://www.nytimes.com/article/ezra-klein-show-book-recs.
“The Ezra Klein Show” is produced by Annie Galvin, Jeff Geld and Rogé Karma; fact-checking by Michelle Harris; original music by Isaac Jones; mixing by Jeff Geld; audience strategy by Shannon Busta. Special thanks to Kristin Lin.
Unlock full access to New York Times podcasts and explore everything from politics to pop culture. Subscribe today at nytimes.com/podcasts or on Apple Podcasts and Spotify.
By New York Times Opinion · 4.3 (13,570 ratings)