
Today I’m releasing a conversation with Tristan Harris.
Tristan is the founder of the Center for Humane Technology and one of the leading voices warning about how runaway AI might destabilize society. He starred in (and co-produced) the Netflix documentary The Social Dilemma.
I’ve known Tristan for a long time, and this is one of the best conversations we’ve ever had—public or private. I press him on major risk scenarios, what we can expect AI labs and legislators to do in the face of AGI, and what he thinks can actually be done right now to ensure these systems stay maximally beneficial to humanity.
I left this conversation a bit more hopeful after understanding what kinds of solutions are available. I hope you do too.
In this episode we talk about:
* Creepy new AI capabilities: new models using unwitting humans to send encoded messages to other AIs.
* How it might all go down: a real-world near-term disaster scenario with runaway self-replicating AIs.
* How to bypass race dynamics as China and other powers accelerate AI capabilities.
* Designing systems for wisdom: alternative paths for designing and training Socratic AIs.
A small ask:
The irony of critiquing the algorithms while still depending on them to reach the right audience is not lost on me.
If you do enjoy the show, please share this episode with a friend and drop us a rating.
You can follow us here:
* Apple Podcasts
* YouTube
* Spotify
Thanks for listening, and please do subscribe.
-Tobias
Into The Machine is a reader-supported publication. To receive new posts and support my work, consider becoming a free or paid subscriber.