


Our guest in this episode is Nate Soares, President of the Machine Intelligence Research Institute, or MIRI.
MIRI was founded in 2000 as the Singularity Institute for Artificial Intelligence by Eliezer Yudkowsky, with support from a couple of internet entrepreneurs. Among other things, it ran a series of conferences called the Singularity Summit. In 2012, Peter Diamandis and Ray Kurzweil acquired the Singularity Summit, including the Singularity brand, and the Institute was renamed MIRI.
Nate joined MIRI in 2014 after working as a software engineer at Google, and since then he has been a key figure in the AI safety community. In a blog post at the time he joined MIRI, he observed: “I turn my skills towards saving the universe, because apparently nobody ever got around to teaching me modesty.”
MIRI has long had a fairly pessimistic stance on whether AI alignment is possible. In this episode, we’ll explore what drives that view—and whether there is any room for hope.
Selected follow-ups:
Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration
By London Futurists