On March 22nd, the Future of Life Institute, a nonprofit organization focused on reducing existential risks facing humanity, and in particular existential risk from advanced artificial intelligence (AI), published an open letter entitled "Pause Giant AI Experiments". Its signatories included tech luminaries such as Elon Musk and Apple co-founder Steve Wozniak. Its opening sentences read:
“AI systems with human-competitive intelligence can pose profound risks to society and humanity, as shown by extensive research and acknowledged by top AI labs… Advanced AI could represent a profound change in the history of life on Earth, and should be planned for and managed with commensurate care and resources.”
But given the kind of AI available today, are these concerns justified? Is ChatGPT, for example, really a kind of intelligence? And if so, are governments capable of taming it and channelling its capabilities for the benefit of humanity, rather than its destruction?
John Naughton is a Senior Research Fellow at the Centre for Research in the Arts, Social Sciences and Humanities (CRASSH), University of Cambridge, and Emeritus Professor of the Public Understanding of Technology at the Open University. He is also the technology columnist for the Observer newspaper.
Please leave us a rating and a review on Apple Podcasts.
This podcast is created in partnership with The Philosopher, the UK’s longest-running public philosophy journal. Check out the spring issue of The Philosopher, and its spring online lecture series: https://www.thephilosopher1923.org
Artwork by Nick Halliday
Music by Rowan Mcilvride
By Alexis Papazoglou