


On March 22nd, the Future of Life Institute, a nonprofit organization focused on reducing existential risks facing humanity, in particular existential risk from advanced artificial intelligence (AI), published an open letter entitled Pause Giant AI Experiments. Its signatories included tech luminaries such as Elon Musk and Apple co-founder Steve Wozniak. Its opening sentences read:
“AI systems with human-competitive intelligence can pose profound risks to society and humanity, as shown by extensive research and acknowledged by top AI labs… Advanced AI could represent a profound change in the history of life on Earth, and should be planned for and managed with commensurate care and resources.”
But given the kind of AI available today, are these kinds of concerns justified? Is ChatGPT, for example, really a kind of intelligence? And if so, are governments capable of taming it and channelling its capabilities for the benefit of humanity, rather than its destruction?
John Naughton is a Senior Research Fellow at the Centre for Research in the Arts, Social Sciences and Humanities (CRASSH), University of Cambridge, and Emeritus Professor of the Public Understanding of Technology at the Open University. He is also the technology columnist of the Observer newspaper.
Please leave us a rating and a review on Apple Podcasts.
This podcast is created in partnership with The Philosopher, the UK’s longest-running public philosophy journal. Check out the spring issue of The Philosopher, and its spring online lecture series: https://www.thephilosopher1923.org
Artwork by Nick Halliday
Music by Rowan Mcilvride
By Alexis Papazoglou