On March 22nd, the Future of Life Institute, a nonprofit organization focused on reducing existential risks facing humanity, in particular existential risk from advanced artificial intelligence (AI), published an open letter entitled "Pause Giant AI Experiments". Its signatories included tech luminaries such as Elon Musk and Apple co-founder Steve Wozniak. Its opening sentences read:
“AI systems with human-competitive intelligence can pose profound risks to society and humanity, as shown by extensive research and acknowledged by top AI labs… Advanced AI could represent a profound change in the history of life on Earth, and should be planned for and managed with commensurate care and resources.”
But given the kind of AI available today, are such concerns justified? Is ChatGPT, for example, really a kind of intelligence? And if so, are governments capable of taming it and channelling its capabilities for the benefit of humanity, rather than its destruction?
John Naughton is a Senior Research Fellow at the Centre for Research in the Arts, Social Sciences and Humanities (CRASSH), University of Cambridge, and Emeritus Professor of the Public Understanding of Technology at the Open University. He is also the technology columnist for the Observer newspaper.
Please leave us a rating and a review on Apple Podcasts.
This podcast is created in partnership with The Philosopher, the UK's longest-running public philosophy journal. Check out the spring issue of The Philosopher and its spring online lecture series: https://www.thephilosopher1923.org
Artwork by Nick Halliday
Music by Rowan Mcilvride