

From products like ChatGPT to resource allocation and cancer diagnoses, artificial intelligence will impact nearly every part of our lives. We know the potential benefits of AI are enormous, but so are the risks, including chemical and bioweapons attacks, more effective disinformation campaigns, AI-enabled cyber-attacks, and lethal autonomous weapons systems.
Policymakers have taken steps to address these risks, but industry and civil society leaders are warning that these efforts still fall short.
Last year saw a flurry of efforts to regulate AI. In October, the Biden administration issued an executive order to encourage “responsible” AI development; in November, the U.K. hosted the world’s first global AI Safety Summit to explore how best to mitigate some of the greatest risks facing humanity; and in December, European Union policymakers reached a deal imposing new transparency requirements on AI systems.
Are efforts to regulate AI working? What else needs to be done? That’s the focus of our show today.
It’s clear we are at an inflection point in AI governance, one where innovation is outpacing regulation. But while States face a common problem in regulating AI, approaches differ and prospects for global cooperation appear limited.
There is no better expert to navigate this terrain than Robert Trager, Senior Research Fellow at Oxford University’s Blavatnik School of Government, Co-Director of the Oxford Martin AI Governance Initiative, and International Governance Lead at the Centre for the Governance of AI.
Show Notes:
