
From products like ChatGPT to resource allocation and cancer diagnoses, artificial intelligence will impact nearly every part of our lives. We know the potential benefits of AI are enormous, but so are the risks, including chemical and bioweapons attacks, more effective disinformation campaigns, AI-enabled cyber-attacks, and lethal autonomous weapons systems.
Policymakers have taken steps to address these risks, but industry and civil society leaders are warning that these efforts still fall short.
Last year saw a flurry of efforts to regulate AI. In October, the Biden administration issued an executive order to encourage “responsible” AI development; in November, the U.K. hosted the world’s first global AI Safety Summit to explore how best to mitigate some of the greatest risks facing humanity; and in December, European Union policymakers reached a deal imposing new transparency requirements on AI systems.
Are efforts to regulate AI working? What else needs to be done? That’s the focus of our show today.
It’s clear we are at an inflection point in AI governance, where innovation is outpacing regulation. But while states face a common problem in regulating AI, approaches differ and prospects for global cooperation appear limited.
There is no better expert to navigate this terrain than Robert Trager, Senior Research Fellow at Oxford University’s Blavatnik School of Government, Co-Director of the Oxford Martin AI Governance Initiative, and International Governance Lead at the Centre for the Governance of AI.
Show Notes: