Introduction: some contemporary AI governance context
It’s a confusing time in AI governance. Several countries’ governments recently changed hands. DeepSeek and other technical developments have called into question certain assumptions about the strategic landscape. Political discourse has swung dramatically away from catastrophic risk and toward framings of innovation and national competitiveness.
Meanwhile, the new governments have issued statements of policy, and AI companies (mostly) continue to publish or update their risk evaluation and mitigation approaches. Interpreting these words and actions has become an important art for AI governance practitioners: does the phrase “human flourishing” in the new executive order signal concern about superintelligence, or just that we should focus on AI’s economic and medical potential and not “hand-wring” about safety? How seriously should we take the many references to safety in the UK’s AI Opportunities Action Plan, given the unreserved AI optimism in the announcement? Does Meta’s emphasis on [...]
---
Outline:
(00:05) Introduction: some contemporary AI governance context
(02:36) Biases towards claiming agreement with one's own beliefs
(05:06) Weaker biases towards claiming disagreement with one's own beliefs
(06:21) Conclusion
---
First published:
Source:
Narrated by TYPE III AUDIO.