


Introduction: some contemporary AI governance context
It's a confusing time in AI governance. Several countries’ governments recently changed hands. DeepSeek and other technical developments have called into question certain assumptions about the strategic landscape. Political discourse has swung dramatically away from catastrophic risk and toward framings of innovation and national competitiveness.
Meanwhile, the new governments have issued statements of policy, and AI companies (mostly) continue to publish or update their risk evaluation and mitigation approaches. Interpreting these words and actions has become an important art for AI governance practitioners: does the phrase “human flourishing” in the new executive order signal concern about superintelligence, or just that we should focus on AI's economic and medical potential and not “hand-wring” about safety? How seriously should we take the many references to safety in the UK's AI Opportunities Action Plan, given the unreserved AI optimism in the announcement? Does Meta's emphasis on [...]
---
Outline:
(00:05) Introduction: some contemporary AI governance context
(02:36) Biases towards claiming agreement with one's own beliefs
(05:06) Weaker biases towards claiming disagreement with one's own beliefs
(06:21) Conclusion
---
First published:
Source:
Narrated by TYPE III AUDIO.
By LessWrong
