
The Machine Intelligence Research Institute has recently shifted its focus to "technical governance". But what is that actually, and what are they doing? In this episode, I chat with Peter Barnett about his team's work on studying what evaluations can and cannot do, as well as verifying international agreements on AI development.
Patreon: https://www.patreon.com/axrpodcast
Ko-fi: https://ko-fi.com/axrpodcast
The transcript: https://axrp.net/episode/2024/12/14/episode-38_4-peter-barnett-technical-governance-at-miri.html
FAR.AI: https://far.ai/
FAR.AI on X (aka Twitter): https://x.com/farairesearch
FAR.AI on YouTube: https://www.youtube.com/@FARAIResearch
The Alignment Workshop: https://www.alignment-workshop.com/
Topics we discuss, and timestamps:
01:25 - MIRI's technical governance team
03:43 - Misuse evaluations
06:37 - Misalignment evaluations
11:29 - Verifying international agreements
13:30 - Difficulties in compute monitoring
16:44 - More on MIRI's technical governance team
Links:
MIRI Technical Governance Team: https://techgov.intelligence.org/
What AI evaluations for preventing catastrophic risks can and cannot do: https://techgov.intelligence.org/research/what-ai-evaluations-for-preventing-catastrophic-risks-can-and-cannot-do
Declare and Justify: Explicit assumptions in AI evaluations are necessary for effective regulation: https://arxiv.org/abs/2411.12820
Mechanisms to Verify International Agreements About AI Development: https://techgov.intelligence.org/research/mechanisms-to-verify-international-agreements-about-ai-development
Revisiting algorithmic progress: https://epoch.ai/blog/revisiting-algorithmic-progress
Episode art by Hamish Doodles: hamishdoodles.com
By Daniel Filan
