
In my previous post in this series, I estimated that we have 3 researchers for every advocate working on US AI governance, and I argued that this ratio is backwards. When allocating staff, you almost always want to have more people working on the more central activity. I argued that in the case of AI policy, the central activity is advocacy, not research, because the core problem to be solved is fixing the bad private incentives faced by AI developers.
As I explained, the problem with these incentives is less that they’re poorly understood, and more that they require significant political effort to overturn. As a result, we’ll need to shift significant resources from research (which helps us understand problems better) to advocacy (which helps us change bad incentives).
In this post, I want to explain why it's appropriate for us to shift these resources now, rather than [...]
---
Outline:
(02:24) OUR BEST POLICIES OFFER POSITIVE EXPECTED VALUE
(04:41) We've Already Laid a Philosophical Foundation for AI Governance
(11:54) Regulations Won't Backfire by Pushing Companies Overseas
(13:59) Regulation Is Helpful Even If It's Not Fully Future-Proof
(19:22) Regulations Don't Have to Lead to Oligopoly
(23:14) Regulations Won't Significantly Promote Nationalization
(27:32) ADVOCATING FOR POLICIES NOW MAKES THEM MORE LIKELY TO PASS
(27:37) Advocacy Won't Offend Politicians
(30:59) Advocacy Isn't Useless
(33:37) Political Opposition Can't Be Avoided by Delay
(38:42) Improving Advocacy Skills Requires Practice
(43:12) WE CAN INCREASE THE SUPPLY OF ADVOCATES
(43:47) Generously Fund Existing Advocacy Groups
(47:15) Use Headhunters
(49:14) Aggressively Train New Advocates
(51:37) WE DON'T HAVE THE LUXURY OF WAITING
---
First published:
Source:
---
Narrated by TYPE III AUDIO.