When it comes to AI policy and AI governance, Washington is arguably sending mixed signals. Overregulation is a concern—but so is underregulation. Stakeholders across the political spectrum and business world have a lot of conflicting thoughts. More export controls on AI chips, or fewer. More energy production, but what about the climate? Less liability, or more liability. Safety testing, or not? “Prevent catastrophic risks,” or “don’t focus on unlikely doom scenarios.” While Washington looks unlikely to pass comprehensive AI legislation, states have tried, and failed. In a prior episode, we talked about SB 1047, California’s ill-fated effort. Colorado recently saw its Democratic governor take the unusual step of delaying implementation of a new AI bill in his signing letter, due to concerns it would stifle innovation the state wants to attract.
But are we even asking the right questions? What problem are we trying to solve? Should we be less focused on whether or not AI will make a bioweapon, or more focused on how to make life easier and better for people in a world that looks very different from the one we inhabit today? Is safety versus innovation a distraction, a false binary? Is there a third option, a different way of thinking about how to govern AI? And if today’s governments aren’t fit to regulate AI, is private governance the way forward?
Evan is joined by Andrew Freedman, co-founder and Chief Strategy Officer of Fathom, a nonprofit building solutions society needs to thrive in an AI-driven world. Prior to Fathom, Andrew served as Colorado’s first Director of Marijuana Coordination, often referred to as the state’s “Cannabis Czar.” You can read Fathom’s proposal for AI governance here, and former FAI fellow Dean Ball’s writing on the topic here.