


When it comes to AI regulation, states are moving faster than the federal government. While California is the hub of American AI innovation (Google, OpenAI, Anthropic, and Meta are all headquartered in the Valley), the state is also poised to enact some of the strictest state regulations on frontier AI development.
Introduced on February 8, the Safe and Secure Innovation for Frontier Artificial Intelligence Systems Act (SB 1047) is a sweeping bill that would create a new regulatory division and require companies to demonstrate that their technology won't be used for harmful purposes, such as building a bioweapon or aiding terrorism.
SB 1047 has generated intense debate within the AI community and beyond. Proponents argue that robust oversight and safety requirements are essential to mitigate the catastrophic risks posed by advanced AI systems. Opponents contend that the scope is overbroad and that the compliance burdens and legal risks will advantage incumbent players over smaller and open-source developers.
Evan is joined by Brian Chau, Executive Director of Alliance for the Future, and Dean Ball, a research fellow at the Mercatus Center and author of the Substack Hyperdimensional. You can read Alliance for the Future's call to action on SB 1047 here, and Dean's analysis of the bill here. For a counterargument, check out a piece by AI writer Zvi Mowshowitz here.
By Foundation for American Innovation · 4.8 (1111 ratings)
