
When it comes to AI regulation, states are moving faster than the federal government. While California is the hub of American AI innovation (Google, OpenAI, Anthropic, and Meta are all headquartered in the Valley), the state is also poised to enact some of the strictest state regulations on frontier AI development.
Introduced on February 8, the Safe and Secure Innovation for Frontier Artificial Intelligence Systems Act (SB 1047) is a sweeping bill that would create a new regulatory division and require companies to demonstrate that their technology won't be used for harmful purposes, such as building a bioweapon or aiding terrorism.
SB 1047 has generated intense debate within the AI community and beyond. Proponents argue that robust oversight and safety requirements are essential to mitigate the catastrophic risks posed by advanced AI systems. Opponents contend that the scope is overbroad and that the compliance burdens and legal risks will advantage incumbent players over smaller and open-source developers.
Evan is joined by Brian Chau, Executive Director of Alliance for the Future, and Dean Ball, a research fellow at the Mercatus Center and author of the Substack Hyperdimensional. You can read Alliance for the Future's call to action on SB 1047 here, and Dean's analysis of the bill here. For a counterargument, check out a piece by AI writer Zvi Mowshowitz here.
By Foundation for American Innovation