


Steven Adler, former OpenAI safety researcher, author of Clear-Eyed AI on Substack, and independent AGI-readiness researcher, joins Kevin Frazier, AI Innovation and Law Fellow at the University of Texas School of Law, to assess the current state of AI testing and evaluations. The two walk through Steven’s views on industry efforts to improve model testing and what he thinks regulators ought to know and do when it comes to preventing AI harms.
You can read Steven’s Substack here: https://stevenadler.substack.com/
Thanks to Leo Wu for research assistance!
Hosted on Acast. See acast.com/privacy for more information.
By Lawfare & University of Texas Law School
