Steven Adler, former OpenAI safety researcher, author of Clear-Eyed AI on Substack, and independent AGI-readiness researcher, joins Kevin Frazier, AI Innovation and Law Fellow at the University of Texas School of Law, to assess the current state of AI testing and evaluations. The two walk through Steven’s views on industry efforts to improve model testing and what he thinks regulators ought to know and do when it comes to preventing AI harms.
You can read Steven’s Substack here: https://stevenadler.substack.com/
Thanks to Leo Wu for research assistance!
Hosted on Acast. See acast.com/privacy for more information.