


In this episode of Possible, Reid Hoffman and Aria Finger explore how AI is colliding with real-world regulation, responsibility, and even civility. As states like Utah, California, and Illinois roll out new AI laws governing everything from chatbot disclosure to bans on AI-driven therapy, Reid breaks down the logic driving these policies. The conversation dives into OpenAI’s self-imposed limits on medical, legal, and financial advice, the challenge of providing access while managing liability, and why safe harbor laws could unlock life-saving potential for AI. From there, the discussion zooms out to the global stage, where China is pushing for an international AI governance body and the U.S. risks losing moral and technical leadership. Finally, Aria and Reid end on a human behavior note: a study showing AI performs better when users are rude. What does that say about how we train these models? And, more importantly, what does it reveal about us? From transparency to civility, what kind of intelligence do we really want to build?
For more info on the podcast and transcripts of all the episodes, visit https://www.possible.fm/podcast/
By Reid Hoffman
