
In this episode of Possible, Reid Hoffman and Aria Finger explore how AI is colliding with real-world regulation, responsibility, and even civility. As states like Utah, California, and Illinois roll out new AI laws governing everything from chatbot disclosure to bans on AI-driven therapy, Reid breaks down the logic that’s driving these policies. The conversation dives into OpenAI’s self-imposed limits on medical, legal, and financial advice, the challenge of providing access while managing liability, and why safe harbor laws could unlock life-saving potential for AI. From there, the discussion zooms out to the global stage, where China is pushing for an international AI governance body and the U.S. risks losing moral and technical leadership. Finally, Aria and Reid end on a human behavior note: a study showing AI performs better when users are rude. What does that say about how we train these models? And, more to the point, what does it reveal about us? From transparency to civility, what kind of intelligence do we really want to build?
For more info on the podcast and transcripts of all the episodes, visit https://www.possible.fm/podcast/
By Reid Hoffman
4.5 • 115 ratings