
Building safe and capable models is one of the greatest challenges of our time. Can we make AI work for everyone? How do we prevent existential threats? Why is alignment so important? Join Professor Hannah Fry as she delves into these critical questions with Anca Dragan, lead for AI safety and alignment at Google DeepMind.
For further reading, search "Introducing the Frontier Safety Framework" and "Evaluating Frontier Models for Dangerous Capabilities".
Thanks to everyone who made this possible, including but not limited to:
Please leave us a review on Spotify or Apple Podcasts if you enjoyed this episode. We always want to hear from our audience, whether that's feedback, a new idea, or a guest recommendation!
Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.
By Hannah Fry · 4.8 (191 ratings)
