Building safe and capable models is one of the greatest challenges of our time. Can we make AI work for everyone? How do we prevent existential threats? Why is alignment so important? Join Professor Hannah Fry as she delves into these critical questions with Anca Dragan, lead for AI safety and alignment at Google DeepMind.
For further reading, search "Introducing the Frontier Safety Framework" and "Evaluating Frontier Models for Dangerous Capabilities".
Thanks to everyone who made this possible, including but not limited to:
Please like and subscribe on your preferred podcast platform. Want to share feedback, or have a suggestion for a guest we should feature next? Leave us a comment on YouTube and stay tuned for future episodes.
4.9 (182 ratings)