
This week on No Priors, Sarah is joined by Dan Hendrycks, director of the Center for AI Safety. Dan serves as an advisor to xAI and Scale AI. He is a longtime AI researcher, creator of notable AI evals such as "Humanity's Last Exam," and co-author of "Superintelligence Strategy," a new paper on national security, written with Scale founder and CEO Alex Wang and former Google CEO Eric Schmidt. They explore AI safety, geopolitical implications, the potential weaponization of AI, and policy recommendations.
Sign up for new podcasts every week. Email feedback to [email protected]
Follow us on Twitter: @NoPriorsPod | @Saranormous | @EladGil | @DanHendrycks
Show Notes:
0:00 Introduction
0:36 Dan’s path to focusing on AI Safety
1:25 Safety efforts in large labs
3:12 Distinguishing alignment and safety
4:48 AI’s impact on national security
9:59 How might AI be weaponized?
14:43 Immigration policies for AI talent
17:50 Mutually assured AI malfunction
22:54 Policy suggestions for current administration
25:34 Compute security
30:37 Current state of evals