
(Overtime segment available to paid subscribers below the paywall.)
0:00 Sam’s Bloggingheads bona fides
2:39 Is AI “interpretability” worth the trouble?
13:38 Natural selection and artificial intelligence
19:47 Are large language models absorbing human nature?
29:53 What Sam thinks singularitarian AI doomers get wrong
36:36 Is it time for an AI-safety Manhattan Project?
51:12 Sam: Let a thousand GPT-4 (but not GPT-5) plug-ins bloom.
57:09 Where are our AI safety blindspots?
Robert Wright (Bloggingheads.tv, The Evolution of God, Nonzero, Why Buddhism Is True) and Samuel Hammond (Foundation for American Innovation, Second Best). Recorded May 30, 2023.
Comments on BhTV: http://bloggingheads.tv/videos/66231 Twitter: https://twitter.com/NonzeroPods
4.6 (572 ratings)