


(Overtime segment available to paid subscribers below the paywall.)
0:00 Sam’s Bloggingheads bona fides
2:39 Is AI “interpretability” worth the trouble?
13:38 Natural selection and artificial intelligence
19:47 Are large language models absorbing human nature?
29:53 What Sam thinks singularitarian AI doomers get wrong
36:36 Is it time for an AI-safety Manhattan Project?
51:12 Sam: Let a thousand GPT-4 (but not GPT-5) plug-ins bloom.
57:09 Where are our AI safety blindspots?
Robert Wright (Bloggingheads.tv, The Evolution of God, Nonzero, Why Buddhism Is True) and Samuel Hammond (Foundation for American Innovation, Second Best). Recorded May 30, 2023.
Comments on BhTV: http://bloggingheads.tv/videos/66231 Twitter: https://twitter.com/NonzeroPods
By Nonzero — 4.6 (584 ratings)
