


(Overtime segment available to paid subscribers below the paywall.)
0:00 Sam’s Bloggingheads bona fides
2:39 Is AI “interpretability” worth the trouble?
13:38 Natural selection and artificial intelligence
19:47 Are large language models absorbing human nature?
29:53 What Sam thinks singularitarian AI doomers get wrong
36:36 Is it time for an AI-safety Manhattan Project?
51:12 Sam: Let a thousand GPT-4 (but not GPT-5) plug-ins bloom.
57:09 Where are our AI safety blindspots?
Robert Wright (Bloggingheads.tv, The Evolution of God, Nonzero, Why Buddhism Is True) and Samuel Hammond (Foundation for American Innovation, Second Best). Recorded May 30, 2023.
Comments on BhTV: http://bloggingheads.tv/videos/66231 Twitter: https://twitter.com/NonzeroPods
By Nonzero · 4.6 (584 ratings)
