


(Overtime segment available to paid subscribers below the paywall.)
0:00 Sam’s Bloggingheads bona fides
2:39 Is AI “interpretability” worth the trouble?
13:38 Natural selection and artificial intelligence
19:47 Are large language models absorbing human nature?
29:53 What Sam thinks singularitarian AI doomers get wrong
36:36 Is it time for an AI-safety Manhattan Project?
51:12 Sam: Let a thousand GPT-4 (but not GPT-5) plug-ins bloom.
57:09 Where are our AI safety blindspots?
Robert Wright (Bloggingheads.tv, The Evolution of God, Nonzero, Why Buddhism Is True) and Samuel Hammond (Foundation for American Innovation, Second Best). Recorded May 30, 2023.
Comments on BhTV: http://bloggingheads.tv/videos/66231 Twitter: https://twitter.com/NonzeroPods
By Nonzero · 4.6 (588 ratings)