Very imperfect transcript: bit.ly/3QhFgEJ
Summary from Clong:
- The discussion centers on the concept of a unitary general intelligence or cognitive ability, and whether it exists as a real and distinct thing.
- Nathan argues against it, citing evidence from cognitive science that brain functions are highly specialized and localized and can be damaged independently; losing linguistic ability, for instance, does not impair spatial reasoning.
- He also cites evidence from AI, such as systems that excel at specific tasks without general competency, and tasks that are easy for AI but hard for humans. This suggests human cognition isn’t defined by some unitary general ability.
- Aaron is more open to the idea, appealing to an intuitive sense of a qualitative difference between human and animal cognition, such as the ability to apply symbolic reasoning in new domains. But he acknowledges the concept is fuzzy.
- They discuss whether language necessitates this general ability in humans or is merely associated with it. Nathan leans toward specialized language modules in the brain.
- They debate whether strong future AI systems could learn complex motor skills just from textual descriptions, without analogous motor control data. Nathan is highly skeptical.
- Aaron makes an analogy to the universe arising from simple physical laws. Nathan finds this irrelevant to the debate.
- Overall, Nathan seems to push Aaron toward a more skeptical view of a unitary general cognitive ability as a scientifically coherent concept, but Aaron retains some sympathy for related intuitions about human versus animal cognition.