Much of the discussion around AI safety is motivated by concerns about existential risk: the idea that autonomous systems will grow smarter than humans and go on to eradicate our species, either deliberately or as an unintended consequence.
The founders of the AI safety movement took these possibilities seriously when many people still brushed them off as science fiction. Nick Bostrom's 2014 book Superintelligence, for example, explored risks and opportunities humanity might face after developing AI systems with cognitive capabilities drastically more powerful than our own.
His work built on even earlier scholarship from Stephen Omohundro, Stuart Russell, Eliezer Yudkowsky, and others whose foundational ideas were published during an era when the most advanced machine learning algorithms did things like rank search results.
These classic arguments still underlie much of the conversation about AI risk.
As forward-thinking as they were, many important details of these arguments are now behind [...]
---
Outline:
(01:45) The Classic Arguments for Existential AI Risk
(04:37) Flaws in the Classic Arguments
(08:42) New Foundations of AI Existential Risk
---