
When, exactly, should we consider humanity to have properly "lost the game", with respect to agentic AI systems?
The most common AI milestone concepts seem to be "artificial general intelligence", followed closely by "superintelligence". Sometimes people talk about "transformative AI", "high-level machine intelligence", or "full automation of the labor force." None of these are well-suited for pointing specifically at the capabilities that would spell a "point of no return" for humanity. In fact, they're all designed to be agnostic to exactly which capabilities will matter.
When working to predict and mitigate existential risks from AI agents, we should try to be as clear as possible about which capabilities we're concerned about. As a result, I think we should focus on "strategically superhuman AI agents": AI agents that are better than the best groups of humans at real-world strategic action.
Skill at real-world strategic action is context-dependent, and isn't a [...]
---
Outline:
(02:38) Low-effort FAQ
(02:42) What's the point here? Does anything interesting follow from this?
(03:51) Isn't this just as vague as other milestones?
(04:07) Won't this happen as soon as we get [AGI, recursive self-improvement, ...]?
(05:08) Are you just trying to say "powerful AI"? That's too obvious to even mention.
---
Narrated by TYPE III AUDIO.