It's quite possible someone has already argued this, but I thought I should share just in case not.
Goal-Optimisers and Planner-Simulators
When people in the past discussed worries about AI development, this was often about AI agents - AIs that had goals they were attempting to achieve, objective functions they were trying to maximise. At the beginning we would make fairly low-intelligence agents, which were not very good at achieving things, and then over time we would make them more and more intelligent. At some point, around human level, they would start to take off, because humans are approximately intelligent enough to self-improve, and this would be much easier in silicon.
This does not seem to be exactly how things have turned out. We have AIs that are much better than humans at many things, such that if a human had these skills we would think they were extremely capable. And [...]
---
Outline:
(00:10) Goal-Optimisers and Planner-Simulators
(01:17) What is the significance for existential risk?
(02:43) How could we react to this?
---
Narrated by TYPE III AUDIO.