I notice that there has been very little, if any, discussion of why and how considering homeostasis is significant, even essential, for AI alignment and safety. The current post aims to begin amending that situation. In this post I will treat alignment and safety as explicitly separate subjects, both of which benefit from homeostatic approaches.
This text is a distillation and reorganisation of three of my older blog posts on Medium:
I will probably share more such distillations or weaves of my old writings in the future.
Introduction
Much of AI safety discussion revolves around the potential dangers posed by goal-driven artificial agents. In many of these discussions, the [...]
---
Outline:
(01:09) Introduction
(02:53) Why Utility Maximisation Is Insufficient
(04:20) Homeostasis as a More Correct and Safer Goal Architecture
(04:25) 1. Multiple Conjunctive Objectives
(05:23) 2. Task-Based Agents or Taskishness -- Do the Deed and Cool Down
(06:22) 3. Bounded Stakes: Reduced Incentive for Extremes
(06:49) 4. Natural Corrigibility and Interruptibility
(08:12) Diminishing Returns and the Golden Middle Way
(09:27) Formalising Homeostatic Goals
(11:32) Parallels with Other Ideas in Computer Science
(13:46) Open Challenges and Future Directions
(18:51) Addendum about Unbounded Objectives
(20:23) Conclusion
---
First published:
Source:
Narrated by TYPE III AUDIO.