The Nonlinear Library: Alignment Forum

AF - 4. A Moral Case for Evolved-Sapience-Chauvinism by Roger Dearnaley


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: 4. A Moral Case for Evolved-Sapience-Chauvinism, published by Roger Dearnaley on November 24, 2023 on The AI Alignment Forum.
Part 4 of AI, Alignment, and Ethics. This will probably make more sense if you start with Part 1.
A society can use any criteria it likes for membership. The human moral intuition for fairness only really extends to "members of the same primate troupe as me". Modern high-tech societies have economic and stability incentives to extend this to every human in the entire global economic system, and the obvious limiting case for that is the entire planet, or, if we go interplanetary, the entire solar system.
However, there is a concern that obviously arbitrary rules for membership of the society might not be stable under challenges like self-reflection or self-improvement by an advanced AI. One might like to think that if someone attempted to instill in all the AIs the rule that the criterion for being a citizen with rights was, say, descent from William Rockefeller Sr. (and hadn't actually installed this as part of a terminal goal, just injected it into the AI's human value learning process with a high prior), sooner or later a sufficiently smart AI would tell them "that's extremely convenient for your ruling dynasty, but it doesn't fit the rest of human values, or history, or biology."
So it would be nice to have a criterion that makes some logical sense. Not necessarily a "True Name" of citizenship, but at least a solid rationally-defensible position with as little wiggle room as possible.
I'd like to propose what I think is one: "an intelligent agent should be assigned moral worth if it is (or primarily is, or is a functionally-equivalent very-high-accuracy emulation of) a member of a sapient species whose drives were produced by natural selection. (This moral worth may vary if its drives or capabilities have been significantly modified, details TBD.)"
The argument defending this is as follows:
Living organisms have homeostasis mechanisms: they seek to maintain aspects of their bodies and environment in certain states, even when (as is often the case) those are not thermodynamic equilibria. Unlike something weakly agentic like a thermostat, they are self-propagating complex dynamic systems, and natural selection ensures that the equilibria they maintain are ones important to that process: they're not arbitrary, easily modified, or externally imposed, like those for a thermostat.
If you disturb any of these equilibria they suffer, and if you disturb them too much, they die. ("Suffer" and "die" here should be regarded as technical terms in Biology, not as moral terms.) Living things have a lot of interesting properties (which is why Biology is a separate scientific field): for example, they're complex, self-sustaining, dynamic processes that use evolutionary design algorithms. Also, humans generally think they're neat (at least unless the organism is prone to causing them suffering).
'Sapient' is doing a lot of work in that definition, and it's not currently a scientifically well-defined term. A short version of the definition that I mean here might be "having the same important social/technological properties that on Earth are currently unique to Homo sapiens, but are not inherently unique".
A more detailed definition would be "a species with the potential capability to transmit a lot more information from one generation to the next by cultural means than just by genetic means". This is basically the necessary requirement for a species to become technological. A species that hasn't yet developed technology, but has this capability, still deserves moral worth. For comparison, we've tried teaching human (sign) languages to chimps, gorillas, and even dogs, and while they're not that bad at this, they clearly lack the level of mental/linguistic/social...