The Nonlinear Library: Alignment Forum

AF - 5. Moral Value for Sentient Animals? Alas, Not Yet by Roger Dearnaley



Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: 5. Moral Value for Sentient Animals? Alas, Not Yet, published by Roger Dearnaley on December 27, 2023 on The AI Alignment Forum.
Part 5 of AI, Alignment, and Ethics. This will probably make more sense if you start with Part 1.
TL;DR In Parts 1 through 3 I discussed principles for ethical-system design and their consequences for AIs and uploads, and in Part 4 I discussed a principled way for us to grant moral value/weight to a larger set than just biological humans: all evolved sapient beings (other than those of types with which we cannot form a cooperative alliance).
The history of liberal thought so far has been a progressive expansion of the set of beings accorded moral value (starting from just the set of privileged male landowners who had their own military forces).
So, how about animals? Could we expand our moral set to all evolved sentient beings, now that we have textured vegetable protein? I explore some of the many consequences if we tried this, and show that it seems to be incredibly hard to construct and implement such a moral system in a way that doesn't lead to human extinction, mass extinctions of animal species, or ecological collapses, or else to very ludicrous outcomes.
Getting anything close to a good outcome clearly requires at least astonishingly complex layers of carefully-tuned fudge factors baked into your ethical system, and also extremely advanced technology. A superintelligence with access to highly advanced nanotechnology and genetic engineering might be able to construct and even implement such a system, but short of that technological level, it's sadly impractical. So I regretfully fall back on the long-standing solution of granting animals second-hand moral worth derived from humans, especially large, photogenic, cute, fluffy animals (or at least ones visible to the naked eye), because humans care about their well-being.
[About a decade ago, I spent a good fraction of a year trying to construct an ethical system along these lines, before coming to the sad conclusion that it was basically impossible. I skipped over explaining this when writing Part 4, assuming that the fact that this approach is unworkable was obvious, or at least uninteresting.
A recent conversation has made it clear to me that this is not obvious, and furthermore that not understanding it is both an x-risk and common among current academic moral philosophers - thus I am adding this post. Consider it a write-up of a negative result in ethical-system design.
This post follows on logically from Part 4, so it is numbered Part 5, but it was written after Part 6 (which was originally numbered Part 5 before I inserted this post into the sequence).]
Sentient Rights?
'sentient': able to perceive or feel things - Oxford Languages
The word 'sentient' is rather a slippery one. Beyond being "able to perceive or feel things", the frequently-mentioned specific of being "able to feel pain or distress" also seems rather relevant, especially in a moral setting. Humans, mammals, and birds are all clearly sentient under this definition, and also in the common usage of the word. Few people would try to claim that insects weren't: bees, ants, even dust mites. How about flatworms? Water fleas? C. elegans?
Obviously, if we're going to use this concept as part of the definition of an ethical system that we're designing, we're going to need to pick a clear definition.
For now, let's try to make this as easy as we can on ourselves and pick a logical and fairly restrictive definition: to be 'sentient' for our purposes, an organism needs to a) be a multicellular animal (a metazoan), with b) an identifiable nervous system containing multiple neurons, and c) use this nervous system in a manner that at least suggests that it has senses and acts on these in ways evolved to help ensure its survival or genetic fitness (as one would exp...