Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: 3. Uploading, published by Roger Dearnaley on November 23, 2023 on The AI Alignment Forum.
Part 3 of AI, Alignment, and Ethics. This will probably make more sense if you start with Part 1.
One Upload, One Vote
Let us suppose that, sooner or later, we have some means of doing whole-brain uploading and simulation. Possibly the reading process is destructive, and produces about a liter of greyish-pink goo as a by-product, or possibly not.
Or perhaps it's more of an emulation approach, one based on gigabytes, terabytes, or petabytes of data about an individual rather than on actually simulating every synapse and neurotransmitter flow, such that you end up with a sufficiently accurate emulation of their behavior.
Bearing in mind the Bitter Lesson, this might even be based on something like a conventional transformer architecture that doesn't look at all like the inner complexities of the human brain, but which has been trained to emulate it in great detail (possibly pre-trained on humans in general, then fine-tuned on a specific individual), perhaps even down to some correlates of detailed neural firing patterns. The technical details don't really matter much.
I am not in fact a carbon-chauvinist. These uploads are person-like: they are intelligent, agentic systems; they have goals; they can even talk; and, importantly, their goals and desires are exactly what you'd expect of a human being.
They are a high-fidelity copy of a human, they will have all the same desires and drives as a human, and they will get upset if they're treated as slaves or second-class citizens, regardless of how much carbon there may or may not be in the computational substrate they're running on. Just like you or I would (or indeed, as likely would any member of pretty much any sapient species evolved via natural selection).
If someone knew in advance that uploading themself meant becoming a slave or a second-class citizen, they presumably wouldn't do it, except perhaps as the only way to cheat death. They'd also campaign, while they were still alive, for upload rights. So we need to either literally or effectively forbid uploading, or else we need to give uploads human rights, as close as we reasonably can.
Unlike the situation for AIs, there is a very simple human-fairness-instinct-compatible solution for how to count uploads in an ethical system. They may or may not have a body now, but they did once. So that's what gets counted: the original biological individual, back when they were an individual. Then, if you destructively upload yourself, your upload inherits your vote and your human rights, is counted once in utility summations, and so forth.
If your upload then duplicates themself, backs themself up, or whatever, there's still only one vote/one set of human rights/one unit of moral worth to go around between the copies, and they or we need some rules for how to split or assign it. Or, if you non-destructively upload yourself, you still only have one vote/set of human rights/etc., and it's now somehow split or assigned between the biological original of you, still running on your biological brain, and the uploaded copy of you, or even multiple copies of your upload.
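As a concrete illustration, here's a minimal sketch in Python of one way this accounting could work, assuming the simplest possible policy: weight splits evenly on each duplication. (The Upload class and its fork method are just illustrative names I'm using here, not a proposal for an actual implementation.)

```python
# Minimal sketch: moral weight attaches to the original biological
# individual, and each duplication splits that weight among the copies.
# All names here are illustrative, not a real proposal.
from dataclasses import dataclass

@dataclass
class Upload:
    original: str        # the original biological individual being counted
    weight: float = 1.0  # this copy's share of that individual's one vote

    def fork(self, n: int = 2) -> list["Upload"]:
        """Duplicate into n copies, splitting this copy's weight evenly."""
        return [Upload(self.original, self.weight / n) for _ in range(n)]

# Alice destructively uploads: her upload inherits her single vote.
alice = Upload("Alice")
# The upload then makes three copies of itself...
copies = alice.fork(3)
# ...but the total moral weight in circulation is still exactly one vote.
assert abs(sum(c.weight for c in copies) - 1.0) < 1e-9
```

The key invariant is that, however many copies exist, the weights tracing back to any one original always sum to one; other splitting policies (unequal shares, copies negotiating the division) would preserve the same invariant.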
With this additional rule, the necessary conditions for the human fairness instinct to make sense are both still obeyed in the presence of uploads: they care about the same good and bad things as us, and via this rule they can be counted. So that's really the only good moral solution that fits the human sense of fairness.
OK, so we give uploads votes/human rights/moral worth. What could go wrong?
I have seen people on Less Wrong assume that humans must automatically be aligned with human values - I can only imagine on the basis that "they have human values, so they must be aligned to them." This is flat-out, dangerously false. Please...