The Nonlinear Library: Alignment Forum

AF - 0th Person and 1st Person Logic by Adele Lopez



Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: 0th Person and 1st Person Logic, published by Adele Lopez on March 10, 2024 on The AI Alignment Forum.
Truth values in classical logic have more than one interpretation.
In 0th Person Logic, the truth values are interpreted as True and False.
In 1st Person Logic, the truth values are interpreted as Here and Absent relative to the current reasoner.
Importantly, these are both useful modes of reasoning that can coexist in a logical embedded agent.
This idea is so simple, and has brought me so much clarity, that I cannot see how an adequate formal theory of anthropics could avoid it!
Crash Course in Semantics
First, let's make sure we understand how to connect logic with meaning. Consider classical propositional logic. We set this up formally by defining terms, connectives, and rules for manipulation. Let's consider one of these terms: A. What does this mean? Well, its meaning is not specified yet!
So how do we make it mean something? Of course, we could just say something like "A represents the statement that 'a ball is red'". But that's a little unsatisfying, isn't it? We're just passing all the hard work of meaning to English.
So let's imagine that we have to convey the meaning of A without using words. We might draw pictures in which a ball is red, and pictures in which there is not a red ball, and say that only the former are A. To be completely unambiguous, we would need to consider all the possible pictures, and point out which subset of them are A. For formalization purposes, we will say that this set is the meaning of A.
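This "meaning as a set of pictures" idea can be sketched in a few lines of code. The encoding below (worlds as sets of facts, and the particular facts chosen) is a hypothetical illustration, not something from the post:

```python
# Each "picture" (possible world) is modeled as a frozenset of facts
# that hold in it. These example worlds are invented for illustration.
worlds = [
    frozenset({"ball_is_red"}),
    frozenset({"ball_is_blue"}),
    frozenset({"ball_is_red", "cube_is_green"}),
]

# The meaning of A is the subset of all pictures in which a ball is red.
meaning_A = {w for w in worlds if "ball_is_red" in w}

print(len(meaning_A))  # 2 of the 3 pictures are A
```

Pointing out this subset plays the role of drawing the pictures and saying "only these are A".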
There's much more that can be said about semantics (see, for example, the Highly Advanced Epistemology 101 for Beginners sequence), but this will suffice as a starting point for us.
0th Person Logic
Normally, we think of the meaning of A as independent of any observers. Sure, we're the ones defining and using it, but it's something everyone can agree on once the meaning has been established. Due to this independence from observers, I've termed this way of doing things 0th Person Logic (or 0P-logic).
The elements of a meaning set I'll call worlds in this case, since each element represents a particular specification of everything in the model. For example, say that we're only considering states of tiles on a 2x2 grid. Then we could represent each world simply by taking a snapshot of the grid.
From logic, we also have two judgments. A is judged True for a world iff that world is in the meaning of A, and False if not. This judgment does not depend on who is observing it; all logical reasoners in the same world will agree.
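The 2x2-grid example and the observer-independent truth judgment can be sketched as follows. The tile colors and the particular proposition are hypothetical choices made for illustration:

```python
from itertools import product

# A world is a snapshot of the 2x2 grid: one color per tile,
# listed in reading order (top-left, top-right, bottom-left, bottom-right).
TILES = ("black", "white")
worlds = set(product(TILES, repeat=4))  # all 16 possible snapshots

# The meaning of a proposition is the set of worlds it picks out.
# Example proposition: "the top-left tile is black".
meaning = frozenset(w for w in worlds if w[0] == "black")

def judge(world, meaning):
    """True for a world iff that world is in the meaning -- no observer involved."""
    return world in meaning

w = ("black", "white", "white", "black")
print(judge(w, meaning))  # True: every reasoner in this world agrees
```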
1st Person Logic
Now let's consider an observer using logical reasoning. For metaphysical clarity, let's have it be a simple, hand-coded robot. Fix a set of possible worlds, assign meanings to various symbols, and give it the ability to make, manipulate, and judge propositions built from these.
Let's give our robot a sensor, one that detects red light. At first glance, this seems completely unproblematic within the framework of 0P-logic.
But consider a world in which there are three robots with red light sensors. How do we give A the intuitive meaning of "my sensor sees red"? The obvious thing to try is to look at all the possible worlds, and pick out the ones where the robot's sensor detects red light. There are three different ways to do this, one for each instance of the robot.
That's not a problem if our robot knows which robot it is. But without sensory information, the robot doesn't have any way to know which one it is! There may be robots whose sensors see red and robots whose sensors do not, and nothing in 0P-logic can resolve this ambiguity for the robot, because the ambiguity remains even if the robot has pinpointed the exact world it's in!
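The three-robot ambiguity can be made concrete in code. The encoding below (a world as a record of each robot's sensor reading) is a hypothetical sketch, not the post's formalism:

```python
# One fully specified world: for each of the three robots,
# whether its red-light sensor fires. (Invented example values.)
world = {"robot_0": True, "robot_1": False, "robot_2": True}

# 0P-logic offers three candidate readings of "my sensor sees red",
# one per instance of the robot:
candidates = {i: world[f"robot_{i}"] for i in range(3)}

# Even with the exact world pinned down, the truth value still depends
# on an index the robot does not possess: which robot "I" am.
print(candidates)  # {0: True, 1: False, 2: True}
```

The world is fixed, yet the three candidate meanings disagree, which is exactly the ambiguity that no amount of 0P-logical reasoning can settle.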
So statements like "my sensor sees red" aren't actually picking out subsets of worlds.
The Nonlinear Library: Alignment Forum, by The Nonlinear Fund

