Gemini modeling, by Tsvi Benson-Tilsen. Published January 22, 2023 on The AI Alignment Forum.
[Metadata: crossposted from. Written 17 June 2022. I'm likely to not respond to comments promptly.]
A gemini model is a kind of model that's especially relevant for minds modeling minds.
Two scenarios
You stand before a tree. How big is it? How does it grow? What can it be used to make? Where will it fall if you cut it here or there?
Alice usually eats Cheerios in the morning. Today she comes downstairs, but doesn't get out a bowl and spoon and milk, and doesn't go over to the Cheerios cupboard. Then she sees green bananas on the counter. Then she goes and gets a bowl and spoon and milk, and gets a box of Cheerios from the cupboard. What happened?
We have some kind of mental model of the tree, and some kind of mental model of Alice. In the Cheerios scenario, we model Alice by calling on ourselves, asking how we would behave; we find that we'd behave like Alice if we liked Cheerios, and believed that today there weren't Cheerios in the cupboard, but then saw the green bananas and inferred that Bob had gone to the grocery store, and inferred that actually there were Cheerios. This seems different from how we model the tree; we're not putting ourselves in the tree's shoes.
Gemini modeling and empathic modeling
What's the difference though, really, between these two ways of modeling? Clearly Alice is like us in a way the tree isn't, and we're using that somehow; we're modeling Alice using empathy ("in-feeling"). This essay describes another related difference:
We model Alice's belief in a proposition by having in ourselves another instance of that proposition.
(Or: by having in ourselves the same proposition, or a grasping of that proposition.) We don't model the tree by having another instance of part of the tree in us. I call modeling some thing by having inside oneself another instance of the thing--having a twin of it--"gemini modeling".
Gemini modeling is different from empathic modeling. Empathic modeling is tuning yourself to be like another agent in some respects, so that their behavior is explainable as what [you in your current tuning] would do. This is a sort of twinning, broadly, but you're far from identical to the agent you're modeling; you might make different tradeoffs, have different sense acuity, have different concepts, believe different propositions, have different skills, and so on; you tune yourself enough that those differences don't intrude on your predictions. Whereas, the proposition "There are Cheerios in the cupboard.", with its grammatical structure and its immediate implications for thought and action, can be roughly identical between you and Alice.
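To make the contrast concrete, here is a minimal toy sketch (in Python, with hypothetical names of my own choosing; it is an illustration under my framing, not anything from the essay): the modeler's grasp of Alice's belief is another instance of the very same structured proposition, a twin, rather than a separately tuned representation.

```python
# Toy sketch of gemini modeling. All class and function names here are
# hypothetical illustrations, not terms from the essay.
from dataclasses import dataclass

@dataclass(frozen=True)
class Proposition:
    """A structured claim, e.g. 'There are Cheerios in the cupboard.'"""
    subject: str
    predicate: str

@dataclass
class Mind:
    name: str

    def grasp(self, prop: Proposition) -> Proposition:
        # Gemini modeling: the grasped proposition is another instance of
        # the very proposition Alice believes -- same grammatical structure,
        # same immediate implications for thought and action.
        return Proposition(prop.subject, prop.predicate)

cheerios = Proposition("Cheerios", "are in the cupboard")
my_model_of_alices_belief = Mind("me").grasp(cheerios)

# The two instances are twins: structurally identical, so equality holds.
assert my_model_of_alices_belief == cheerios
```

Nothing analogous holds for the tree: a model of the tree's dynamics is not itself another instance of any part of the tree.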
As done by humans modeling humans, empathic modeling may or may not involve gemini modeling: we model Alice by seeing how we'd act if we believed certain propositions, and those propositions are gemini modeled; on the other hand, we could do an impression of a silly friend by making ourselves "more silly", which is maybe empathic modeling without gemini modeling. And, gemini modeling done by humans modeling humans involves empathic modeling: to see the implications of believing in a proposition or caring about something, we access our (whole? partial?) agentic selves, our agency.
Gemini modeling vs. general modeling
In some sense we make a small part of ourselves "like a tree" when we model a tree falling: our mental model of the tree supports [modeled forces] having [modeled effects] with resulting [modeled dynamics], analogous to how the actual tree moves when under actual forces. So what's different between gemini modeling and any other modeling? When modeling a tree, or a rock, or anything, don't we have a little copy or representation of some aspects of the thing in us? Isn't that like having a sort of twin of the thing in us?