The Nonlinear Library

LW - Boundary Placement Rebellion by tailcalled



Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Boundary Placement Rebellion, published by tailcalled on July 20, 2023 on LessWrong.
The idea for this post started because I was confused about the concept of "narcissism". I learned about "narcissism" from reading differential psychology, where it is often measured by asking people whether they agree with statements such as:
I have a natural talent for influencing people
Modesty doesn't become me
I think I am a special person
I really like to be the center of attention
In practice, this empirically correlates with being an assertive, confident person. That didn't match the discourse about narcissism, which typically seemed to be more about domestic abuse or people's ideologies. It also doesn't, AFAIK, match the way "narcissism" gets used clinically, since clinical narcissists don't score higher than average on the psychometric Narcissism scales used in personality psychology.
Eventually, something clicked about what people were saying about narcissism. They were talking about a dynamic that occurs when someone feels morally entitled to violate a boundary. Such a person can exhibit signs of narcissism because they become focused on enforcing their own placement of the boundary, and put a lot of effort into gaining power to keep violating it.
Still, since I'm not sure whether what I'm talking about is actually "narcissism", I'm going to use a new, more descriptive term to refer to my concept: Boundary Placement Rebellion.
Boundary Placement Rebellion comes up a lot in my experience. It's a key issue in AI safety, as well as in civil rights, psychology research, hierarchies, families and other areas of society. For better and for worse, I think Boundary Placement Rebellion involves far more symmetry between the sides than "narcissism" does.
An Example: AI Safety vs Capabilities
Common sense - at least among many right-wing techies - is that you have the right to work on whatever software projects you want. If people don't like your software, they can just not buy it. There may be exceptions when it comes to software whose purpose is to be used as a weapon, such as ransomware. If new issues are discovered, then maybe we can update this norm, but we should also be careful to not strangle the tech industry with paperwork.
This is a boundary: you get sole decisive control over what you work on. AI capabilities researchers (like everyone else in tech) make a lot of use of this boundary, as it means they can keep trying new things to push the state of the art. And, of course, earn lots of $$$ doing it.
Now a bunch of people are suddenly coming in, arguing that AI will lead to the end of the world! The boundary for AI capabilities researchers is being directly threatened. And the safety people can't just be dismissed by saying "don't worry, I'm not gonna destroy the world"; instead they will dump huge arguments for why it will inevitably happen. Or maybe some of them will say "you may be right, but we cannot know for sure, so we still gotta stop you".
The safety people's solution is that those who want to develop AI capabilities should either stop working on capabilities, or pay the AI safety people engineer-level salaries to do philosophy, abstract mathematics, odd unprofitable ML experiments, premature safety tests, and similar.
If the AI capabilities researchers try to come up with different frames to make the AI safety people go away, the AI safety people will just keep pushing back against those frames, constantly coming up with reasons why we're all gonna die anyway, or whatever. And if you don't give them what they want, they're going to complain about you, sometimes even accusing you of being the most damaging person in the history of the world.
Broadening and Abstracting
The above shouldn't be seen as an argument against AI safet...