Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Correctly Calibrated Trust, published by habryka on June 24, 2023 on LessWrong.
Chana from the CEA Community Health team posted this to the EA Forum, where it sadly seems to have not gotten a lot of traction. I actually think it's quite an important post, so I am signal-boosting it here. On the surface level it talks a lot about EA, but a lot of it also straightforwardly applies to the AI Alignment and Rationality communities, and as such is also of relevance to a lot of readers on LessWrong.
Below I shamelessly copied over the whole post content (except the footnotes, since they were hard to copy-paste):
This post comes from finding out that Asya Bergal was having thoughts about this and was maybe going to write a post, thoughts I was having along similar lines, and a decision to combine energy and use the strategy fortnight as an excuse to get something out the door. A lot of this is written out of notes I took from a call with her, so she gets credit for a lot of the concrete examples and the impetus for writing a post shaped like this.
Interested in whether this resonates with people's experience!
Short version:
[Just read the bold to get a really short version]
There’s a lot of “social sense of trust” in EA, in my experience. There’s a feeling that people, organizations and projects are broadly good and reasonable (often true!) that’s based on a combination of general vibes, EA branding and a few other specific signals of approval, as well as an absence of negative signals. I think that it’s likely common to overweight those signals of approval and the absence of disapproval.
Especially post-FTX, I’d like us to be well calibrated on what the vague intuition we download from the social web is telling us, and place trust wisely.
[“Trust” here is a fuzzy and under-defined thing that I’m not going to nail down - I mean here something like a general sense that things are fine and going well]
Things like getting funding, being highly upvoted on the forum, being on podcasts, being high status and being EA-branded are fuzzy and often poor proxies for trustworthiness and for relevant people’s views on the people, projects and organizations in question[1].
Negative opinions (anywhere from “that person not so great” to “that organization potentially quite sketch, but I don't have any details”) are not necessarily that likely to find their way to any given person for a bunch of reasons, and we don’t have great solutions to collecting and acting on character evidence that doesn't come along with specific bad actions. It’s easy to overestimate what you would know if there’s a bad thing to know.
If it’s decision relevant or otherwise important to know how much to trust a person or organization, I think it’s a mistake to rely heavily on the above indicators, or on the “general feeling” in EA. Instead, get data if you can, and ask relevant people their actual thoughts - you might find them surprisingly out of step with what the vibe would indicate.
I’m pretty unsure what we can or should do as a community about this, but I have a few thoughts at the bottom, and having a post about it as something to point to might help.
Longer version:
I think you'll get plenty out of this if you read the headings and read more under each heading if something piques your curiosity.
Part 1: What fuzzy proxies are people using and why would they be systematically overweighted?
(I don’t know how common these mistakes are, or that they apply to you, the specific reader of the post. I expect them to bite harder if you’re newer or less connected, but I also expect that it’s easy to be somewhat biased in the same directions even if you have a lot of context. I’m hoping this serves as contextualization for the former and a reminder / nudge for the latter.)
Getting funding from OP and LTFF
Seems easy to e...