Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Prizes for the 2021 Review, published by Raemon on February 10, 2023 on LessWrong.
If you received a prize, please provide your contact email and PayPal details for payment.
A'ight, one final 2021 Review Roundup post – awarding prizes. I had a week to look over the results. The primary way I ranked posts was by a weighted score, which gave 1000+ karma users 3x the voting weight. Here was the distribution of votes:
I basically see two strong outlier posts at the top of the ranking, followed by a cluster of 6-7 posts, followed by a smooth tail of posts that were pretty good without any clear cutoff.
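As a rough illustration of the weighting scheme described above, here is a minimal sketch in Python. The data shapes, vote values, and function name are illustrative assumptions, not LessWrong's actual implementation; the only detail taken from the post is the 3x multiplier for votes from 1000+ karma users.

```python
# Minimal sketch of the weighted ranking described above.
# The vote values and data structures are illustrative assumptions;
# only the 3x weight for 1000+ karma users comes from the post.

def weighted_score(votes, voter_karma):
    """Sum vote values, tripling the weight of high-karma voters.

    votes: list of (voter_id, vote_value) pairs for one post
    voter_karma: dict mapping voter_id -> karma
    """
    total = 0
    for voter_id, value in votes:
        weight = 3 if voter_karma.get(voter_id, 0) >= 1000 else 1
        total += weight * value
    return total

# Example: two high-karma voters and one low-karma voter
karma = {"alice": 2500, "bob": 1200, "carol": 300}
votes = [("alice", 4), ("bob", 1), ("carol", 4)]
print(weighted_score(votes, karma))  # (3*4) + (3*1) + (1*4) = 19
```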
Post Prizes
Gold Prize Posts
Two posts stood out noticeably above all the others; I'm awarding each of them $800.
Strong Evidence is Common by Mark Xu
“PR” is corrosive; “reputation” is not, by Anna Salamon.
I also particularly liked Akash's review.
Silver Prize Posts
And the second (eyeballed) cluster of posts, each getting $600, is:
Your Cheerful Price, by Eliezer Yudkowsky.
This notably had the most reviews – a lot of people wanted to weigh in and say "this personally helped me", often with some notes or nuance.
ARC's first technical report: Eliciting Latent Knowledge by Paul Christiano, Ajeya Cotra and Mark Xu.
This Can't Go On by Holden Karnofsky
Rationalism before the Sequences, by Eric S. Raymond.
I liked this review by A Ray, who noted that one source of value here is the extensive bibliography.
Lies, Damn Lies, and Fabricated Options, by Duncan Sabien
Fun with +12 OOMs of Compute, by Daniel Kokotajlo.
Nostalgebraist's review was particularly interesting.
What 2026 looks like by Daniel Kokotajlo
Ngo and Yudkowsky on alignment difficulty. This didn't naturally cluster into the same group of vote totals as the other silver-prize posts, but it was in the top 10. I think the post was fairly hard to read and didn't have easily digestible takeaways, but I nonetheless think it kicked off some of the most important conversations in the AI Alignment space and warrants inclusion in this tier.
Bronze Prize Posts
Although there's no clear clustering after this point, when I eyeball how important the next several posts were, it seems appropriate to give $400 to each of:
How To Write Quickly While Maintaining Epistemic Rigor, by John Wentworth
Science in a High-Dimensional World by John Wentworth
How factories were made safe by Jason Crawford
Cryonics signup guide #1: Overview by Mingyuan
Making Vaccine by John Wentworth
Taboo "Outside View" by Daniel Kokotajlo
All Possible Views About Humanity's Future Are Wild by Holden Karnofsky
Another (outer) alignment failure story by Paul Christiano
Split and Commit by Duncan Sabien
What Multipolar Failure Looks Like, and Robust Agent-Agnostic Processes (RAAPs) by Andrew Critch
There’s no such thing as a tree (phylogenetically), by eukaryote
The Plan by John Wentworth
Trapped Priors As A Basic Problem Of Rationality by Scott Alexander
Finite Factored Sets by Scott Garrabrant
Selection Theorems: A Program For Understanding Agents by John Wentworth
Slack Has Positive Externalities For Groups by John Wentworth
My research methodology by Paul Christiano
Honorable Mentions
This final group has the most arbitrary cutoff of all. It includes some judgment calls about how many medium or strong votes each post received from 1000+ karma users, and in some edge cases my own subjective guess of how important a post was.
These authors each get $100 per post.
The Rationalists of the 1950s (and before) also called themselves “Rationalists” by Owain Evans
Ruling Out Everything Else by Duncan Sabien
Leaky Delegation: You are not a Commodity by Darmani
Feature Selection by Zack M. Davis
Cup-Stacking Skills (or, Reflexive Involuntary Mental Motions) by Duncan Sabien
larger language models may disappoint you [or, an eternally unfinished draft] by Nostalgebraist
Self-Integrity and the Drowning Child by Eliezer Yudkowsky