LessWrong (Curated & Popular)

"Acausal normalcy" by Andrew Critch



https://www.lesswrong.com/posts/3RSq3bfnzuL3sp46J/acausal-normalcy

Crossposted from the AI Alignment Forum. May contain more technical jargon than usual.

This post is also available on the EA Forum.

Summary: Having thought a bunch about acausal trade — and proven some theorems relevant to its feasibility — I believe there do not exist powerful information hazards about it that stand up to clear and circumspect reasoning about the topic.  I say this to be comforting rather than dismissive; if it sounds dismissive, I apologize.  

With that said, I have four aims in writing this post:

  1. Dispelling myths.  There are some ill-conceived myths about acausal trade that I aim to dispel with this post.  Alternatively, I will argue for something I'll call acausal normalcy as a more dominant decision-relevant consideration than one-on-one acausal trades.  
  2. Highlighting normalcy.  I'll provide some arguments that acausal normalcy is more similar to human normalcy than any particular acausal trade is to human trade, such that the topic of acausal normalcy is — conveniently — also less culturally destabilizing than (erroneous) preoccupations with 1:1 acausal trades. 
  3. Affirming AI safety as a straightforward priority.  I'll argue that for most real-world-prevalent perspectives on AI alignment, safety, and existential safety, acausal considerations are not particularly dominant, except insofar as they push a bit further towards certain broadly agreeable human values applicable in the normal-everyday-human-world, such as nonviolence, cooperation, diversity, honesty, integrity, charity, and mercy.  In particular, I do not think acausal normalcy provides a solution to existential safety, nor does it undermine the importance of existential safety in some surprising way.
  4. Affirming normal human kindness.  I also think reflecting on acausal normalcy can lead to increased appreciation for normal notions of human kindness, which could lead us all to treat each other a bit better.  This is something I wholeheartedly endorse.
LessWrong (Curated & Popular) by LessWrong

4.8 (12 ratings)


More shows like LessWrong (Curated & Popular)

  • Macro Voices by Hedge Fund Manager Erik Townsend (3,071 listeners)
  • Odd Lots by Bloomberg (1,930 listeners)
  • EconTalk by Russ Roberts (4,265 listeners)
  • Conversations with Tyler by Mercatus Center at George Mason University (2,452 listeners)
  • Philosophy Bites by Edmonds and Warburton (1,547 listeners)
  • ChinaTalk by Jordan Schneider (288 listeners)
  • ManifoldOne by Steve Hsu (95 listeners)
  • Machine Learning Street Talk (MLST) by Machine Learning Street Talk (MLST) (96 listeners)
  • Dwarkesh Podcast by Dwarkesh Patel (525 listeners)
  • Clearer Thinking with Spencer Greenberg by Spencer Greenberg (138 listeners)
  • Razib Khan's Unsupervised Learning by Razib Khan (209 listeners)
  • "Econ 102" with Noah Smith and Erik Torenberg by Turpentine (151 listeners)
  • Money Stuff: The Podcast by Bloomberg (393 listeners)
  • Complex Systems with Patrick McKenzie (patio11) by Patrick McKenzie (134 listeners)
  • The Marginal Revolution Podcast by Mercatus Center at George Mason University (96 listeners)