Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Slim overview of work one could do to make AI go better (and a grab-bag of other career considerations), published by Chi on March 22, 2024 on The Effective Altruism Forum.
Many kinds of work one could do to make AI go better and a grab-bag of other career considerations
I recently found myself confused about what I'd like to work on. So, I made an overview of the possible options for what to work on to make AI go well. I thought I'd share it in case it's helpful for other people. Since I made this overview for my own career deliberations, it is tailored to me and not necessarily complete. That said, I tried to be roughly comprehensive, so feel free to point out options I'm missing.
I redacted some things but didn't edit the doc in other ways to make it more comprehensible to others. In case you're interested, I explain a lot of the areas in the "Humans in control" and the "Misalignment" worlds here and to some extent here.
What areas could one work on? What endpoints or intermediary points could one aim for?
Note that I redacted a bunch of names in "Who's working on this" just because I didn't want to bother asking them and I wasn't sure they had publicly talked about it yet, not because of anything else.
"?" behind a name or org means I don't know if they actually work on the thing (but you could probably find out with a quick google!)
For each area below, I list the world it helps, the area itself (note that this doesn't say anything about the type of work at the moment: for example, I probably should never do MechInterp myself because of personal fit, but I could still think it's good to do something that overall supports MechInterp), the biggest uncertainty, and who's working on it.
World it helps: Humans in control

Area: ASI governance | human-control
- Who is in control of AI, what's the governance structure, etc.
- Digital sentience
- [...]
Biggest uncertainty: Is this tractable, and is success path-dependent?
Who's working on this: Will MacAskill, [redacted]?, indirectly: cybersec. folk?, some AI governance work?

Area: Acausal interactions | human-control
- Metacognition
- Decision theory
- Values of future civilisation
- SPIs
Who's working on this: [redacted]

Area: SPIs for causal interactions | human-control
Who's working on this: CLR
World it helps: Misalignment

Area: Prevent sign flip and other near misses
Biggest uncertainty: Is this a real concern?
Who's working on this: Nobody?

Area: Acausal interactions | misalignment
- Decision theory
- Value porosity
Biggest uncertainty: Is this tractable?
Who's working on this: [redacted]? [redacted]?

Area: Reducing conflict-conducive preferences for causal interactions & SPIs | misalignment
Who's working on this: CLR
World it helps: Mainstream AI safety (best thing to work on)

Area: Reduction of malevolence in positions of influence through improving awareness (also goes into the "Humans in control" category)
Who's working on this: [redacted]? Nobody?

Area: Differentially support responsible AI labs
Biggest uncertainty (for some of these): Would success be net good or net bad? If good, how good? How high is the penalty for being less neglected?

Area: Influence AI timelines
Who's working on this: [redacted], [redacted], [redacted]?, maybe misc. policy people?

Area: AI control (and ideas like paying AIs)
Who's working on this: Redwood Research

Area: Model capabilities evaluations
Who's working on this: METR, Apollo?, maybe AI labs' policy teams, maybe misc. other policy people?

Area: Alignment (more comprehensive overview)
- MechInterp
- ELK
- (L)AT
- Debate
- CoT oversight
- Infrabayesianism
- Natural abstractions
- Understanding intelligence
- [...]
Who's working on this: overview post on LessWrong

Area: Human epistemics during early AI
Who's working on this: ~Forecasting crowd, nobody?

Area: Growing the AI safety and EA community, improving its branding, or upskilling people in the community (e.g. fellowships)
Who's working on this: Constellation, local groups, CEA, Open Philanthropy, …

Area: Improving the AI safety and EA community and culture socially
Who's working on this: CEA

Area: Threat modelling, scenario forecasting, etc.
Who's working on this: [redacted], …

Area: Make it harder to steal models
Who's working on this: Cybersecurity folk

Area: Regulate open-source capabilities
Who's working on this: Policy folk? Nobody?
What types of work are there?
For each entry below, I list which world it helps, the type of work, and the broad category of work.

Which world: Can be in any of the three areas above
Type of work: Offering 1-1 support (mental, operational, and debugging)
Proj...