6 months ago I wrote Feedbackloop-first Rationality. I didn't follow up on it for a while (except for the sporadic Deliberate (“Purposeful?”) Practice Club).
I just spent 6 weeks actually exploring "how would I build my own cognition training program?". In the process, I've iterated a bunch. I'm still in an orienting phase, but it seemed worth writing down the current state of my thinking.
What's my goal?
A rough overview:
- I want to get more, higher-quality "X-risk thinker hours."
- This includes AI alignment technical research, AI macrostrategy research, policy, and governance, as well as people (such as the Lightcone team) deciding which infrastructure to build.
- I'm particularly interested in getting more "serial research", as opposed to more "parallel research." We can throw more researchers at a problem, but if there are some problems that require one person to synthesize 10+ years of experience, all the parallel [...]
---
Outline:
- What's my goal?
- Rationality for the sake of existential risk
- The Story So Far
- Feedback-loops and deliberate practice, vs Just Clicking
- What About CFAR? Didn't they teach just click skills?
- Hamming-nature, 10x plans, OODA Loops
- Planning vs OODA Loops
- My Process: Test Driven Development
- Alternate Strategies and/or Theories of Change
- #1: Help senior researchers with specific targeted problems.
- #2: Build a Thinking Assistant Pipeline
- #3: Learning Generalized Research Taste
- #4: Filtering/enculturation for Overall Community Epistemic Health
- #5: Investigating s factor?
- It'd be cool if a second group also worked towards rationality skill assessment.
- What Have I Actually Done?
- What's Next?
---