TYPE III AUDIO (All episodes)

""Carefully Bootstrapped Alignment" is organizationally hard" by Raemon



---
client: lesswrong
project_id: curated
feed_id: ai_safety ai_safety__governance
narrator: pw
qa: mds
qa_time: 0h30m
---
In addition to technical challenges, plans to safely develop AI face many organizational challenges. If you're running an AI lab, you need a concrete plan for handling them.

In this post, I'll explore some of those issues, using one particular AI plan as an example. I first heard this plan described by Buck at EA Global London, and it appears more recently in OpenAI's alignment plan. (I think Anthropic's plan has a fairly different ontology, although it still ultimately routes through a similar set of difficulties.)

I'd call the cluster of plans similar to this "Carefully Bootstrapped Alignment."

Original article:
https://www.lesswrong.com/posts/thkAtqoQwN6DtaiGT/carefully-bootstrapped-alignment-is-organizationally-hard

Narrated for LessWrong by TYPE III AUDIO.

Share feedback on this narration.
