---
client: lesswrong
project_id: curated
feed_id: ai_safety ai_safety__governance
narrator: pw
qa: mds
qa_time: 0h30m
---
In addition to technical challenges, plans to safely develop AI face lots of organizational challenges. If you're running an AI lab, you need a concrete plan for handling them.
In this post, I'll explore some of those issues, using one particular AI plan as an example. I first heard it described by Buck at EA Global London, and more recently saw it laid out in OpenAI's alignment plan. (I think Anthropic's plan has a fairly different ontology, although it still ultimately routes through a similar set of difficulties.)
I'd call the cluster of plans similar to this "Carefully Bootstrapped Alignment."
Original article:
https://www.lesswrong.com/posts/thkAtqoQwN6DtaiGT/carefully-bootstrapped-alignment-is-organizationally-hard
Narrated for LessWrong by TYPE III AUDIO.
Share feedback on this narration.