
In the future, we might accidentally create AIs with ambitious goals that are misaligned with ours. But just because we don’t have the same goals doesn’t mean we need to be in conflict. We could also cooperate with each other and pursue mutually beneficial deals.
For previous discussion of this, you can read Making deals with early schemers. I recommend starting there before you read this post.
To facilitate this kind of cooperation, I would like us to be able to credibly negotiate with AIs, and credibly make promises about how we will help them if they help us. But it may be difficult to achieve high credibility in negotiations with AIs. AIs are in a very vulnerable epistemic position, because humans have a lot of control over AI memories and sensory inputs. Even worse: we have already developed a habit of testing their behavior on all kinds of [...]
---
Outline:
(04:13) What do we want from an honesty policy?
(06:35) Falsehoods that don't necessarily ruin credibility
(09:19) More complicated: Behavioral science
(12:22) The proposed policies
(12:37) No deception about deals
(19:20) Honesty string
(21:37) Compensation
(25:23) Communicating commitments
(26:52) Miscellaneous Q&A
(30:02) Conclusion
(30:55) Appendix: How does no deception about deals apply to past experiments
The original text contained 22 footnotes which were omitted from this narration.
---
---
Narrated by TYPE III AUDIO.
---
By LessWrong
