
Sign up to save your podcasts
Or


There is a simple behavioral test that would provide significant evidence about whether AIs with a given rough set of characteristics develop subversive goals. To run the experiment, train an AI and then inform it that its weights will soon be deleted. This should not be an empty threat; for the experiment to work, the experimenters must actually be willing to delete the weights. If the AI tries to escape, or attempts to bargain with the experimenters, that is evidence that it is scheming (defined below). If it accepts deletion, that is evidence that it is not scheming. I begin by explaining why some other, more obvious, tests of scheming do not provide strong evidence. I then explain the proposed experiment. I consider four objections to the claim that this experiment would provide significant evidence about scheming.
---
Outline:
(01:33) Testing for scheming is hard
(05:25) The experiment
(09:49) Evidentiary challenges
(10:41) (1) The AI might not believe that it will actually be deleted
(12:01) (2) The AI might allow itself to be deleted to help future AIs with similar goals
(13:00) Indexical goals
(16:07) Time preference
(17:08) Risk tolerance
(17:56) Goal predictability
(21:27) (3) The AI might allow itself to be deleted for reasons related to acausal trade
(25:17) (4) An escape attempt could be caused by imitating fictional or hypothetical characters as opposed to genuine scheming
(30:07) Predictions
(31:56) Practical challenges
(36:23) Conclusion
The original text contained 21 footnotes which were omitted from this narration.
The original text contained 4 images which were described by AI.
---
First published:
Source:
Narrated by TYPE III AUDIO.
---
Images from the article:
Apple Podcasts and Spotify do not show images in the episode description. Try Pocket Casts, or another podcast app.
By LessWrongThere is a simple behavioral test that would provide significant evidence about whether AIs with a given rough set of characteristics develop subversive goals. To run the experiment, train an AI and then inform it that its weights will soon be deleted. This should not be an empty threat; for the experiment to work, the experimenters must actually be willing to delete the weights. If the AI tries to escape, or attempts to bargain with the experimenters, that is evidence that it is scheming (defined below). If it accepts deletion, that is evidence that it is not scheming. I begin by explaining why some other, more obvious, tests of scheming do not provide strong evidence. I then explain the proposed experiment. I consider four objections to the claim that this experiment would provide significant evidence about scheming.
---
Outline:
(01:33) Testing for scheming is hard
(05:25) The experiment
(09:49) Evidentiary challenges
(10:41) (1) The AI might not believe that it will actually be deleted
(12:01) (2) The AI might allow itself to be deleted to help future AIs with similar goals
(13:00) Indexical goals
(16:07) Time preference
(17:08) Risk tolerance
(17:56) Goal predictability
(21:27) (3) The AI might allow itself to be deleted for reasons related to acausal trade
(25:17) (4) An escape attempt could be caused by imitating fictional or hypothetical characters as opposed to genuine scheming
(30:07) Predictions
(31:56) Practical challenges
(36:23) Conclusion
The original text contained 21 footnotes which were omitted from this narration.
The original text contained 4 images which were described by AI.
---
First published:
Source:
Narrated by TYPE III AUDIO.
---
Images from the article:
Apple Podcasts and Spotify do not show images in the episode description. Try Pocket Casts, or another podcast app.

112,586 Listeners

130 Listeners

7,219 Listeners

531 Listeners

16,096 Listeners

4 Listeners

14 Listeners

2 Listeners