
A recent short story by Gabriel Mukobi illustrates a near-term scenario where things go bad because new developments in LLMs allow LLMs to accelerate capabilities research without a correspondingly large acceleration in safety research.
This scenario is disturbingly close to the situation we already find ourselves in. Asking the best LLMs for help with programming feels very different from asking them for help with technical alignment research (at least to me). LLMs might generate junk code, but if you keep pointing out the problems, the code will eventually work. This can be faster than doing it myself in cases where I don't know a language or library well; the LLMs are moderately familiar with everything.
When I try to talk to LLMs about technical AI safety work, however, I just get garbage.
I think a useful safety precaution for frontier AI models would be to make them more useful for [...]
---
Outline:
(01:19) What is wrong with current models?
(04:08) Is there any hope for better?
(06:12) Against Autonomy
(06:32) Servant vs Collaborator
(09:00) Notions of Alignment
(11:11) The Need for an Independent Safety Approach
(12:29) Engineer-Researcher Collaboration
(13:58) Aside: Hallucinations
(15:58) Updating (on relatively small amounts of text)
(18:52) Feedback Tools
The original text contained 8 footnotes which were omitted from this narration.
---
Narrated by TYPE III AUDIO.