
This is my attempt to write down what I would be researching, if I were working directly with LLMs rather than doing Agent Foundations. (I'm open to collaboration on these ideas.)
Machine Learning research can occupy different points on a spectrum between science and engineering: science-like research seeks to understand phenomena deeply, explain what's happening, provide models which predict results, etc. Engineering-like research focuses more on getting things to work, achieving impressive results, optimizing performance, etc. I think the scientific style is very important. However, the research threads here are more engineering-flavored: I'd like to see systems which get these ideas to work, because I think they'd be marginally safer, saving a few more worlds along the alignment difficulty spectrum. I think the forefront of AI capabilities research is currently quite focused on RL, which is an inherently more dangerous technology; part of what I hope to illustrate here is that there is low-hanging capability fruit in other directions.
When you ask, what answers?
Base models are the best, most advanced statistical models humans have ever created. However, we don't use them that way. Instead, we use them as weight initializations for training chatbots. The statistical integrity is compromised [...]
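To make the "statistical model" framing concrete, here is a minimal sketch (mine, not from the post) of using a base model as a probability distribution over strings, scoring text rather than sampling chat replies from it. The model choice ("gpt2") and the `log_likelihood` helper are illustrative assumptions:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Any base (non-chat-tuned) causal LM works here; "gpt2" is just a small example.
model_name = "gpt2"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()

def log_likelihood(text: str) -> float:
    """Total log-probability the base model assigns to `text`."""
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        # With labels == input_ids, the model returns the mean next-token
        # cross-entropy over the (seq_len - 1) predicted tokens.
        loss = model(ids, labels=ids).loss
    return -loss.item() * (ids.shape[1] - 1)

# Treating the model as a statistical model means comparing hypotheses
# by the probability it assigns them:
print(log_likelihood("The capital of France is Paris."))
print(log_likelihood("The capital of France is Berlin."))
```

The point of the sketch: nothing about a base model forces the chatbot usage; it already answers ordinary statistical queries like "how probable is this string?" directly.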
---
Outline:
(01:14) When you ask, what answers?
(04:55) Partially Labeled Data
(07:32) Invertibility
(11:57) Conditioning
(14:41) Transitivity
(16:00) Entropy Preservation
(21:59) Self-Knowledge
(23:08) Paraphrase Invariance
(28:53) What about chat?
(29:31) What about safety?
The original text contained 8 footnotes which were omitted from this narration.
---
First published:
Source:
---
Narrated by TYPE III AUDIO.
---
Images from the article:
Apple Podcasts and Spotify do not show images in the episode description. Try Pocket Casts, or another podcast app.
