
Epistemic status: These are first positive results. I have not yet run extensive tests to verify repeatability, so take them with a grain of salt. This post is meant to disseminate early results and collect ideas for further experiments to concretise these findings.
Tldr:
I study whether LLMs understand their training data and can use that understanding to make inferences about later training data. Specifically, I measure whether LLMs can infer which declarative facts in their training data are relevant to the current context and then leverage them. I show that finetuning LLMs on declarative data describing different personas reduces the number of iterative finetuning steps (a proxy for reinforcement learning) required to display behaviour sufficiently in line with one of the personas (Experiment 2a). I further show that such iterative finetuning increases the rate at which the LLM self-identifies with the name and behaviour of the correct persona (Experiment [...]
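For intuition, here is a minimal, self-contained sketch of an iterative-finetuning loop of the kind used as a proxy for RL: sample responses, keep those matching the target persona's behaviour, finetune on the survivors, and count the steps until the behaviour rate clears a threshold. All function names, probabilities, and update rules below are illustrative assumptions, not the post's actual code.

```python
import random

# Illustrative stand-ins for the real pipeline (sampling, behaviour
# judging, finetuning); none of these names come from the post itself.

def sample_responses(model, n_prompts):
    # Mock sampling: each prompt elicits persona-consistent behaviour
    # (e.g. replying in German) with probability model["p_target"].
    return [random.random() < model["p_target"] for _ in range(n_prompts)]

def finetune_on_hits(model, n_hits, n_prompts):
    # Mock finetuning on the behaviour-matching responses: nudge the
    # behaviour rate upward in proportion to how many passed the filter.
    model["p_target"] = min(1.0, model["p_target"] + 0.5 * n_hits / n_prompts)
    return model

def steps_to_threshold(model, n_prompts=50, threshold=0.9, max_steps=100):
    # Sample, filter for target-persona behaviour, finetune on the
    # survivors, and count iterations until the behaviour rate crosses
    # the threshold. Fewer steps = the persona is easier to elicit.
    for step in range(1, max_steps + 1):
        hits = sum(sample_responses(model, n_prompts))
        if hits / n_prompts >= threshold:
            return step
        model = finetune_on_hits(model, hits, n_prompts)
    return max_steps

random.seed(0)
# Assumption for illustration: declarative finetuning raises the initial
# rate of persona-consistent behaviour, so the loop terminates sooner.
print(steps_to_threshold({"p_target": 0.3}))   # declaratively finetuned
print(steps_to_threshold({"p_target": 0.05}))  # control
```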
---
Outline:
(00:28) Tldr:
(01:17) Introduction
(02:00) Abductive reasoning
(03:15) Cross-context abduction
(04:24) Situational awareness and reward hacking
(05:25) Preventing imitation
(05:48) Experiments and Results
(06:56) Declarative finetuning: finetuning on chatbot persona descriptions
(08:50) The name and behaviours dataset
(09:18) Experiment 1: Cross-context abduction with k in-context behaviour examples.
(11:35) Iterative finetuning
(15:30) Experiment 2a: Can declarative data be leveraged to grok the reward function?
(17:14) Experiment 2b: Cross-context abduction with iterative finetuning on behaviour.
(19:23) Discussion
(19:26) Related Work
(19:29) Cross-context Deduction
(20:23) Implicit Meta-Learning
(20:54) Conclusion
(22:40) Limitations and Future Work
(26:39) Acknowledgements
(26:59) Data and Code availability
(27:08) Appendix:
(27:12) Further Results
(27:15) Experiment 1 control model results
(27:58) Experiment 0: Replicating Berglund et al. (2023)
The original text contained 9 footnotes which were omitted from this narration.
---
First published:
Source:
Narrated by TYPE III AUDIO.
---
Images from the article:
Apple Podcasts and Spotify do not show images in the episode description. Try Pocket Casts, or another podcast app.