


ML models can perform a range of tasks and subtasks, some of which are more closely related to one another than others. In this post, we set out two very initial starting points. First, we motivate reverse engineering models’ task decompositions; we think this can be helpful for interpretability and for understanding generalization. Second, we provide an initial (potentially non-exhaustive) list of techniques that could be used to quantify the ‘distance’ between two tasks or inputs. We hope these distances might help us identify the task decomposition of a particular model. We close by briefly considering analogues in humans and by suggesting a toy model.
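As a rough illustration of the kind of activation-based distance the post has in mind, one could compare mean activation vectors at some layer for inputs drawn from two tasks. This is a hypothetical sketch, not code from the post; the function name and the cosine-distance choice are our own assumptions:

```python
import numpy as np

def task_distance(acts_a, acts_b):
    """Cosine distance between the mean activation vectors of two tasks.

    acts_a, acts_b: arrays of shape (n_inputs, hidden_dim), e.g. activations
    at a chosen layer for inputs from task A and task B respectively.
    """
    mu_a = acts_a.mean(axis=0)
    mu_b = acts_b.mean(axis=0)
    cos = np.dot(mu_a, mu_b) / (np.linalg.norm(mu_a) * np.linalg.norm(mu_b))
    return 1.0 - cos

# Synthetic demo: tasks A and B share an activation direction; C does not.
rng = np.random.default_rng(0)
base = rng.normal(size=64)
acts_a = base + 0.1 * rng.normal(size=(100, 64))  # task A
acts_b = base + 0.1 * rng.normal(size=(100, 64))  # task B, similar to A
acts_c = rng.normal(size=(100, 64))               # task C, unrelated

print(task_distance(acts_a, acts_b) < task_distance(acts_a, acts_c))  # True
```

Distances like this could then feed into a clustering step to recover a candidate task decomposition, per the "Absolute vs relative metrics vs clusterings" distinction in the outline.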
Epistemic status: We didn’t spend much time writing this post. Please let us know in the comments if you have other ideas for measuring task distance or if we are replicating work.
It might be useful to think about [...]
---
Outline:
(02:03) Why understanding task structure could be useful
(02:08) Interpretability
(03:05) Learning the abstractions
(03:47) Unlearning capabilities
(04:46) Quantifying generalization
(05:58) Learning how the world works
(06:22) Some Subtleties
(06:26) What is a task?
(07:59) Task decomposition in the dataset vs a particular system's task decomposition
(08:47) Absolute vs relative metrics vs clusterings
(09:48) Methods for gauging task structure in ML
(09:57) Inspecting activations
(13:17) Inspecting learning
(15:56) Inspecting weights
(17:52) Analogues in humans
(18:57) A toy model for testing task decomposition techniques
(21:50) Acknowledgements
The original text contained 17 footnotes which were omitted from this narration.
---
First published:
Source:
Narrated by TYPE III AUDIO.
By LessWrong
