

Audio note: this article contains 182 uses of LaTeX notation, so the narration may be difficult to follow. There's a link to the original text in the episode description.
I (Subhash) am a Master's student in the Tegmark AI Safety Lab at MIT. I am interested in recruiting for full-time roles this Spring - please reach out if you're interested in working together!
TLDR
This blog post accompanies the paper "Language Models Use Trigonometry to Do Addition." Key findings:
---
Outline:
(00:30) TLDR
(01:28) Motivation and Problem Setting
(02:18) LLMs Represent Numbers on a Helix
(02:23) Investigating the Structure of Numbers
(02:48) Periodicity
(03:49) Linearity
(04:39) Parameterizing Numbers as a Helix
(05:39) Fitting a Helix
(07:12) Evaluating the Helical Fit
(08:57) Relation to the Linear Representation Hypothesis
(10:28) Is the helix the full story?
(12:15) LLMs Use the Clock Algorithm to Compute Addition
(14:23) Understanding MLPs
(16:24) Zooming in on Neurons
(17:07) Modeling Neuron Preactivations
(18:53) Understanding MLP Inputs
(20:34) Interpreting Model Errors
(24:11) Limitations
(25:49) Conclusion
The original text contained 9 footnotes which were omitted from this narration.
---
First published:
Source:
Narrated by TYPE III AUDIO.
---
Images from the article:
Apple Podcasts and Spotify do not show images in the episode description. Try Pocket Casts, or another podcast app.
By LessWrong