


I'm behind on a couple of posts I've been planning, but am trying to post something every day if possible. So today I'll post a cached fun piece on overinterpreting a random data phenomenon that's tricked me before.
Recall that a random walk or a "drunkard's walk" (as in the title) is a sequence of vectors \(x_1, x_2, \ldots\) in some \(\mathbb{R}^n\) such that each \(x_k\) is obtained from \(x_{k-1}\) by adding noise. Here is a picture of a 1D random walk as a function of time:
[Image: a 1D random walk plotted over time. Caption: "Weirdly satisfying."]

A random walk is the "null hypothesis" for any data with memory. If you are looking at some learning process that updates state to state with some degree of stochasticity, seeing a random walk means that your update steps are random and you're not in fact learning. If you graph some collection of activations from layer to layer of a transformer [...]
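The random walk described above, where each point is the previous point plus independent noise, can be sketched in a few lines. This is a minimal illustration, not code from the original post; the function name and Gaussian step distribution are my assumptions.

```python
import numpy as np

rng = np.random.default_rng(seed=0)

def random_walk(n_steps: int, dim: int = 1, step_scale: float = 1.0) -> np.ndarray:
    """Generate a random walk x_1, x_2, ... in R^dim where
    x_k = x_{k-1} + Gaussian noise (hypothetical helper, not from the post).

    Returns an array of shape (n_steps, dim); row k is x_k.
    """
    steps = rng.normal(scale=step_scale, size=(n_steps, dim))
    # Cumulative sum of i.i.d. steps gives the walk's positions.
    return np.cumsum(steps, axis=0)

# A 1D walk as a function of time, like the figure in the post.
walk = random_walk(1000)
print(walk.shape)  # (1000, 1)
```

Plotting `walk` against the step index reproduces the kind of wandering trace shown in the figure; because each increment is independent noise, the trace has memory but no trend.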
The original text contained 4 footnotes which were omitted from this narration.
The original text contained 1 image which was described by AI.
---
First published:
Source:
Narrated by TYPE III AUDIO.
---
Images from the article:
Apple Podcasts and Spotify do not show images in the episode description. Try Pocket Casts, or another podcast app.
By LessWrong
