Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: [Linkpost] Remarks on the Convergence in Distribution of Random Neural Networks to Gaussian Processes in the Infinite Width Limit, published by Spencer Becker-Kahn on November 30, 2023 on The AI Alignment Forum.
The linked note is something I "noticed" while going through different versions of this result in the literature. I think that this sort of mathematical work on neural networks is worthwhile and worth doing to a high standard, but I have no reason to think that this particular work is of much consequence beyond filling in a gap in the literature. It's the kind of nonsense that someone who has done too much measure theory would think about.
Abstract. We describe a direct proof of yet another version of the result that a sequence of fully-connected neural networks converges to a Gaussian process in the infinite-width limit. The convergence in distribution that we establish is the weak convergence of probability measures on the non-separable, non-metrizable product space $(\mathbb{R}^{d'})^{\mathbb{R}^d}$, i.e. the space of functions from $\mathbb{R}^d$ to $\mathbb{R}^{d'}$ with the topology whose convergent sequences correspond to pointwise convergence. The result itself is already implied by a stronger such theorem due to Boris Hanin, but the direct proof of our weaker result can afford to replace the more technical parts of Hanin's proof that are needed to establish tightness with a shorter and more abstract measure-theoretic argument.
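As an illustration (not from the linked note), the finite-dimensional side of this convergence is easy to check numerically. The sketch below samples many independent one-hidden-layer ReLU networks with i.i.d. standard-normal weights and a $1/\sqrt{n}$ output scaling, then compares the empirical covariance of the outputs at two fixed inputs with the closed-form arc-cosine kernel of the limiting Gaussian process. The architecture, activation, and scaling conventions here are the usual NNGP ones and are my assumptions, not necessarily those of the note.

```python
import numpy as np

# Minimal sketch (assumptions: one hidden layer, ReLU, i.i.d. N(0,1) weights,
# 1/sqrt(n) output scaling). Compares the empirical covariance of
# (f(x_1), f(x_2)) across random networks with the limiting GP covariance.

rng = np.random.default_rng(0)

d, n = 3, 1000               # input dimension, hidden width
num_nets = 5000              # independent draws of the network
X = rng.normal(size=(2, d))  # two fixed inputs x_1, x_2

# f(x) = (1/sqrt(n)) * v . relu(W x) for each sampled network
W = rng.normal(size=(num_nets, n, d))
v = rng.normal(size=(num_nets, n))
H = np.maximum(np.einsum("snd,kd->skn", W, X), 0.0)  # hidden activations at x_1, x_2
F = np.einsum("sn,skn->sk", v, H) / np.sqrt(n)       # outputs, shape (num_nets, 2)

emp_cov = np.cov(F.T)  # empirical covariance of (f(x_1), f(x_2))

def relu_kernel(x, y):
    """First-order arc-cosine kernel: E[relu(w.x) relu(w.y)] for w ~ N(0, I)."""
    nx, ny = np.linalg.norm(x), np.linalg.norm(y)
    cos = np.clip(x @ y / (nx * ny), -1.0, 1.0)
    theta = np.arccos(cos)
    return nx * ny * (np.sin(theta) + (np.pi - theta) * cos) / (2 * np.pi)

lim_cov = np.array([[relu_kernel(a, b) for b in X] for a in X])
print("empirical covariance:\n", emp_cov)
print("limiting GP covariance:\n", lim_cov)
```

At these sizes the two matrices should agree closely, up to Monte Carlo error and a finite-width bias that shrinks as $n$ grows. The note itself concerns the stronger statement that the whole law on the function space $(\mathbb{R}^{d'})^{\mathbb{R}^d}$ converges weakly, not merely these finite-dimensional marginals.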
Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.