Machine Learning Street Talk (MLST)

#69 DR. THOMAS LUX - Interpolation of Sparse High-Dimensional Data



Today we are speaking with Dr. Thomas Lux, a research scientist at Meta in Silicon Valley. 


In some sense, all of supervised machine learning can be framed through the lens of geometry. All training data exists as points in Euclidean space, and we want to predict the value of a function at and between those points. Neural networks appear to be the modus operandi these days for many domains of prediction. In that light, we might ask ourselves: what makes neural networks better than classical techniques like k-nearest neighbours, from a geometric perspective? Our guest today has done research on exactly that problem, trying to define error bounds for approximations in terms of directions, distances, and derivatives.
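To make the geometric framing concrete, here is a minimal k-nearest-neighbour interpolator: the prediction at a query point depends only on Euclidean distances to the training points. This is a toy sketch for illustration, not code from the episode or the papers below.

```python
import numpy as np

def knn_predict(X_train, y_train, x_query, k=3):
    """Predict f(x_query) by averaging the k nearest training values.

    Purely geometric: the prediction depends only on Euclidean
    distances in the raw input space, with every dimension
    weighted equally.
    """
    dists = np.linalg.norm(X_train - x_query, axis=1)
    nearest = np.argsort(dists)[:k]
    return y_train[nearest].mean()

# Toy 1-D example: approximate f(x) = x^2 from five samples.
X = np.array([[0.0], [1.0], [2.0], [3.0], [4.0]])
y = (X ** 2).ravel()
print(knn_predict(X, y, np.array([2.5]), k=2))  # averages f(2) and f(3) -> 6.5
```

The equal weighting of every input dimension is exactly where this method and neural networks part ways, which is the thread the episode pulls on.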


The insights from Thomas's work point to why neural networks are so good at problems where everything else fails, like image recognition. The key is their ability to ignore parts of the input space, perform nonlinear dimension reduction, and concentrate their approximation power on the important parts of the function.
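The "ignoring parts of the input space" point can be sketched numerically. Suppose the target depends on only one coordinate of a 100-dimensional input: raw Euclidean distance (what k-NN uses) is swamped by the 99 irrelevant coordinates, while a projection that collapses them (standing in for a trained first layer) recovers a useful metric. The hand-picked projection `W` below is a hypothetical stand-in, not a learned network.

```python
import numpy as np

rng = np.random.default_rng(0)

# Target depends only on the first coordinate of a 100-D input.
d = 100
X = rng.normal(size=(500, d))
y = np.sin(X[:, 0])
q = rng.normal(size=d)  # query point

# Distances in the full space are dominated by 99 irrelevant coordinates.
full_dists = np.linalg.norm(X - q, axis=1)

# A projection that keeps only coordinate 0 (a stand-in for what a
# trained first layer could learn) discards the irrelevant directions
# before measuring distance.
W = np.zeros((1, d)); W[0, 0] = 1.0
proj_dists = np.abs(X @ W.T - q @ W.T).ravel()

# The nearest neighbour under the projected metric has a target value
# close to the truth; the full-space nearest neighbour typically does not.
print(abs(y[np.argmin(proj_dists)] - np.sin(q[0])))  # small
print(abs(y[np.argmin(full_dists)] - np.sin(q[0])))  # often much larger
```

In this view, a network's early layers buy their approximation power by choosing which directions of the input space are allowed to matter at all.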


[00:00:00] Intro to Show

[00:04:11] Intro to Thomas (Main show kick off)

[00:04:56] Interpolation of Sparse High-Dimensional Data

[00:12:19] Where does one place the basis functions to partition the space, the perennial question

[00:16:20] The sampling phenomenon -- where did all those dimensions come from?

[00:17:40] The placement of the MLP basis functions, they are not where you think they are

[00:23:15] NNs only extrapolate when given explicit priors to do so, CNNs in the translation domain

[00:25:31] Transformers extrapolate in the permutation domain

[00:28:26] NN priors work by creating space junk everywhere

[00:36:44] Are vector spaces the way to go? On discrete problems

[00:40:23] Activation functions

[00:45:57] What can we prove about NNs? Gradients without backprop


Interpolation of Sparse High-Dimensional Data [Lux]

https://tchlux.github.io/papers/tchlux-2020-NUMA.pdf


A Spline Theory of Deep Learning [Balestriero]

https://proceedings.mlr.press/v80/balestriero18b.html


Gradients without Backpropagation [Baydin et al., 2022]

https://arxiv.org/pdf/2202.08587.pdf
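The core idea of the forward-gradient paper linked above is that a single directional derivative, scaled by its random direction, is an unbiased estimate of the full gradient: E[(∇f·v) v] = ∇f when v ~ N(0, I). A minimal sketch of that estimator follows; the paper computes ∇f·v with forward-mode AD, and the central finite difference here is only a stand-in for that single forward pass.

```python
import numpy as np

def forward_gradient(f, x, rng, eps=1e-6):
    """Unbiased gradient estimate from one directional derivative.

    Sample v ~ N(0, I), evaluate the directional derivative grad(f).v
    (here via a central finite difference, in place of forward-mode AD),
    and return (grad(f).v) * v.
    """
    v = rng.normal(size=x.shape)
    dd = (f(x + eps * v) - f(x - eps * v)) / (2 * eps)  # grad(f) . v
    return dd * v

f = lambda x: np.sum(x ** 2)  # true gradient: 2x
rng = np.random.default_rng(0)
x = np.array([1.0, -2.0, 3.0])

# Each one-shot estimate is noisy but unbiased; averaging many of them
# converges to the true gradient [2, -4, 6].
est = np.mean([forward_gradient(f, x, rng) for _ in range(20000)], axis=0)
print(np.round(est, 1))
```

The appeal is that no backward pass (and no stored activations) is needed; the price is estimator variance, which the averaging above is only illustrating, not solving.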
