Machine Learning Street Talk (MLST)

#69 DR. THOMAS LUX - Interpolation of Sparse High-Dimensional Data

Today we are speaking with Dr. Thomas Lux, a research scientist at Meta in Silicon Valley. 


In some sense, all of supervised machine learning can be framed through the lens of geometry. All training data exists as points in Euclidean space, and we want to predict the value of a function at all of those points. Neural networks appear to be the modus operandi these days for many domains of prediction. In that light, we might ask ourselves: from a geometric perspective, what makes neural networks better than classical techniques like k-nearest neighbour? Our guest today has done research on exactly that problem, trying to define error bounds for approximations in terms of directions, distances, and derivatives.
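
As a rough illustration of that geometric framing (my own sketch, not code from the episode; the data and function are made up), here is a minimal k-nearest-neighbour predictor that treats supervised learning as estimating a function's value at points in Euclidean space:

```python
import numpy as np

# Toy target: a smooth function sampled at scattered points in 2-D Euclidean space.
rng = np.random.default_rng(0)
X_train = rng.uniform(-1.0, 1.0, size=(200, 2))        # training points
y_train = np.sin(3.0 * X_train[:, 0]) * X_train[:, 1]  # values of the "unknown" function

def knn_predict(X_query, X_train, y_train, k=5):
    """Predict f at query points by averaging the k nearest training values."""
    # Pairwise Euclidean distances between query and training points.
    dists = np.linalg.norm(X_query[:, None, :] - X_train[None, :, :], axis=-1)
    nearest = np.argsort(dists, axis=1)[:, :k]          # indices of the k closest points
    return y_train[nearest].mean(axis=1)                # simple unweighted average

X_test = rng.uniform(-1.0, 1.0, size=(50, 2))
y_test = np.sin(3.0 * X_test[:, 0]) * X_test[:, 1]
y_hat = knn_predict(X_test, X_train, y_train, k=5)
print("mean absolute error:", np.abs(y_hat - y_test).mean())
```

The geometric question the episode digs into is how a neural network's implicit partition of this space differs from the nearest-neighbour partition above.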


The insights from Thomas's work point at why neural networks are so good at problems where everything else fails, like image recognition. The key is their ability to ignore parts of the input space, perform nonlinear dimension reduction, and concentrate their approximation power on the important parts of the function.
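
A small, hypothetical sketch of that "ignore parts of the input space" point (again my own illustration, not Lux's code): pad a 1-D problem with irrelevant dimensions and compare nearest-neighbour error in the full space against the same estimator restricted to the relevant coordinate.

```python
import numpy as np

rng = np.random.default_rng(1)

def knn_error(X_train, y_train, X_test, y_test, k=5):
    """Mean absolute error of k-NN regression on the given train/test split."""
    dists = np.linalg.norm(X_test[:, None, :] - X_train[None, :, :], axis=-1)
    nearest = np.argsort(dists, axis=1)[:, :k]
    return np.abs(y_train[nearest].mean(axis=1) - y_test).mean()

# The function depends only on the first coordinate; the other 19 are irrelevant noise.
d_total, d_relevant = 20, 1
X_train = rng.uniform(-1, 1, size=(300, d_total))
X_test = rng.uniform(-1, 1, size=(100, d_total))
y_train = np.sin(3 * X_train[:, 0])
y_test = np.sin(3 * X_test[:, 0])

print("k-NN in the full 20-D space     :", knn_error(X_train, y_train, X_test, y_test))
print("k-NN on the relevant 1-D subspace:",
      knn_error(X_train[:, :d_relevant], y_train, X_test[:, :d_relevant], y_test))
```

Distances in the full space are dominated by the junk coordinates, so error is much worse; a model that learns to project them away (as a trained network effectively can) recovers the easy 1-D problem.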


[00:00:00] Intro to Show

[00:04:11] Intro to Thomas (Main show kick off)

[00:04:56] Interpolation of Sparse High-Dimensional Data

[00:12:19] Where does one place the basis functions to partition the space, the perennial question

[00:16:20] The sampling phenomenon -- where did all those dimensions come from?

[00:17:40] The placement of the MLP basis functions, they are not where you think they are

[00:23:15] NNs only extrapolate when given explicit priors to do so, CNNs in the translation domain

[00:25:31] Transformers extrapolate in the permutation domain

[00:28:26] NN priors work by creating space junk everywhere

[00:36:44] Are vector spaces the way to go? On discrete problems

[00:40:23] Activation functions

[00:45:57] What can we prove about NNs? Gradients without backprop


Interpolation of Sparse High-Dimensional Data [Lux]

https://tchlux.github.io/papers/tchlux-2020-NUMA.pdf


A Spline Theory of Deep Learning [Balestriero]

https://proceedings.mlr.press/v80/balestriero18b.html


Gradients without Backpropagation ‘22

https://arxiv.org/pdf/2202.08587.pdf
