Machine Learning Street Talk (MLST)

#101 DR. WALID SABA - Extrapolation, Compositionality and Learnability

02.10.2023 - By Machine Learning Street Talk (MLST)


MLST Discord! https://discord.gg/aNPkGUQtc5

Patreon: https://www.patreon.com/mlst

YT: https://youtu.be/snUf_LIfQII

We had a discussion with Dr. Walid Saba about whether MLP neural networks can extrapolate outside of their training support, and what it even means to extrapolate in a vector space. Then we discussed the concept of vagueness in cognitive science: for example, what does it mean to be "rich", or what counts as a "pile of sand"? Finally, we discussed behaviourism and the "reward is enough" hypothesis.
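
An aside for readers: one common way to make "outside the training support" precise is the convex-hull definition of interpolation (used, for instance, in Balestriero et al.'s "Learning in High Dimension Always Amounts to Extrapolation"): a query point interpolates the training set only if it lies inside the convex hull of the training points. The sketch below is ours, not from the episode; the helper `in_convex_hull` is hypothetical and tests hull membership with a feasibility linear program.

```python
# Sketch: convex-hull test for interpolation vs extrapolation.
# A point x is in the convex hull of rows of `points` iff there exist
# lambda_i >= 0 with sum(lambda) == 1 and points.T @ lambda == x.
import numpy as np
from scipy.optimize import linprog

def in_convex_hull(x, points):
    """Return True if x is a convex combination of the rows of `points`."""
    n, d = points.shape
    # Equality constraints stack the combination and the sum-to-one condition.
    A_eq = np.vstack([points.T, np.ones((1, n))])
    b_eq = np.concatenate([x, [1.0]])
    # Zero objective: we only care whether the LP is feasible.
    res = linprog(c=np.zeros(n), A_eq=A_eq, b_eq=b_eq,
                  bounds=[(0, None)] * n, method="highs")
    return res.success

rng = np.random.default_rng(0)
train = rng.normal(size=(50, 2))                    # 50 training points in 2-D
print(in_convex_hull(train.mean(axis=0), train))    # True: the centroid interpolates
print(in_convex_hull(np.array([10., 10.]), train))  # False: far outside the hull
```

Under this definition, as the paper's title suggests, almost every query point in high dimension is an extrapolation, which is part of why the episode asks what extrapolation in a vector space should mean at all.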

References:

A Spline Theory of Deep Networks [Balestriero]

https://proceedings.mlr.press/v80/balestriero18b/balestriero18b.pdf

The animation we showed of the spline theory was created by Ahmed Imtiaz Humayun (https://twitter.com/imtiazprio) and we will be showing an interview with Imtiaz and Randall very soon!
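
The core observation of the spline view is that a ReLU network computes a continuous piecewise-affine function: the on/off pattern of the ReLUs partitions input space into regions, and on each region the network is exactly one affine map. Below is a minimal sketch of that identity for a one-hidden-layer MLP; it is our illustration (with hypothetical random weights), not code from the paper.

```python
# Sketch: a ReLU MLP is piecewise affine; the ReLU activation pattern
# identifies which affine "piece" (linear region) an input falls in.
import numpy as np

rng = np.random.default_rng(1)
W1, b1 = rng.normal(size=(16, 2)), rng.normal(size=16)   # hypothetical weights
W2, b2 = rng.normal(size=(1, 16)), rng.normal(size=1)

def forward(x):
    h = W1 @ x + b1
    mask = (h > 0).astype(float)      # activation pattern = region code
    return W2 @ (mask * h) + b2, mask

def affine_piece(mask):
    """On a fixed region, the network *is* the affine map x -> A x + c."""
    A = W2 @ (np.diag(mask) @ W1)
    c = W2 @ (mask * b1) + b2
    return A, c

x = rng.normal(size=2)
y, mask = forward(x)
A, c = affine_piece(mask)
print(np.allclose(y, A @ x + c))      # True: the piecewise-affine identity holds
```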

[00:00:00] Intro

[00:00:58] Interpolation vs Extrapolation

[00:24:38] Type 1 / Type 2 generalisation and compositionality / Fodor / systematicity

[00:32:18] Keith's brain teaser

[00:36:53] Neural Turing machines / discrete vs continuous / learnability
