Machine Learning Street Talk (MLST)

Jonas Hübotter (ETH) - Test Time Inference



Jonas Hübotter, PhD student at ETH Zurich's Institute for Machine Learning, discusses his research on test-time computation and local learning. He demonstrates how smaller models can outperform models up to 30x their size through strategic test-time computation, and introduces a paradigm that combines inductive and transductive learning.
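
To make the local-learning idea concrete, here is a toy sketch of test-time adaptation in plain numpy: retrieve the training examples closest to the test query and take a few gradient steps on just that neighborhood before predicting. The synthetic data, the linear model, and the helper name local_update are invented for illustration; this is not the exact method from the papers linked below.

```python
import numpy as np

# Toy illustration of test-time local learning (hypothetical, numpy-only):
# given a test query, retrieve the nearest training examples by embedding
# distance and take a few gradient steps on just those examples.

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 16))           # synthetic "training" embeddings
w_true = rng.normal(size=16)
y = X @ w_true + 0.05 * rng.normal(size=200)

w_base = np.zeros(16)                    # a deliberately weak "base model"

def local_update(w, x_query, k=10, steps=20, lr=0.05):
    # 1) retrieve the k training points closest to the query
    idx = np.argsort(np.linalg.norm(X - x_query, axis=1))[:k]
    X_local, y_local = X[idx], y[idx]
    # 2) a few steps of least-squares gradient descent on that neighborhood
    for _ in range(steps):
        grad = X_local.T @ (X_local @ w - y_local) / k
        w = w - lr * grad
    return w

x_query = rng.normal(size=16)
w_local = local_update(w_base, x_query)
print("base prediction:", x_query @ w_base)
print("locally adapted prediction:", x_query @ w_local)
```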


Using Bayesian linear regression as a surrogate model for uncertainty estimation, Jonas explains how models can efficiently adapt to specific tasks without massive pre-training. He draws an analogy to Google Earth's variable resolution system to illustrate dynamic resource allocation based on task complexity.
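
As a rough illustration of that surrogate idea, the sketch below fits a Bayesian linear regression over synthetic feature embeddings and reports the posterior predictive variance at a new point, which is the kind of uncertainty signal that could guide what data to select at test time. The function names and toy data are assumptions made for this example, not the formulation used in the episode or the linked papers.

```python
import numpy as np

# Bayesian linear regression as a simple uncertainty surrogate (toy sketch).
# Prior: w ~ N(0, alpha^{-1} I); Gaussian likelihood with noise precision beta.

def posterior(Phi, y, alpha=1.0, beta=1.0):
    d = Phi.shape[1]
    S_inv = alpha * np.eye(d) + beta * Phi.T @ Phi   # posterior precision
    S = np.linalg.inv(S_inv)                         # posterior covariance
    m = beta * S @ Phi.T @ y                         # posterior mean
    return m, S

def predictive_variance(x, S, beta=1.0):
    # Predictive variance at a new embedding x: noise + parameter uncertainty
    return 1.0 / beta + x @ S @ x

# Toy usage: 50 random 8-dimensional embeddings with noisy linear targets.
rng = np.random.default_rng(0)
Phi = rng.normal(size=(50, 8))
w_true = rng.normal(size=8)
y = Phi @ w_true + 0.1 * rng.normal(size=50)

m, S = posterior(Phi, y)
x_new = rng.normal(size=8)
print("predictive variance at x_new:", predictive_variance(x_new, S))
```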


The conversation explores the future of AI architecture, envisioning systems that continuously learn and adapt beyond current monolithic models. Jonas concludes by proposing hybrid deployment strategies combining local and cloud computation, suggesting a future where compute resources are allocated based on task complexity rather than fixed model size.


This research represents a significant shift in machine learning, prioritizing intelligent resource allocation and adaptive learning over traditional scaling approaches.


SPONSOR MESSAGES:

CentML offers competitive pricing for GenAI model deployment, with flexible options to suit a wide range of models, from small to large-scale deployments.

https://centml.ai/pricing/


Tufa AI Labs is a brand-new research lab in Zurich, started by Benjamin Crouzier and focused on ARC and AGI. They just acquired MindsAI, the current winners of the ARC challenge. Are you interested in working on ARC, or getting involved in their events? Go to https://tufalabs.ai/


Transcription, references and show notes PDF download:

https://www.dropbox.com/scl/fi/cxg80p388snwt6qbp4m52/JonasFinal.pdf?rlkey=glk9mhpzjvesanlc14rtpvk4r&st=6qwi8n3x&dl=0


Jonas Hübotter

https://jonhue.github.io/

https://scholar.google.com/citations?user=pxi_RkwAAAAJ


Transductive Active Learning: Theory and Applications (NeurIPS 2024)

https://arxiv.org/pdf/2402.15898


Efficiently Learning at Test-Time: Active Fine-Tuning of LLMs (SIFT)

https://arxiv.org/pdf/2410.08020


TOC:

1. Test-Time Computation Fundamentals

[00:00:00] Intro

[00:03:10] 1.1 Test-Time Computation and Model Performance Comparison

[00:05:52] 1.2 Retrieval Augmentation and Machine Teaching Strategies

[00:09:40] 1.3 In-Context Learning vs Fine-Tuning Trade-offs


2. System Architecture and Intelligence

[00:15:58] 2.1 System Architecture and Intelligence Emergence

[00:23:22] 2.2 Active Inference and Constrained Agency in AI

[00:29:52] 2.3 Evolution of Local Learning Methods

[00:32:05] 2.4 Vapnik's Contributions to Transductive Learning


3. Resource Optimization and Local Learning

[00:34:35] 3.1 Computational Resource Allocation in ML Models

[00:35:30] 3.2 Historical Context and Traditional ML Optimization

[00:37:55] 3.3 Variable Resolution Processing and Active Inference in ML

[00:43:01] 3.4 Local Learning and Base Model Capacity Trade-offs

[00:48:04] 3.5 Active Learning vs Local Learning Approaches


4. Information Retrieval and Model Interpretability

[00:51:08] 4.1 Information Retrieval and Nearest Neighbor Limitations

[01:03:07] 4.2 Model Interpretability and Surrogate Models

[01:15:03] 4.3 Bayesian Uncertainty Estimation and Surrogate Models


5. Distributed Systems and Deployment

[01:23:56] 5.1 Memory Architecture and Controller Systems

[01:28:14] 5.2 Evolution from Static to Distributed Learning Systems

[01:38:03] 5.3 Transductive Learning and Model Specialization

[01:41:58] 5.4 Hybrid Local-Cloud Deployment Strategies
