Machine Learning Street Talk (MLST)

#114 - Secrets of Deep Reinforcement Learning (Minqi Jiang)



Patreon: https://www.patreon.com/mlst

Discord: https://discord.gg/ESrGqhf5CB
Twitter: https://twitter.com/MLStreetTalk


In this exclusive interview, Dr. Tim Scarfe sits down with Minqi Jiang, a leading PhD student at University College London and Meta AI, to delve into the fascinating world of deep reinforcement learning (RL) and its impact on technology, startups, and research. Discover how Minqi decided to pursue a PhD in this exciting field, and hear the lessons he took away from his startup experience.

Minqi shares his insights into balancing serendipity and planning in life and research, and explains the role of objectives and Goodhart's Law in decision-making. Get ready to explore the depths of robustness in RL, two-player zero-sum games, and the differences between RL and supervised learning.

As they discuss the role of the environment in intelligence, emergence, and abstraction, prepare to be blown away by the possibilities of open-endedness and the intelligence explosion. Learn how language models can generate their own training data, and hear about the limitations of RL and the future of Software 2.0, along with its interpretability concerns.

From robotics and open-ended learning applications to learning potential metrics and MDPs, this interview is a goldmine of information for anyone interested in AI, RL, and the cutting edge of technology. Don't miss out on this incredible opportunity to learn from a rising star in the AI world!

TOC

Tech & Startup Background [00:00:00]

Pursuing PhD in Deep RL [00:03:59]

Startup Lessons [00:11:33]

Serendipity vs Planning [00:12:30]

Objectives & Decision Making [00:19:19]

Minimax Regret & Uncertainty [00:22:57]

Robustness in RL & Zero-Sum Games [00:26:14]

RL vs Supervised Learning [00:34:04]

Exploration & Intelligence [00:41:27]

Environment, Emergence, Abstraction [00:46:31]

Open-endedness & Intelligence Explosion [00:54:28]

Language Models & Training Data [01:04:59]

RLHF & Language Models [01:16:37]

Creativity in Language Models [01:27:25]

Limitations of RL [01:40:58]

Software 2.0 & Interpretability [01:45:11]

Language Models & Code Reliability [01:48:23]

Robust Prioritized Level Replay [01:51:42]

Open-ended Learning [01:55:57]

Auto-curriculum & Deep RL [02:08:48]

Robotics & Open-ended Learning [02:31:05]

Learning Potential & MDPs [02:36:20]

Universal Function Space [02:42:02]

Goal-Directed Learning & Auto-Curricula [02:42:48]

Advice & Closing Thoughts [02:44:47]


References:

- Why Greatness Cannot Be Planned: The Myth of the Objective by Kenneth O. Stanley and Joel Lehman

https://www.springer.com/gp/book/9783319155234

- General Intelligence Requires Rethinking Exploration

https://arxiv.org/abs/2106.06860

- The Case for Strong Emergence (Sabine Hossenfelder)

https://arxiv.org/abs/2102.07740

- The Game of Life (Conway)

https://www.conwaylife.com/

- Toolformer: Language Models Can Teach Themselves to Use Tools (Meta AI)

https://arxiv.org/abs/2302.04761

- POET: Paired Open-Ended Trailblazer (Uber AI Labs)

https://arxiv.org/abs/1901.01753

- Schmidhuber's Artificial Curiosity

https://people.idsia.ch/~juergen/interest.html

- Gödel Machines

https://people.idsia.ch/~juergen/goedelmachine.html

- PowerPlay

https://arxiv.org/abs/1112.5309

- Robust Prioritized Level Replay

https://openreview.net/forum?id=NfZ6g2OmXEk

- Emergent Complexity and Zero-shot Transfer via Unsupervised Environment Design

https://arxiv.org/abs/2012.02096

- ACCEL: Evolving Curricula with Regret-Based Environment Design

https://arxiv.org/abs/1901.05431

- Go-Explore: A New Approach for Hard-Exploration Problems

https://arxiv.org/abs/1901.10995

- Learning with AMIGo: Adversarially Motivated Intrinsic Goals

https://www.researchgate.net/publication/342377312_Learning_with_AMIGo_Adversarially_Motivated_Intrinsic_Goals


- Pattern Recognition and Machine Learning (Bishop)

https://www.microsoft.com/en-us/research/uploads/prod/2006/01/Bishop-Pattern-Recognition-and-Machine-Learning-2006.pdf

- Reinforcement Learning: An Introduction (Sutton and Barto)

https://web.stanford.edu/class/psych209/Readings/SuttonBartoIPRLBook2ndEd.pdf
