Learning Bayesian Statistics

#90, Demystifying MCMC & Variational Inference, with Charles Margossian



Proudly sponsored by PyMC Labs, the Bayesian Consultancy. Book a call, or get in touch!

  • My Intuitive Bayes Online Courses
  • 1:1 Mentorship with me

What’s the difference between MCMC and Variational Inference (VI)? Why is MCMC called an approximate method? When should we use VI instead of MCMC?

These are some of the captivating (and practical) questions we’ll tackle in this episode. I had the chance to interview Charles Margossian, a research fellow in computational mathematics at the Flatiron Institute, and a core developer of the Stan software.

Charles was born and raised in Paris, and then moved to the US to pursue a bachelor’s degree in physics at Yale University. After graduating, he worked for two years in biotech, and went on to do a PhD in statistics at Columbia University with someone named… Andrew Gelman — you may have heard of him.

Charles also specializes in pharmacometrics and epidemiology, so we talked about some practical applications of Bayesian methods and algorithms in these fascinating fields.

Oh, and Charles’ life doesn’t only revolve around computers: he practices ballroom dancing and pickup soccer, and used to do improvised musical comedy!

Our theme music is « Good Bayesian », by Baba Brinkman (feat MC Lars and Mega Ran). Check out his awesome work at https://bababrinkman.com/ !

Thank you to my Patrons for making this episode possible!

Yusuke Saito, Avi Bryant, Ero Carrera, Giuliano Cruz, Tim Gasser, James Wade, Tradd Salvo, William Benton, James Ahloy, Robin Taylor, Chad Scherrer, Zwelithini Tunyiswa, Bertrand Wilden, James Thompson, Stephen Oates, Gian Luca Di Tanna, Jack Wells, Matthew Maldonado, Ian Costley, Ally Salim, Larry Gill, Ian Moran, Paul Oreto, Colin Caprani, Colin Carroll, Nathaniel Burbank, Michael Osthege, Rémi Louf, Clive Edelsten, Henri Wallen, Hugo Botha, Vinh Nguyen, Raul Maldonado, Marcin Elantkowski, Adam C. Smith, Will Kurt, Andrew Moskowitz, Hector Munoz, Marco Gorelli, Simon Kessell, Bradley Rode, Patrick Kelley, Rick Anderson, Casper de Bruin, Philippe Labonde, Michael Hankin, Cameron Smith, Tomáš Frýda, Ryan Wesslen, Andreas Netti, Riley King, Yoshiyuki Hamajima, Sven De Maeyer, Michael DeCrescenzo, Fergal M, Mason Yahr, Naoya Kanai, Steven Rowland, Aubrey Clayton, Jeannine Sue, Omri Har Shemesh, Scott Anthony Robson, Robert Yolken, Or Duek, Pavel Dusek, Paul Cox, Trey Causey, Andreas Kröpelin, Raphaël R, Nicolas Rode, Gabriel Stechschulte, Arkady, Kurt TeKolste, Gergely Juhasz, Marcus Nölke, Maggi Mackintosh, Grant Pezzolesi, Avram Aelony, Joshua Meehl, Javier Sabio, Kristian Higgins, Alex Jones, Gregorio Aguilar and Matt Rosinski.

Visit https://www.patreon.com/learnbayesstats to unlock exclusive Bayesian swag ;)

Links from the show:

  • Charles’ website: https://charlesm93.github.io/
  • Charles on Twitter: https://twitter.com/charlesm993
  • Charles on GitHub: https://github.com/charlesm93
  • Charles on Google Scholar: https://scholar.google.com/citations?user=nPtLsvIAAAAJ&hl=en
  • Stan software: https://mc-stan.org/
  • Torsten – Applications of Stan in Pharmacometrics: https://github.com/metrumresearchgroup/Torsten
  • R̂ – Assessing the convergence of Markov chain Monte Carlo when running many short chains: https://arxiv.org/abs/2110.13017
  • Revisiting the Gelman-Rubin Diagnostic: https://arxiv.org/abs/1812.09384
  • An importance sampling approach for reliable and efficient inference in Bayesian ordinary differential equation models: https://arxiv.org/abs/2205.09059
  • Pathfinder – Parallel quasi-Newton variational inference: https://arxiv.org/pdf/2108.03782.pdf
  • Bayesian workflow for disease transmission modeling in Stan: https://mc-stan.org/users/documentation/case-studies/boarding_school_case_study.html
  • LBS #76 – The Past, Present & Future of Stan, with Bob Carpenter: https://learnbayesstats.com/episode/76-past-present-future-of-stan-bob-carpenter/
  • LBS #51 – Bernoulli’s Fallacy & the Crisis of Modern Science, with Aubrey Clayton: https://learnbayesstats.com/episode/51-bernoullis-fallacy-crisis-modern-science-aubrey-clayton/
  • Flatiron Institute: https://www.simonsfoundation.org/flatiron/
  • Simons Foundation: https://www.simonsfoundation.org/


Abstract

by Christoph Bamberg

In episode 90 we cover both methodological advances, namely variational inference and MCMC sampling, and their application in pharmacometrics.

And we have just the right guest for this topic: Charles Margossian! You might know Charles from his work on Stan, from his workshops, or from his current position at the Flatiron Institute.

His main focus now is on two topics: variational inference and MCMC sampling. When is variational inference (or approximate Bayesian methods more broadly) appropriate? And when does it fail? Charles answers these questions convincingly, clearing up some of the confusion around the topic.
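To make this concrete, here is a minimal NumPy sketch (an illustration of a well-known failure mode, not code from the episode): for a correlated Gaussian posterior, the optimal mean-field (fully factorized) Gaussian approximation matches the conditional variances rather than the marginal ones, and therefore underestimates uncertainty.

```python
import numpy as np

# Toy "posterior": a 2-D Gaussian with strong correlation.
rho = 0.9
Sigma = np.array([[1.0, rho],
                  [rho, 1.0]])       # true posterior covariance
Lambda = np.linalg.inv(Sigma)        # precision matrix

# For a Gaussian target, the optimal mean-field Gaussian approximation
# has variances 1 / Lambda_ii: the conditional variances of each
# coordinate given the other, not the marginal variances.
vi_var = 1.0 / np.diag(Lambda)
true_var = np.diag(Sigma)

print("true marginal variances:", true_var)  # [1.0, 1.0]
print("mean-field VI variances:", vi_var)    # [0.19, 0.19], badly underestimated
```

An MCMC sampler targeting the same distribution would, given enough draws, recover the full correlated posterior, correlations and all.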

In his work on MCMC, he tries to answer some fundamental questions: How much computational power should we invest? When is MCMC sampling more appropriate than approximate Bayesian methods? The short answer: when you care about quantifying uncertainty. We also talk about what the R-hat convergence diagnostic actually measures and how to improve on it with the nested R-hat.
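As a refresher on what R-hat compares (a generic sketch of the classic between- versus within-chain idea, not the nested R-hat from Charles' paper), here is a small NumPy implementation run on well-mixed and on poorly mixed simulated chains:

```python
import numpy as np

def rhat(chains):
    """Classic potential scale reduction factor for one scalar parameter.

    chains: array of shape (n_chains, n_draws).
    """
    m, n = chains.shape
    chain_means = chains.mean(axis=1)
    chain_vars = chains.var(axis=1, ddof=1)
    B = n * chain_means.var(ddof=1)      # between-chain variance
    W = chain_vars.mean()                # within-chain variance
    var_plus = (n - 1) / n * W + B / n   # pooled posterior-variance estimate
    return np.sqrt(var_plus / W)

rng = np.random.default_rng(0)
mixed = rng.normal(0.0, 1.0, size=(4, 1000))   # 4 chains exploring the same target
stuck = mixed + np.arange(4)[:, None]          # 4 chains centered at 0, 1, 2, 3

print(rhat(mixed))   # close to 1.0 -> chains agree
print(rhat(stuck))   # well above 1.1 -> chains disagree, keep sampling
```

Nested R-hat, the subject of the paper linked above, extends this between/within comparison to the regime of running many short chains in parallel.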

After covering these two topics, we move on to his applied work in pharmacometrics. For example, he has worked on modelling how quickly drugs dissolve in the body and on the role genetics plays in how drugs act.

Charles also helps make Bayesian methods more accessible to pharmacologists: he co-developed Torsten, a library for Stan that facilitates Bayesian analysis of pharmacometric data.

We discuss the nature of pharmacometric data and how it is usually modelled with ordinary differential equations (ODEs).
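To give a flavour of what such a model looks like, here is a minimal one-compartment pharmacokinetic sketch with first-order absorption and elimination (a generic textbook model with made-up parameter values, not one of the models discussed in the episode). In a Bayesian analysis, parameters like the clearance CL and volume V would get priors and be inferred from measured concentrations.

```python
import numpy as np
from scipy.integrate import solve_ivp

# One-compartment pharmacokinetic model with first-order absorption:
# gut -> central compartment -> elimination.
ka, CL, V = 1.5, 5.0, 35.0   # absorption rate (1/h), clearance (L/h), volume (L)
dose = 100.0                 # oral dose (mg) placed in the gut at t = 0

def pk_rhs(t, y):
    gut, central = y
    d_gut = -ka * gut
    d_central = ka * gut - (CL / V) * central
    return [d_gut, d_central]

t_obs = np.linspace(0.0, 24.0, 25)                         # sampling times (h)
sol = solve_ivp(pk_rhs, (0.0, 24.0), [dose, 0.0], t_eval=t_obs)
concentration = sol.y[1] / V                               # plasma concentration (mg/L)

print(concentration.round(3))
```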

Finally, we briefly cover one practical example of this kind of ODE-based modelling: the COVID-19 pandemic.

All in all, episode 90 is another detailed one, covering many state-of-the-art techniques and their application.


Transcript

This is an automatic transcript and may therefore contain errors. Please get in touch if you're willing to correct them.
