Machine Learning Street Talk (MLST)

#92 - SARA HOOKER - Fairness, Interpretability, Language Models



Support us! https://www.patreon.com/mlst

Sara Hooker is an exceptionally talented and accomplished leader and research scientist in the field of machine learning. She is the founder of Cohere For AI, a non-profit research lab that seeks to solve complex machine learning problems. She is passionate about creating more points of entry into machine learning research and has dedicated her efforts to understanding how progress in the field can be translated into reliable and accessible machine learning in the real world.

Sara is also the co-founder of the Trustworthy ML Initiative, a forum and seminar series related to Trustworthy ML. She is on the advisory board of Patterns and is an active member of the MLC research group, which has a focus on making participation in machine learning research more accessible.

Before starting Cohere For AI, Sara worked as a research scientist at Google Brain. She has written several influential research papers, including "The Hardware Lottery", "The Low-Resource Double Bind: An Empirical Study of Pruning for Low-Resource Machine Translation", "Moving Beyond 'Algorithmic Bias is a Data Problem'" and "Characterizing and Mitigating Bias in Compact Models".

In addition to her research work, Sara is also the founder of the local Bay Area non-profit Delta Analytics, which works with non-profits and communities all over the world to build technical capacity and empower others to use data. She regularly gives tutorials on machine learning fundamentals, interpretability, model compression and deep neural networks and is dedicated to collaborating with independent researchers around the world.

Sara Hooker is best known for her paper introducing the concept of the 'hardware lottery': the idea that a research direction succeeds not because of its inherent superiority, but because of its compatibility with the software and hardware available at the time. She argued that choices about software and hardware played a substantial role in determining which ideas won out in early computer science, and that as the hardware landscape grows more heterogeneous, the gains from advances in computing may become increasingly unevenly distributed. Sara proposed that an interim goal should be to create better feedback mechanisms for researchers to understand how their algorithms interact with the hardware they use. She suggested that domain-specific languages, auto-tuning of algorithmic parameters, and better profiling tools could help alleviate this problem, as well as give researchers more informed opinions about how hardware and software should progress. Ultimately, Sara encouraged researchers to be mindful of the implications of the hardware lottery, since it can mean that progress on some research directions is further obstructed. If you want to learn more about that paper, watch our previous interview with Sara.

YT version: https://youtu.be/7oJui4eSCoY

MLST Discord: https://discord.gg/aNPkGUQtc5

TOC:

[00:00:00] Intro

[00:02:53] Interpretability / Fairness

[00:35:29] LLMs


Find Sara:

https://www.sarahooker.me/

https://twitter.com/sarahookr
