Into AI Safety

Scaling AI Safety Through Mentorship w/ Dr. Ryan Kidd


What does it actually take to build a successful AI safety organization? I'm joined by Dr. Ryan Kidd, who has co-led MATS as it grew from a small pilot program into one of the field's premier talent pipelines. In this episode, he reveals the low-hanging fruit in AI safety field-building that most people are missing: the amplifier archetype.

I pushed Ryan on some hard questions, from balancing funder priorities and research independence to building a robust selection process for both mentors and participants. Whether you're considering a career pivot into AI safety or already working in the field, this conversation offers practical advice on how to actually make an impact.

Chapters

  • (00:00) - Intro
  • (08:16) - Building MATS Post-FTX & Summer of Love
  • (13:09) - Balancing Funder Priorities and Research Independence
  • (19:44) - The MATS Selection Process
  • (33:15) - Talent Archetypes in AI Safety
  • (50:22) - Comparative Advantage and Career Capital in AI Safety
  • (01:04:35) - Building the AI Safety Ecosystem
  • (01:15:28) - What Makes a Great AI Safety Amplifier
  • (01:21:44) - Lightning Round Questions
  • (01:30:30) - Final Thoughts & Outro

Links

    • MATS

    Ryan's Writing

    • LessWrong post - Talent needs of technical AI safety teams
    • LessWrong post - AI safety undervalues founders
    • LessWrong comment - Comment permalink with 2025 MATS program details
    • LessWrong post - Talk: AI Safety Fieldbuilding at MATS
    • LessWrong post - MATS Mentor Selection
    • LessWrong post - Why I funded PIBBSS
    • EA Forum post - How MATS addresses mass movement building concerns

    FTX Funding of AI Safety

    • LessWrong post - An Overview of the AI Safety Funding Situation
    • Fortune article - Why Sam Bankman-Fried’s FTX debacle is roiling A.I. research
    • Cointelegraph article - FTX probes $6.5M in payments to AI safety group amid clawback crusade
    • FTX Future Fund article - Future Fund June 2022 Update (archive)
    • Tracxn page - Anthropic Funding and Investors

    Training & Support Programs

    • Catalyze Impact
    • Seldon Lab
    • SPAR
    • BlueDot Impact
    • Y Combinator
    • Pivotal
    • Athena
    • Astra Fellowship
    • Horizon Fellowship
    • BASE Fellowship
    • LASR Labs
    • Entrepreneur First

    Funding Organizations

    • Coefficient Giving (previously Open Philanthropy)
    • LTFF
    • Longview Philanthropy
    • Renaissance Philanthropy

    Coworking Spaces

    • LISA
    • Mox
    • Lighthaven
    • FAR Labs
    • Constellation
    • Collider
    • NET Office
    • BAISH

    Research Organizations & Startups

    • Atla AI
    • Apollo Research
    • Timaeus
    • RAND CAST
    • CHAI

    Other Sources

    • AXRP website - The AI X-risk Research Podcast
    • LessWrong post - Shard Theory: An Overview
Into AI Safety, by Jacob Haimes