muckrAIkers

Understanding AI World Models w/ Chris Canal



Chris Canal, co-founder of EquiStamp, joins muckrAIkers as our first-ever podcast guest! In this ~3.5-hour interview, we discuss intelligence vs. competencies, the importance of test-time compute, moving goalposts, the orthogonality thesis, and much more.

A seasoned software developer, Chris founded EquiStamp in late 2023 to improve our understanding of model failure modes and capabilities. Now a key contractor for METR, EquiStamp evaluates the next generation of LLMs from frontier model developers like OpenAI and Anthropic.

EquiStamp is hiring, so if you're a software developer interested in a fully remote opportunity with flexible working hours, join the EquiStamp Discord server and message Chris directly; oh, and let him know muckrAIkers sent you!


  • (00:00) - Recording date
  • (00:05) - Intro
  • (00:29) - Hot off the press
  • (02:17) - Introducing Chris Canal
  • (19:12) - World/risk models
  • (35:21) - Competencies + decision making power
  • (42:09) - Breaking models down
  • (01:05:06) - Timelines, test-time compute
  • (01:19:17) - Moving goalposts
  • (01:26:34) - Risk management pre-AGI
  • (01:46:32) - Happy endings
  • (01:55:50) - Causal chains
  • (02:04:49) - Appetite for democracy
  • (02:20:06) - Tech-frame based fallacies
  • (02:39:56) - Bringing back real capitalism
  • (02:45:23) - Orthogonality Thesis
  • (03:04:31) - Why we do this
  • (03:15:36) - EquiStamp!

  • Links

    • EquiStamp
    • Chris's Twitter
    • METR Paper - RE-Bench: Evaluating frontier AI R&D capabilities of language model agents against human experts
    • All Trades article - Learning from History: Preventing AGI Existential Risks through Policy by Chris Canal
    • Better Systems article - The Omega Protocol: Another Manhattan Project

    Superintelligence & Commentary

    • Wikipedia article - Superintelligence: Paths, Dangers, Strategies by Nick Bostrom
    • Reflective Altruism article - Against the singularity hypothesis (Part 5: Bostrom on the singularity)
    • Into AI Safety Interview - Scaling Democracy w/ Dr. Igor Krawczuk

    Referenced Sources

    • Book - Man-made Catastrophes and Risk Information Concealment: Case Studies of Major Disasters and Human Fallibility
    • Artificial Intelligence Paper - Reward is Enough
    • Wikipedia article - Capital and Ideology by Thomas Piketty
    • Wikipedia article - Pantheon

    LeCun on AGI

    • "Won't Happen" - Time article - Meta’s AI Chief Yann LeCun on AGI, Open-Source, and AI Risk
    • "But if it does, it'll be my research agenda latent state models, which I happen to research" - Meta Platforms Blogpost - I-JEPA: The first AI model based on Yann LeCun’s vision for more human-like AI

    Other Sources

    • Stanford CS Senior Project - Timing Attacks on Prompt Caching in Language Model APIs
    • TechCrunch article - AI researcher François Chollet founds a new AI lab focused on AGI
    • White House Fact Sheet - Ensuring U.S. Security and Economic Strength in the Age of Artificial Intelligence
    • New York Post article - Bay Area lawyer drops Meta as client over CEO Mark Zuckerberg’s ‘toxic masculinity and Neo-Nazi madness’
    • OpenEdition Academic Review of Thomas Piketty
    • Neural Processing Letters Paper - A Survey of Encoding Techniques for Signal Processing in Spiking Neural Networks
    • BFI Working Paper - Do Financial Concerns Make Workers Less Productive?
    • No Mercy/No Malice article - How to Survive the Next Four Years by Scott Galloway

muckrAIkers, by Jacob Haimes and Igor Krawczuk