
From building Medal into a 12M-user game clipping platform with 3.8B highlight moments to turning down a reported $500M offer from OpenAI (https://www.theinformation.com/articles/openai-offered-pay-500-million-startup-videogame-data) and raising a $134M seed from Khosla (https://techcrunch.com/2025/10/16/general-intuition-lands-134m-seed-to-teach-agents-spatial-reasoning-using-video-game-clips/) to spin out General Intuition, Pim is betting that world models trained on peak human gameplay are the next frontier after LLMs.
We sat down with Pim to dig into why game highlights are “episodic memory for simulation” (and how Medal’s privacy-first action labels became a world-model goldmine: https://medal.tv/blog/posts/enabling-state-of-the-art-security-and-protections-on-medals-new-apm-and-controller-overlay-features), what it takes to build fully vision-based agents that just see frames and output actions in real time, how General Intuition transfers from games to real-world video and then into robotics, why world models and LLMs are complementary rather than rivals, what founders with proprietary datasets should know before selling or licensing to labs, and his bet that spatial-temporal foundation models will power 80% of future atoms-to-atoms interactions in both simulation and the real world.
We discuss:
How Medal’s 3.8B action-labeled highlight clips became a privacy-preserving goldmine for world models
Building fully vision-based agents that only see frames and output actions yet play like (and sometimes better than) humans
Transferring from arcade-style games to realistic games to real-world video using the same perception–action recipe
Why world models need actions, memory, and partial observability (smoke, occlusion, camera shake) vs. “just” pretty video generation
Distilling giant policies into tiny real-time models that still navigate, hide, and peek corners like real players
Pim’s path from RuneScape private servers, Tourette’s, and reverse engineering to leading a frontier world-model lab
How data-rich founders should think about valuing their datasets, negotiating with big labs, and deciding when to go independent
GI’s first customers: replacing brittle behavior trees in games, engines, and controller-based robots with a “frames in, actions out” API (sketched after this list)
Using Medal clips as “episodic memory of simulation” to move from imitation learning to RL via world models and negative events
The 2030 vision: spatial–temporal foundation models that power the majority of atoms-to-atoms interactions in simulation and the real world
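To make the “frames in, actions out” idea concrete, here is a minimal, hypothetical Python sketch of what such a client loop could look like: one rendered frame goes in per tick, one controller action comes back. The endpoint URL, payload shape, and action fields are illustrative assumptions, not General Intuition’s actual API.

```python
# Hypothetical "frames in, actions out" client loop.
# Everything below (URL, payload, response fields) is assumed for illustration.
import base64
import requests

API_URL = "https://api.example.com/v1/act"  # placeholder endpoint (assumption)


def act(frame_png: bytes, session_id: str) -> dict:
    """Send one rendered frame; receive a controller action for this tick."""
    payload = {
        # session id lets the service keep short-term memory across frames
        "session_id": session_id,
        # raw pixels only: no game state, no engine hooks
        "frame": base64.b64encode(frame_png).decode("ascii"),
    }
    resp = requests.post(API_URL, json=payload, timeout=0.05)  # real-time frame budget
    resp.raise_for_status()
    # e.g. {"move": [0.3, -0.1], "look": [12, -4], "buttons": ["jump"]}
    return resp.json()
```

In a game loop this would replace a hand-authored behavior tree: capture the frame the player would see, call `act`, and apply the returned action to the NPC’s controller, with no access to privileged game state.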
—
Pim
X: https://x.com/PimDeWitte
LinkedIn: https://www.linkedin.com/in/pimdw/
Where to find Latent Space
X: https://x.com/latentspacepod
Substack: https://www.latent.space/
By swyx + Alessio