In 2025, the AI race quietly split in two: one for building the smartest model, and another for getting everyone to use yours. Chinese labs chose the second race — and the data says they're winning. Dale and Nick break down how DeepSeek, Alibaba, and Kimi captured developers, startups, and soon entire education systems by being cheaper, open, and good enough. They examine why Airbnb ditched ChatGPT for Qwen, why 80% of startups pitching a16z are building on Chinese open-source models, and what this means for universities still teaching AI literacy through a single-tool lens. The conversation covers safety trade-offs, the equity problem of premium vs. free models, and why prompt engineering alone is already a relic.
Timestamps:
[00:00] — Nick sets the scene: DeepSeek's $6M model vs. OpenAI's $100M spend
[00:48] — The two AI races: building the best model vs. winning adoption
[02:33] — 80% of a16z-backed startups now building on Chinese models
[04:13] — Dale's experience bargaining with Kimi's onboarding for a $0.99 subscription
[05:08] — Percy Liang on why open-weight models drive faster adoption
[06:00] — Apple choosing Gemini for Siri: what it looks like when distribution beats benchmarks
[06:20] — OpenAI's precarious position: prediction markets give them 10% odds of having the top model by the end of 2026
[08:11] — China's national mandate: eight hours of AI education for every student, annually
[09:19] — Estonia's similar move with mandatory AI training for teachers
[10:57] — OpenAI's halfhearted pivot to open source with gpt-oss, and Meta retreating on Llama openness
[12:30] — Predatory pricing patterns from Uber to Netflix — and why institutions should pay attention
[14:22] — Beijing's chip exodus: ByteDance and Alibaba abandoning Nvidia for Huawei
[14:51] — Switzerland's sovereign AI model as a third path beyond the US-China binary
[16:32] — Ambient intelligence and the "good enough" vending machine that talks to you
[17:07] — AI safety scores: DeepSeek and Alibaba Cloud both scored D/D-minus on existential safety
[18:56] — Anthropic's Claude jailbroken for Chinese state-sponsored cyber espionage
[19:20] — The equity problem: do we shame cash-strapped institutions into premium licensing?
[20:55] — Dale's call for transparency: share failures and findings, don't hoard them
[22:24] — The classroom reality: students trained on ChatGPT will graduate into Chinese AI infrastructure
[23:22] — Dale's pitch for model comparison tools — seeing outputs side-by-side
[25:10] — Both hosts on using multiple models: Claude, Gemini, and the "council of experts" approach
[27:11] — Stop teaching tools, start building human judgment about AI infrastructure choices
[28:14] — Prompt engineering as table stakes: why AI fluency in 2026 means understanding infrastructure
🎙️ Adjunct Intelligence is the weekly briefing for higher-ed professionals who want AI as a cheat code—not a headache.
Every episode:
• Real tests of AI tools in education and professional workflows
• Fast, Monday-morning actions you can actually try
• Clear signal through the noise (no hype, no jargon)
👉 Subscribe on [YouTube] | [Apple Podcasts] | [Spotify]
👉 Share this with a colleague who still says “I’ll figure AI out later”
👉 Join the conversation on LinkedIn with #AdjunctIntelligence
Stay curious. Stay intelligent. Stay the human in the loop.