Is speed the new superpower for AI coders? In today’s episode, Hunter and Riley dive into OpenAI’s lightning-quick GPT-5.3-Codex-Spark release. The model, now available in the Codex app, CLI, and VS Code, streams over a thousand tokens per second, all but eliminating the dreaded waiting game for creators and developers. The duo explores why latency isn’t just a technical detail but an actual tax on creative momentum and productivity.

They break down how Spark fits into real workflows: rapid front-end iterations, “live demo” meetings, glue code for quick automation, and the endless design tweaks that keep creator businesses running. But is faster always better? The hosts debate Spark’s role as a speedy implementation buddy rather than a strategic reviewer, and highlight why a human in the loop, safety defaults, and transparent version control remain essential. They also compare Spark’s speed-first philosophy with Google’s Gemini 3.1 Pro, which prioritizes reliability and multi-step reasoning.

To round out the news, hear about Google’s new music-making model Lyria 3 in the Gemini app and its advances in audio watermarking. Whether you code a little or a lot, this episode unpacks how AI’s race for speed might change what, and how, you build next.