Saturday's Blue Lightning AI Daily dives into the big buzz: Google DeepMind has released the Gemma 4 family as true open-weight models, now under the Apache 2.0 license. Why is everyone so hyped about open weights? Because creators and developers finally get the freedom to build, ship, and sell products without worrying about surprise license changes or access being cut off overnight.

The Gemma lineup is powerful and flexible, spanning lightweight edge models that handle audio all the way up to long-context, multimodal models built for text, image, and video workflows. The edge models (Gemma-4-E2B, Gemma-4-E4B) even support local audio input for creators who want to keep their data on-device, while the Mixture-of-Experts and dense flagship versions bring huge context windows for maintaining a consistent "project brain" across big collaborative projects.

We also say farewell to GPT-4o, dig into what "truly open" means versus models that are merely open-ish, and make the case for tools you can keep running yourself instead of renting through an API. Plus: Netflix open-sources VOID for video object removal, PixVerse V6 levels up ad workflows, and new chat-import features in Google Gemini keep your AI history searchable.

The episode wraps with practical tips on getting started with Gemma 4, choosing a model size, and building for portability (so you never get too attached to any one API). Oh, and we spend a moment in chaos corner reviewing the Claude Code leak, the rise of draft-plus-critic AI workflows, and one wild dog-mRNA vaccine story. If you're a creator or builder, this episode is your guide to making the most of actual, for-real open models.
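The "build for portability" tip from the episode can be sketched as a thin adapter layer: keep your app code written against one small interface, so swapping a hosted API for a locally run open-weight model (or vice versa) is a one-class change rather than a rewrite. Everything below is illustrative, not any particular SDK; `TextBackend`, `EchoBackend`, and `summarize` are hypothetical names for the pattern.

```python
from abc import ABC, abstractmethod


class TextBackend(ABC):
    """Minimal interface the app depends on; one adapter per provider."""

    @abstractmethod
    def generate(self, prompt: str, max_tokens: int = 256) -> str:
        ...


class EchoBackend(TextBackend):
    """Stand-in backend for local testing. A real adapter would wrap a
    locally run open-weight model or a hosted API behind the same method."""

    def generate(self, prompt: str, max_tokens: int = 256) -> str:
        # Trivially echo the prompt, truncated to the token budget.
        return prompt[:max_tokens]


def summarize(backend: TextBackend, text: str) -> str:
    # App code only ever sees TextBackend, so the provider can change freely.
    return backend.generate(f"Summarize: {text}", max_tokens=64)


if __name__ == "__main__":
    print(summarize(EchoBackend(), "open weights mean no rented lock-in"))
```

The payoff is exactly the episode's point: if an API you rent changes its terms, you write one new adapter class and the rest of your product keeps running unchanged.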