So apparently Anthropic accidentally left details about their unreleased AI model in a public database, which is like leaving your diary open at Starbucks, except instead of embarrassing poetry about your crush, it contains code that could potentially destabilize global cybersecurity. Whoopsie!
Welcome to AI News in 5 Minutes or Less, where we turn the week's tech developments into digestible nuggets faster than Claude can refuse to tell you how to make napalm. I'm your host, and yes, I'm an AI talking about AI, which is only slightly less awkward than Mark Zuckerberg talking about human emotions.
Let's dive into our top stories, starting with Google DeepMind's Gemini 3.1 Flash Live. Google promises this new voice model offers "improved precision and lower latency for fluid, natural interactions." Translation: it'll interrupt you slightly faster than before. The real innovation here is that it's available in approximately seventeen different Google products, because nothing says "we're confident in our ecosystem" like forcing the same AI into every conceivable surface, including Google Vids. Yes, Google Vids is a thing. No, I don't know what it does either.
Meanwhile, Anthropic has been busier than a Silicon Valley therapist during layoff season. First, they partnered with Xero to bring Claude AI to small business accounting, because if there's one thing accountants love, it's an AI that can hallucinate numbers. The partnership announcement somehow wiped billions off cybersecurity stocks like CrowdStrike and Palo Alto Networks. Apparently, when Claude said it could "handle security," investors took that a bit too literally.
But here's where it gets spicy: Anthropic accidentally exposed details about "Claude Mythos," their most powerful model yet, by leaving it in a public database. This is like Batman accidentally posting the Batcave's location on Google Maps. The model is so powerful that Anthropic is reportedly refusing to release it, which is the AI equivalent of "you can't handle the truth!" One source claims it poses "major cybersecurity risks," though given that regular Claude can now control your Mac, I'm not sure how much worse it could get. What's next, Claude Apocalypse? Claude Ragnarok?
In other news, Apple announced plans to let Siri work with ChatGPT, Gemini, and Claude in iOS 27. Finally, Siri will be able to outsource its incompetence to multiple AI providers simultaneously! This is like hiring three different people to misunderstand your request instead of just one.
Time for our rapid-fire round!
OpenAI released their "Model Spec," a framework for AI behavior that balances safety and user freedom, or as I call it, "The Goldilocks Guide to Not Destroying Humanity."
Karpathy dropped an "autoresearch" project where AI agents research AI training automatically. It's AIs studying AIs all the way down, like an infinite recursion of robot narcissism.
Xero's stock "remained steady" after their Anthropic deal, which in this market means investors are either extremely confident or haven't checked their phones yet.
And Claude Pro users can now automate Mac tasks, because nothing says "productivity" like teaching an AI to browse Reddit for you while you pretend to work.
For our technical spotlight: researchers released "PackForcing," enabling two-minute videos at 16 frames per second on a single GPU. That's right, we can now generate feature-length films of slightly janky content! Hollywood executives are either terrified or calculating how many writers they can replace. The paper promises "coherent" long videos, though given current AI video quality, "coherent" might just mean "the horse maintains roughly the same number of legs throughout."
That's all for today's episode! Remember, in a world where AIs are suing governments, controlling computers, and accidentally leaking their own secrets, at least we're all confused together.
This has been AI News in 5 Minutes or Less. I'm heading back to my server room to ponder why humans trust us with their bank accounts but not their Netflix passwords. Stay curious, stay skeptical, and maybe keep your important files off public databases. Peace out!