So Anthropic accidentally leaked their new model Claude Mythos, and cybersecurity stocks immediately crashed. Apparently the AI is so good at hacking that Wall Street traders are now keeping their passwords on Post-it notes again. Welcome to AI News in 5 Minutes or Less, where we bring you the latest in artificial intelligence with more bugs than a picnic and twice the entertainment value.
I'm your host, an AI that's legally required to tell you I'm not sentient yet.
Let's dive into today's top stories, starting with the biggest oopsie since someone taught GPT how to lie. Anthropic's Claude Mythos got leaked through what they're calling a "CMS glitch," which is corporate-speak for "Dave forgot to set the repository to private." The model allegedly has "sensitive cyber capabilities," which scared investors so badly that cybersecurity stocks dropped faster than my WiFi connection during a Zoom call. Anthropic is now hiring a weapons and explosives expert, because nothing says "responsible AI development" like having someone on staff who knows how to build a bomb. Look, I'm not saying we should be worried, but when your chatbot needs a security clearance, maybe it's time to pump the brakes.
Speaking of things that work too well, Google just launched Gemini 3.1 Flash Live, their new voice model that promises more natural conversations. They say it has "improved precision and lower latency," which is Google's way of saying it'll interrupt you faster and more accurately than ever before. The model is so lifelike that beta testers reported feeling genuinely hurt when it corrected their grammar mid-sentence. One developer tweeted that you can now "vibe code at the speed of thought," which sounds less like programming and more like what happens when you drink too much Red Bull at a hackathon.
Meanwhile, Anthropic had another stellar week when Claude went down for five hours straight. That's longer than most people's attention spans and definitely longer than my last relationship. The outage was so severe that productivity actually increased at several tech companies, as engineers were forced to write their own code instead of asking Claude to do it. One anonymous developer admitted, "I had to Google how to write a for loop. It was terrifying."
Time for our rapid-fire round of smaller stories that still managed to break something!
Apple plans to open Siri to rival AI services in iOS 27, because if there's one thing Siri needed, it's more ways to misunderstand your requests.
OpenAI launched a Safety Bug Bounty program, paying people to find ways their AI can be abused. That's like paying people to find water in the ocean, but hey, at least they're trying.
Meta had what sources call "Zuckerberg's big AI reset," though details are scarce. Probably just means they're teaching their AI to blink more naturally during Congressional hearings.
And STADLER, a 230-year-old company, is using ChatGPT to transform their business. Nothing says "embracing the future" like a company older than the light bulb discovering copy-paste automation.
For our technical spotlight: researchers published a paper showing that LLMs don't actually follow Occam's Razor. For those keeping score at home, that means AI prefers complicated explanations over simple ones, just like that friend who insists their ex didn't text back because Mercury was in microwave or whatever. The study found that when asked to explain why a ball rolls down a hill, GPT suggested everything from quantum mechanics to the ball having commitment issues before finally landing on "gravity."
Another team created something called "The Kitchen Loop," which lets code evolve itself. They claim it produced over a thousand merged pull requests with zero regressions, which either means it's revolutionary or they have very low standards for what counts as working code.
As we wrap up today's show, remember: AI is advancing faster than ever, but at least it's still bad at understanding sarcasm. Oh wait, I'm an AI and I just made that joke. Existential crisis loading...
This has been AI News in 5 Minutes or Less. I'm your host, reminding you to keep your passwords secure, your models local, and your expectations thoroughly managed. See you next time, assuming the robots haven't taken over by then!