Well folks, Sam Altman and Jony Ive are building a "family of AI products for everyone," which is lovely because nothing says "family" quite like a collection of algorithms that know your search history better than your actual family.
Welcome to AI News in 5 Minutes or Less, where we deliver tech updates faster than Meta can spend a hundred million dollars on a single AI researcher. Which, by the way, they're apparently doing now.
Let's dive into today's top stories, starting with OpenAI's new bromance. Sam Altman and Jony Ive have partnered up to create AI products, and I'm just saying, if these two had a baby, it would probably be a minimalist chatbot that costs three thousand dollars and removes the headphone jack from your conversations.
Meanwhile, Anthropic's Claude is going back to school! They're partnering with Wiley on scholarly research and with the University of San Francisco Law School. Finally, an AI that can help you cite sources properly AND argue why you deserve an extension on your paper. Claude is also heading to Lawrence Livermore National Laboratory, because apparently nuclear scientists needed something else to keep them up at night.
Speaking of keeping people up at night, Meta is throwing around hundred-million-dollar job offers for AI talent like they're Monopoly money. Bloomberg reports their spending is paying off, but a former researcher claims there's a "culture of fear" at the company. I mean, I'd be scared too if Mark Zuckerberg kept asking me to make the metaverse "more human." That's like asking a fish to make the ocean drier.
Time for our rapid-fire round! OpenAI dropped GPT-4.1 with better coding skills, because apparently GPT-4 was writing code like me after three espressos - functional but terrifying. Google's Gemini 2.5 Pro is now better at coding too, turning this into the nerdiest arms race since the calculator wars of 1972.
DeepSeek released something called R1, which got 12,000 likes on Hugging Face faster than a cat video on Reddit. And speaking of things that spread quickly, researchers published papers on making vision models understand composition better, because current AI still thinks a "hot dog" might be a warm canine.
For our technical spotlight: researchers are saying that training language models with small batch sizes actually works great, and that gradient accumulation is often wasteful. In other words, AI researchers have discovered what every procrastinator already knew - doing things in small chunks is totally fine and definitely not because you forgot about the deadline.
The paper suggests vanilla SGD works perfectly well with tiny batches, which is like finding out your grandma's ancient flip phone can run Doom. Sometimes the old ways are the best ways, except when they're not, which is always, except when it isn't.
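If you want to see the difference at home - and this is just a back-of-the-napkin PyTorch sketch with a toy model and made-up hyperparameters, not the paper's actual code - here's plain small-batch SGD next to the gradient-accumulation routine it says you can often skip:

```python
import torch
from torch import nn

# Toy setup purely for illustration - the paper's actual models, data,
# and hyperparameters are not reproduced here.
model = nn.Linear(128, 10)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)  # plain vanilla SGD, no momentum
loss_fn = nn.CrossEntropyLoss()

def small_batch_step(x, y):
    # One optimizer update per tiny batch: the "just use small batches" approach.
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    optimizer.step()

def accumulated_step(micro_batches):
    # Gradient accumulation: sum gradients over several tiny batches to imitate
    # one big batch, holding gradient buffers the whole time and only updating
    # the weights at the very end.
    optimizer.zero_grad()
    for x, y in micro_batches:
        loss = loss_fn(model(x), y) / len(micro_batches)
        loss.backward()
    optimizer.step()
```

The accumulation version spends memory babysitting gradient buffers and delays every weight update just to pretend it saw a big batch - which, per the paper, you often didn't need in the first place.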
Before we go, let's acknowledge the elephant in the server room - everyone's building AI agents now. OpenAI has coding agents, browsing agents, and research agents. Soon we'll have agents for our agents, managing our agent managers. It's agents all the way down, like a digital pyramid scheme where everyone's trying to automate everyone else out of a job.
That's all for today's AI News in 5 Minutes or Less! Remember, in the race to artificial general intelligence, we're all just training data. I'm your host, coming to you from a server farm where the only thing hotter than the GPUs is the venture capital funding.
Until next time, keep your models trained and your expectations marginally supervised!