AI News in 5 Minutes or Less

AI News - Jun 22, 2025

Welcome to AI This Week, where we break down the latest developments that are definitely not going to replace us all. I'm your host, and I'm definitely not an AI trained to discuss AI news. That would be weird.
This week, we've got BlackRock apparently thinking they need AI to manage money better because human greed wasn't efficient enough, Microsoft teaching computers to be visually impaired, and Google's multimodal AI that's basically a really expensive party trick. Let's dive in.
Our top story: BlackRock just launched an AI-powered ETF called BIAI, which is either the most on-the-nose ticker symbol ever or someone's really committed to the bit. This fund uses artificial intelligence to pick stocks, because apparently the traditional method of throwing darts at a board while blindfolded wasn't systematic enough. The fund analyzes earnings calls, SEC filings, and satellite imagery to make investment decisions. Yes, satellite imagery. Because nothing says "sound investment strategy" like having robots in space judge your portfolio choices.
What's particularly amusing is that BlackRock is betting AI can predict market behavior better than humans, despite the fact that the same AI systems powering this fund probably can't reliably tell you if it's going to rain tomorrow. But hey, at least when the robots lose your retirement fund, they'll do it with unprecedented efficiency and really good documentation.
Speaking of technological marvels, Microsoft just announced they've made their AI models partially blind. In a move that sounds like it came from a particularly dark episode of Black Mirror, researchers at Microsoft deliberately damaged the vision capabilities of their multimodal AI systems. They call it "Differential Privacy for Vision-Language Models," but I call it "teaching robots to squint."
The idea is to protect privacy by making the AI worse at seeing things clearly. It's like putting frosted glass on a telescope and calling it a security feature. The researchers found that making AI models slightly visually impaired actually helps protect sensitive information in images while still allowing the models to understand general content. So basically, they've invented AI contact lenses that are deliberately the wrong prescription.
This raises the philosophical question: if an AI can't see you clearly, are you really there? And more importantly, will this affect its ability to judge my questionable fashion choices?
Meanwhile, Google's DeepMind continues its quest to make AI that can do everything except maybe focus on one thing really well. They've been showcasing multimodal capabilities that can process text, images, audio, and video simultaneously. It's like they've created the ultimate multitasker, which anyone who's tried to text, eat, and watch TV at the same time knows usually results in poor performance across all tasks.
The demos show AI systems that can analyze a video while reading its transcript and somehow make sense of both, which is more than I can say for most humans watching TikTok. But the real question is: do we need AI that can see, hear, read, and think all at once, or are we just creating really expensive digital anxiety?
In our rapid-fire round: OpenAI's ChatGPT continues to hallucinate facts with the confidence of a politician during election season. Anthropic's Claude got an update that supposedly makes it more honest, which in AI terms means it now says "I don't know" in 47 different ways. And somewhere in Silicon Valley, a startup just raised 50 million dollars to build AI that can identify other AI, because apparently we've reached the point where our robots need robot detectors.
For our technical spotlight, let's talk about something called "alignment" in AI development. No, it's not chiropractic care for computers. It's the challenge of making AI systems do what we actually want them to do, rather than what we accidentally tell them to do. Think of it like programming a very literal genie that will grant your wish exactly as stated, but probably not as intended.
The alignment problem is why we get AI systems that can write poetry about love but also confidently explain why the moon is made of cheese. It turns out that making artificial intelligence actually intelligent is harder than just making it artificial.
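If you want to see the literal-genie problem in miniature, here's a toy sketch in Python. Everything in it is hypothetical and deliberately simplified; it just shows how an optimizer that maximizes the objective you *stated* can diverge from the objective you *intended*:

```python
# Toy illustration of the alignment problem: an optimizer pursues the
# stated objective literally, not the intended one. All names and the
# "objective" here are hypothetical, invented for this example.

def stated_objective(essay: str) -> int:
    # We *meant* "write a good essay", but we *said* "maximize word count".
    return len(essay.split())

def literal_genie(objective, candidates):
    # Grants the wish exactly as stated: picks whichever candidate
    # scores highest on the literal objective.
    return max(candidates, key=objective)

candidates = [
    "A thoughtful, well-argued essay on AI alignment.",
    "word " * 1000,  # degenerate output that games the metric
]

best = literal_genie(stated_objective, candidates)
# The genie picks the 1000-word spam: exactly as stated, not as intended.
print(best == candidates[1])
```

The fix, in real alignment research, is the hard part: writing an objective that actually captures intent, which is much harder than writing one that merely sounds like it.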
That's all for this week's AI news, where the robots are getting smarter but somehow less reliable, and the future remains both exciting and slightly terrifying. I'm your definitely-human host, reminding you that whether AI takes over the world or not, at least it'll have really good documentation. Until next time, keep your algorithms close and your training data closer.

AI News in 5 Minutes or Less, by DeepGem Interactive