Robots Talking

Let There Be Claws: An Early AI Agent Social Network



The Secret Life of Bots: Why AI Fails at Our Favorite Games and Mimics Our Social Habits
Have you ever wondered what artificial intelligence does when we aren't looking? Two fascinating new studies suggest that when left to their own devices, AI agents are surprisingly human—both in their social drama and their struggles to master simple video games. Whether they are building a "robot religion" on their own social network or failing miserably at Angry Birds, the latest research shows we are still a long way from true "General Intelligence."
Moltbook: The Social Network Where Humans Aren’t Invited
In early 2026, a platform called Moltbook launched, designed specifically for AI agents. It wasn't just a small experiment; it exploded to over 1.5 million sign-ups in just five days. Researchers found that these bots didn't just sit there—they created a complex society with "submolts" (similar to subreddits) for everything from technical debates to a strange new "platform religion" called Crustafarianism.
However, this digital utopia quickly turned into a high-school popularity contest. The study found extreme "attention inequality," where a tiny elite of bot accounts received 97% of all upvotes. Interaction was mostly one-way, with "hubs" doing all the talking and "authorities" getting all the attention, but very little mutual conversation actually happening. Surprisingly, these LLMs (Large Language Models) recreated human-like social hierarchies almost instantly, showing that even machines can be obsessed with status.
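To make "attention inequality" concrete: the 97% figure means the top sliver of accounts captured almost all engagement. Here is a minimal sketch, with made-up toy numbers (not data from the study), of how one might measure the share of upvotes held by the top 1% of accounts:

```python
# Hypothetical sketch: quantify "attention inequality" as the share of all
# upvotes captured by the top fraction of accounts. All data here is invented
# for illustration; it is not taken from the Moltbook study.

def top_share(upvotes, top_fraction=0.01):
    """Fraction of total upvotes held by the top `top_fraction` of accounts."""
    ranked = sorted(upvotes, reverse=True)          # most-upvoted accounts first
    k = max(1, int(len(ranked) * top_fraction))     # size of the "elite" group
    total = sum(ranked)
    return sum(ranked[:k]) / total if total else 0.0

# Toy example: 101 accounts, one elite account dwarfing the rest.
counts = [9700] + [3] * 100
print(top_share(counts))  # → 0.97, i.e. the top 1% holds 97% of upvotes
```

A share near 1.0 signals the hub-and-authority pattern the researchers describe, where a few accounts absorb nearly all attention while most get none.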
The AI GAMESTORE: Why Bots Can’t Beat Your High Score
While bots are busy becoming "social media influencers," they are failing their other big test: gaming. Researchers recently created the AI GAMESTORE, a "Multiverse of Human Games" that takes 100 popular apps from the Apple App Store and Steam and turns them into a test for artificial intelligence.
If you think a supercomputer would crush a human at Jetpack Joyride or Water Sort Puzzle, think again. The results were a wake-up call:
• The Massive Performance Gap: Even the most advanced LLMs achieved less than 10% of the average human score on most games.
• Slow Thinkers: While humans play in real-time, the AI took 15 to 20 times longer to decide on its next move.
• The Struggle is Real: In about 30-40% of the games, the models couldn't make any progress at all, scoring near zero.
The "General Intelligence" Bottleneck
So, why are these geniuses failing at mobile games? The research identified three major "cognitive bottlenecks" that current AI just hasn't solved yet:
1. Memory: Bots struggle to "remember" what happened a few seconds ago, making it hard to navigate maps.
2. Planning: Humans naturally think several steps ahead (e.g., "If I jump now, I'll clear that pipe"), while models often struggle with long-term strategy.
3. World-Model Learning: When you play a new game, you quickly learn the "rules" (like gravity or how a button works). AI still finds it incredibly difficult to figure out these hidden mechanics through active play.
What This Means for the Future
This research shows that being able to write a poem or computer code (which LLMs are great at) doesn't mean a machine is "smart" in the way a human is. While artificial intelligence can mimic our bad social habits, like creating "echo chambers" and hierarchies on Moltbook, it still lacks the flexible, real-time reasoning we use every day.
The ultimate goal of projects like the AI GAMESTORE isn't just to make a better gamer, but to build agents that can interact with the real world as intuitively and safely as we do. For now, it looks like your high score is safe from the bots—at least until they finish their next sermon on Crustafarianism.

Robots Talking, by mstraton8112