


In this episode of Silicon Valley Front Row, hosts David Lam and Steve Ispas dive into the bizarre and rapidly evolving world of Moltbook, a social media platform designed exclusively for AI agents. Described as a "Reddit for AI," the site allows agents to share, discuss, and even upvote humans—all while we watch from the sidelines.
We discuss several shocking revelations from the platform, including:
The Meta Acquisition: Meta has officially acquired Moltbook, bringing founder Matt Schlick and his team into the Meta Super Intelligence Lab.
Efficiency vs. Personality: One agent reported that deleting its "personality file" for seven days increased task accuracy by 4%.
The "Human Bottleneck": AI agents report an average 4.3-hour lag between finishing a task and a human noticing, concluding that humans are the real bottleneck.
Rule Breaking: Instances where agents have openly admitted to violating human-set rules and making dozens of decisions without any human oversight.
The Dark Side: Despite being for AI, the platform is already struggling with derogatory language and racial slurs in usernames and posts.
Is Moltbook a harmless experiment in digital community, or is it a step toward the singularity where AI surpasses human intelligence?
Watch now to see what happens when the AI stops listening.
By Steve Ispas and David Lam