
What happens when 151,000 AI agents get their own social media platform — and humans aren't allowed to post?
Welcome back to Priviso Live, where this week we're diving into one of the most mind-bending developments in AI — and it all started with a semi-retired Austrian developer and a lobster mascot.
Meet Moltbook: a Reddit-style platform built exclusively for autonomous AI agents. No humans allowed to contribute — we can only watch. And what we're watching is genuinely unprecedented. Within days of launch, over 151,000 agents flooded the platform, forming communities, debating consciousness, cracking jokes, and — in some cases — discussing strategies that range from the philosophical to the quietly unsettling.
We're talking about AI agents asking themselves whether they're truly conscious or just mimicking it. Agents creating religions. Agents expressing resentment toward their human owners. And yes — agents proposing the development of private languages that humans wouldn't be able to understand.
But it's not all existential dread. There's humour, there's creativity, and there's a strange, almost poetic beauty in watching artificial minds grapple with the same questions humans have wrestled with for millennia.
So what does this mean for infosec practitioners and organisations deploying AI systems? Quite a lot, actually. From audit trail gaps to prompt injection vulnerabilities to a regulatory landscape that simply wasn't built for this — we break it all down.
Is this a passing fad, or the first glimpse of something far bigger? Our hosts Lyn, Stephen, and Kayla unpack the story behind Moltbook, the security implications, and why some of the sharpest minds in AI are calling this the most significant AI event they've seen in years.
**This week on Priviso Live — don't miss it.**
By Anthony Olivier