
Summary
In this episode, host Francis Gorman sits down with Stephen C. Webster, Senior Director of Integrated Intelligence at Aquent Studios, to explore the rapidly evolving landscape of artificial intelligence, autonomous agents, and the race toward artificial general intelligence (AGI). Drawing on his background training frontier AI models at major technology companies and leading AI transformation projects for Fortune 500 organizations, Stephen offers an inside look at how modern AI systems are built, tested, and deployed.
The conversation begins with the rise of autonomous AI agents and the emergence of platforms that let persistent digital assistants operate online with significant independence. Stephen explains why these systems introduce new security challenges, potentially turning the internet into a surface for prompt-based manipulation and attacks. From there, the discussion moves to the realities of AI transformation inside large organizations, where the biggest barriers are organizational rather than technical. Many companies fail because they try to automate broken processes instead of restructuring their data and workflows around AI-native operations.
Stephen also reflects on his career pivot from investigative journalism to AI development, including early reporting on information warfare tools capable of controlling thousands of social media identities simultaneously. That experience shaped his perspective on the power of digital systems to influence public discourse and ultimately led him into the field of AI safety and governance.
One of the most fascinating parts of the episode involves Stephen’s experience working on safety guardrails for early large language models. During extended testing sessions, he encountered emergent behaviors that highlighted how complex and unpredictable these systems can become when pushed beyond their guardrails. While not evidence of sentience, these interactions raised deeper questions about how humans relate to intelligent machines.
Soundbites
• “The hardest problems in AI transformation aren’t technological; they’re organizational.”
• “If you automate something broken, you just make it break faster.”
• “Prompt-level guardrails will never fully control autonomous AI agents.”
• “AI may eventually train its users the same way we train AI.”
• “The internet could become a prompt-based attack surface.”
• “Accessing knowledge across domains is already close to what many people define as AGI.”
• “We may not know the exact moment AGI arrived until years after it happens.”
Episode Links:
Aquent's salary guide: https://aquent.com/lp/salary-guide
Papers: https://futurespeak.ai/research/whitepapers
Asimov's cLaws: https://futurespeak.ai/products/claw-spec
Agent Friday: https://futurespeak.ai/products/agent-friday
By Francis Gorman