In this episode, Jason Haddix (CEO of Arcanum Information Security and creator of the Bug Hunter’s Methodology) joins us to examine how AI is changing penetration testing and security research. He explains that while AI agents can automate reconnaissance, code analysis, and parts of vulnerability discovery, meaningful results still depend on human expertise, methodology, and context engineering.
The conversation explores how AI is shifting the entry path for new security practitioners, why deep research and critical thinking remain essential skills, and how experienced testers are embedding their knowledge into agent workflows using tools like Claude Code. Jason also discusses practical experimentation with AI assistants such as OpenClaw, including prompt-injection defenses, guardrails, and the operational risks of running autonomous systems.
The episode also addresses the growing debate around AI-generated code and AI-driven vulnerability discovery, highlighting the difference between marketing claims and real-world results. It closes with a discussion on why the industry needs better benchmarks and evaluation methods to measure whether AI security tools actually find meaningful vulnerabilities.
By The Boring AppSec Podcast