
Sign up to save your podcasts
Or


Penetration testing is changing fast—but not always in the ways the hype suggests. James Wagenheim sits down with fiction author and pentest lead Alex Fox to unpack modern pentesting, real-world escalation paths, and what LLMs mean for both attackers and defenders. We also discuss how LLMs are a poor substitute for human creativity.
In this episode, James and Alex zoom in on what “penetration testing” really looks like in practice—from scoped engagements and vulnerability research (CVEs) to the messy human reality of misconfigurations, credentials, and internal privilege escalation. They discuss why attackers often choose the easiest path (and why that still works), then pivot into how generative AI changes the landscape: more volume, lower barriers, new failure modes—especially prompt injection when LLMs are connected to tools and workflows.
You’ll also hear a pragmatic walkthrough of common internal assessment patterns (including attack-path mapping in Active Directory) and a candid conversation about Alex’s parallel work as a fiction writer—traditional publishing, querying agents, and where tools like Claude/ChatGPT help versus where they dilute craft.
Topics include:
Pentesting scope, CVEs, and “what’s actually exploitable”
Prompt injection and why “more context” can increase risk
AD attack paths: mapping, escalation, and defensive hygiene
Writing and publishing in an era of LLMs
References:
“48 Hours Without AI” (NYT): https://www.nytimes.com/2025/10/28/style/48-hours-without-ai.html?smid=url-share — referenced as an example of cultural pushback / experimentation around AI.
Mindscape #336 (Sean Carroll): https://www.youtube.com/watch?v=S31zEgHVkoA — referenced in the AI fundamentals/history discussion.
CVE Program: https://www.cve.org/ — canonical vulnerability identifier referenced in pentest triage.
OWASP GenAI LLM01 Prompt Injection: https://genai.owasp.org/llmrisk/llm01-prompt-injection/ — a practical framing aligned with the episode’s prompt-injection segment.
BloodHound (SpecterOps): https://github.com/SpecterOps/BloodHound — referenced for AD attack-path mapping.
SharpHound CE docs: https://bloodhound.specterops.io/collect-data/ce-collection/sharphound — the official data-collection guidance tied to BloodHound.
MyChart: https://www.mychart.org/ — referenced as a real-world system where security posture matters.
Please like or subscribe if you enjoyed this episode!
By James WagenheimPenetration testing is changing fast—but not always in the ways the hype suggests. James Wagenheim sits down with fiction author and pentest lead Alex Fox to unpack modern pentesting, real-world escalation paths, and what LLMs mean for both attackers and defenders. We also discuss how LLMs are a poor substitute for human creativity.
In this episode, James and Alex zoom in on what “penetration testing” really looks like in practice—from scoped engagements and vulnerability research (CVEs) to the messy human reality of misconfigurations, credentials, and internal privilege escalation. They discuss why attackers often choose the easiest path (and why that still works), then pivot into how generative AI changes the landscape: more volume, lower barriers, new failure modes—especially prompt injection when LLMs are connected to tools and workflows.
You’ll also hear a pragmatic walkthrough of common internal assessment patterns (including attack-path mapping in Active Directory) and a candid conversation about Alex’s parallel work as a fiction writer—traditional publishing, querying agents, and where tools like Claude/ChatGPT help versus where they dilute craft.
Topics include:
Pentesting scope, CVEs, and “what’s actually exploitable”
Prompt injection and why “more context” can increase risk
AD attack paths: mapping, escalation, and defensive hygiene
Writing and publishing in an era of LLMs
References:
“48 Hours Without AI” (NYT): https://www.nytimes.com/2025/10/28/style/48-hours-without-ai.html?smid=url-share — referenced as an example of cultural pushback / experimentation around AI.
Mindscape #336 (Sean Carroll): https://www.youtube.com/watch?v=S31zEgHVkoA — referenced in the AI fundamentals/history discussion.
CVE Program: https://www.cve.org/ — canonical vulnerability identifier referenced in pentest triage.
OWASP GenAI LLM01 Prompt Injection: https://genai.owasp.org/llmrisk/llm01-prompt-injection/ — a practical framing aligned with the episode’s prompt-injection segment.
BloodHound (SpecterOps): https://github.com/SpecterOps/BloodHound — referenced for AD attack-path mapping.
SharpHound CE docs: https://bloodhound.specterops.io/collect-data/ce-collection/sharphound — the official data-collection guidance tied to BloodHound.
MyChart: https://www.mychart.org/ — referenced as a real-world system where security posture matters.
Please like or subscribe if you enjoyed this episode!