Farzan Karimi, Deputy CISO at Moderna, joins Amir Bormand for a sharp conversation on one of the most misunderstood areas in cybersecurity: the ethics of offensive security. From red team rules of engagement to nation-state deception and the limits of AI in security testing, this episode gets into what happens when the job requires you to think like an attacker without crossing the line.
This is a practical conversation for security leaders, engineers, and operators who want a clearer view into how modern security programs actually work under pressure. Farzan shares hard lessons from his own career, explains why red teaming is really about business risk, and makes the case for storytelling over dashboards when security teams need executive buy-in.
Key Takeaways
• Offensive security is not about finding every weakness. It is about simulating what a real attacker would do to reach the business’s worst-case scenario.
• The gray area is real. Just because you are authorized to test a system does not mean every possible action is justified.
• Nation-state-level threats force teams to think differently. Attackers look across the connective tissue of systems, not just isolated tools or apps.
• Good red teaming can make the rest of the business stronger by helping teams see real risk, align on priorities, and justify investment.
• AI can speed up security work, but it still misses too much to replace experienced human operators.
Timestamped Highlights
02:02 What offensive security actually means, and why the best programs are built around business impact, not just technical findings.
03:46 Where the ethical gray area starts, from phishing and social engineering to the personal judgment calls that can end careers.
06:03 A story from Farzan’s Microsoft days that shows how a valid finding can still go too far when judgment slips.
11:06 Why security leaders have to explain to executives that attackers do not care about internal process, approvals, or red tape.
14:46 How a nation-state honeypot turned the red team into the target and forced a complete shift in approach.
24:14 AI is changing the workflow, but Farzan explains why current tools still fall short of real red team depth.
A line worth remembering
“Just because you can doesn’t mean you should abuse those permissions.”
Pro Tips
• Tie offensive security work to the business’s real doomsday scenario, not a generic list of vulnerabilities.
• When you find a serious issue, know exactly where the rules of engagement stop, and stop there.
• Use attack stories and patterns to earn trust internally. Raw metrics rarely move people the same way.
• Treat AI as an accelerator, not a replacement for experienced security judgment.
Listen and follow
If this episode gave you a better lens on how modern security teams think, subscribe to The Tech Trek, follow the show, and share this episode with someone building, securing, or scaling technology in the real world.
By Elevano