Ben and Ryan Show Episode 17
In this episode, your hosts Ben Nadel and Ryan Brown are joined by Peter Amiri, who shares the gripping, real-world story of experiencing, and recovering from, a devastating ransomware attack. They dig into the technical and emotional toll of cyber extortion: how it happened, what went wrong, and the long road back to normal operations, along with the lessons learned on the way.
Key Points
• Peter recounts a ransomware attack that began with a zero-day exploit in their firewall and resulted in full encryption of their Windows-based infrastructure.
• Despite following industry best practices like 3-2-1 backups, cyber insurance, and EDR tools, the breach occurred due to overlooked alerts and underestimated risks.
• The team responded by cutting internet access, engaging incident response vendors, and rebuilding their environment from scratch in parallel with negotiating a ransom.
• Lessons include the critical need for MDR services, verified air-gapped backups, centralized SSO/MFA security, and proactive disaster recovery planning.
• Ultimately, the company recovered within 8 days, avoiding catastrophic data loss by leveraging unencrypted legacy copies and strategic capacity planning.
Peter discusses the company's security posture before the attack
• Cyber insurance renewals demanded progressively stronger security controls, first MFA and then EDR, which required a major financial investment.
• EDR picked up early malicious activity, but alerts were missed due to noise and lack of security specialization.
• The team placed too much confidence in their lesser-used virtualization tools, which created a false sense of security.
• Alarm fatigue contributed to overlooking early breach signals.
The breach was triggered by a zero-day exploit in their firewall
• The attackers deleted VM snapshots rapidly, prompting the team to disconnect internet access immediately.
• Analysis revealed the attackers were inside for 3-4 months.
• Compromised admin accounts were used to delete offsite backups.
• A seven-day-old backup and a three-month-old ERP copy survived due to architectural luck.
Peter explains how they recovered
• A three-pronged recovery strategy ran in parallel: rebuilding the environment from scratch, preserving the encrypted originals, and attempting decryption on cloned copies.
• FBI approval was needed before paying the ransom; the demanded amount was surprisingly low, which was flagged as a possible re-engagement tactic.
• A test decryption of non-critical files confirmed the attackers held a working key (a small validation sketch follows this list).
• The decryption key ultimately worked, restoring full operations just before their parallel recovery would have been completed.
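For listeners unfamiliar with how a "proof of decryption" exchange is usually sanity-checked, below is a minimal Python sketch: hash the returned samples and compare them against surviving clean copies. None of this is from the episode; the directory names and file layout are assumptions purely for illustration.

```python
"""Sketch: validate a ransomware 'proof of decryption' sample (illustrative only).

Assumed layout (not from the episode): known_good/ holds pre-encryption copies
of a few non-critical files (e.g. pulled from an old backup), and returned/
holds the samples the attackers decrypted and sent back.
"""

import hashlib
from pathlib import Path

KNOWN_GOOD = Path("known_good")   # assumed: clean copies of the sacrificial files
RETURNED = Path("returned")       # assumed: attacker-decrypted samples

def sha256(path: Path) -> str:
    """Hash a whole file; fine here because the samples are small."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

def validate_samples() -> bool:
    all_match = True
    for original in sorted(KNOWN_GOOD.iterdir()):
        candidate = RETURNED / original.name
        if not candidate.exists():
            print(f"MISSING:  {original.name} was never returned")
            all_match = False
        elif sha256(candidate) != sha256(original):
            print(f"MISMATCH: {original.name} differs from the clean copy")
            all_match = False
        else:
            print(f"OK:       {original.name}")
    return all_match

if __name__ == "__main__":
    print("samples verified" if validate_samples() else "do not trust the key yet")
```

A matching sample only proves the key works on those files; it says nothing about what was exfiltrated, which is why the rebuild continued in parallel.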
Post-breach, Peter outlines the enhancements made to secure their environment and improve detection and response
• A secondary DR site was implemented with air-gapped nightly snapshots (a minimal verification sketch follows this list).
• All systems moved to enterprise SSO with MFA and session timeouts.
• A managed MDR provider was retained to monitor and escalate potential threats in real time.
• Simulated phishing campaigns and employee security training were introduced to combat social engineering.
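On the "verified" part of verified, air-gapped backups: the sketch below shows one hypothetical way to confirm each morning that last night's snapshot actually landed on the offline copy and still matches the checksum recorded when it was written. The mount point, naming convention, and manifest format are assumptions, not details from the episode.

```python
"""Minimal nightly snapshot verification job (illustrative sketch only).

Assumptions (not from the episode): snapshots are copied to an offline mount at
SNAPSHOT_DIR, each with a sibling .sha256 manifest written at backup time in
sha256sum format ("<hex digest>  <filename>").
"""

import hashlib
import sys
import time
from pathlib import Path

SNAPSHOT_DIR = Path("/mnt/airgap/snapshots")   # assumed mount point
MAX_AGE_HOURS = 26                             # nightly job plus some slack

def sha256_of(path: Path) -> str:
    """Stream the file through SHA-256 so large snapshots fit in memory."""
    digest = hashlib.sha256()
    with path.open("rb") as fh:
        for chunk in iter(lambda: fh.read(1024 * 1024), b""):
            digest.update(chunk)
    return digest.hexdigest()

def main() -> int:
    snapshots = sorted(SNAPSHOT_DIR.glob("*.img"), key=lambda p: p.stat().st_mtime)
    if not snapshots:
        print("FAIL: no snapshots found on the air-gapped copy")
        return 1

    newest = snapshots[-1]
    age_hours = (time.time() - newest.stat().st_mtime) / 3600
    if age_hours > MAX_AGE_HOURS:
        print(f"FAIL: newest snapshot {newest.name} is {age_hours:.1f}h old")
        return 1

    recorded = newest.with_suffix(".sha256").read_text().split()[0]
    if sha256_of(newest) != recorded:
        print(f"FAIL: checksum mismatch for {newest.name}")
        return 1

    print(f"OK: {newest.name} verified ({age_hours:.1f}h old)")
    return 0

if __name__ == "__main__":
    sys.exit(main())
```

Most backup products can run equivalent checks themselves; the point of the sketch is that an air-gapped copy only counts once something routinely proves it is fresh and intact, which is exactly the gap the deleted offsite backups exposed.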
The conversation shifts to the importance of recovery readiness over prevention
• Emphasis is placed on keeping spare infrastructure capacity (headroom) to recover into, especially in on-prem environments.
• Choosing between "verify first" and "block first" response policies is discussed, weighing system criticality against detector false positive rates (a toy decision helper appears after this list).
• Real examples of phishing, supplier impersonation, and invoice fraud highlight human vulnerabilities in security.
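To make the "verify first" versus "block first" trade-off concrete, here is a small, hypothetical decision helper. None of it comes from the episode; the thresholds, the criticality scale, and the field names are invented solely to show how system criticality and detector false positive rates could be weighed against each other.

```python
from dataclasses import dataclass
from enum import Enum

class Action(Enum):
    BLOCK_FIRST = "isolate now, investigate after"
    VERIFY_FIRST = "investigate first, then isolate"

@dataclass
class DetectionContext:
    system_criticality: int        # hypothetical scale: 1 (lab box) .. 5 (revenue-critical ERP)
    false_positive_rate: float     # observed FP rate for this detection rule, 0.0 .. 1.0
    alert_confidence: float        # detector-reported confidence, 0.0 .. 1.0

def choose_response(ctx: DetectionContext) -> Action:
    """Toy policy: bias toward blocking, except when a noisy rule fires on a
    system where a false block would be very expensive."""
    if ctx.alert_confidence >= 0.9:
        return Action.BLOCK_FIRST      # high-confidence hits get contained immediately
    if ctx.system_criticality >= 4 and ctx.false_positive_rate > 0.2:
        return Action.VERIFY_FIRST     # noisy rule + critical system: a human looks first
    return Action.BLOCK_FIRST          # default: containment is cheaper than a rebuild later

# Example: a medium-confidence alert from a noisy rule on the ERP server
print(choose_response(DetectionContext(system_criticality=5,
                                       false_positive_rate=0.35,
                                       alert_confidence=0.6)))
# -> Action.VERIFY_FIRST
```

However the thresholds are tuned, writing the policy down explicitly is what keeps a noisy rule from being silently ignored, which is the alarm-fatigue failure described earlier in the episode.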
Helpful Links
Arete (company that helped Peter and provides Managed Detection and Response services)
https://areteir.com/
SentinelOne (endpoint detection and response platform)
https://www.sentinelone.com/