
Most security leaders treat AI as a new threat category requiring new defenses. Rohit Parchuri, SVP and Chief Information Security Officer at Yext, pushes back hard on that framing. His argument: if your foundational controls are solid, AI does not require you to rebuild anything. What it does is amplify whatever you already have, gaps included, which makes the real question not "what new controls do we need?" but "how well are we actually executing on what we already built?"
Rohit walks host Ben Gibert through how Yext is operationalizing this at scale: threat-modeling AI as just another system with inputs, processing, and outputs; building AI security testing directly into the existing CI/CD pipeline rather than standing it up as a separate track; investing heavily in data classification and taxonomy to solve DLP before deploying any AI tool internally; and establishing an AI Excellence Committee with cross-functional representation to run a single governance funnel across every AI request in the company. He also makes the case that the CISO who earns a seat at the AI strategy table is the one who deeply understands the business value chain, not just the threat landscape.
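To make the "AI as just another system" framing concrete, here is a minimal sketch of that threat-modeling exercise. The input/processing/output stages come from the episode; every threat and control name below is our hypothetical illustration, not something Rohit prescribes:

# Hypothetical sketch (ours, not from the episode): threat-model an AI
# feature as an ordinary system. Enumerate inputs, processing, and outputs,
# then map each threat to the existing control expected to cover it.
# A threat with no mapped control is an execution gap, not a new category.

threat_model = {
    "inputs": {
        "prompt injection": ["input validation", "instruction/data separation"],
        "poisoned retrieval data": ["source allow-listing"],
    },
    "processing": {
        "inference API key theft": ["secrets management"],
        "over-privileged model access": ["least-privilege service accounts"],
    },
    "outputs": {
        "sensitive-data leakage": ["DLP scanning", "output filtering"],
        "unsafe tool execution": [],  # a gap the exercise is meant to surface
    },
}

for stage, threats in threat_model.items():
    for threat, controls in threats.items():
        if not controls:
            print(f"gap in {stage}: '{threat}' has no mapped control")

The payoff is the final loop: weaknesses surface as threats with no existing control mapped to them, which is the "amplified gap" problem rather than a new threat category.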
Topics discussed:
Threat-modeling AI as a system instead of a threat category
Why existing security controls are sufficient for AI today
Integrating AI security testing into CI/CD without adding process overhead
Data classification and taxonomy as prerequisites for safe internal AI adoption
Using an AI Bill of Materials as a transparency mechanism (illustrated in the sketch after this list)
How Yext's AI Excellence Committee runs a single governance funnel
Build vs. buy decision-making for AI security tooling
What separates strategic CISOs from tactical operators in the age of AI
The CISO's role in enabling AI adoption rather than blocking it
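The episode presents the AI Bill of Materials as a transparency mechanism but does not prescribe a format. As a rough illustration only, an entry might look like the following Python dataclass; every field name and value here is our hypothetical example:

from dataclasses import dataclass

# Illustrative only -- the episode doesn't prescribe a schema. An AI Bill of
# Materials plays the transparency role an SBOM plays for software: a
# per-feature inventory of the models and data a product depends on.

@dataclass
class AIBOMEntry:
    feature: str              # product capability the model powers
    model: str                # model name and version
    provider: str             # vendor, or "internal" for in-house models
    data_classification: str  # highest class of data the model can touch
    retention: str            # what the provider keeps, per contract
    approved_by: str          # governance sign-off and date

entry = AIBOMEntry(
    feature="support-ticket summarization",
    model="gpt-4o-2024-08-06",
    provider="OpenAI",
    data_classification="confidential",
    retention="zero data retention (API tier)",
    approved_by="AI Excellence Committee, 2025-01-15",
)
print(entry)

As with an SBOM, the idea is less the specific schema than the habit: every AI-powered feature gets an inventoried, reviewable record before it ships.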