Future of Data Security

EP 26 — Handshake's Rupa Parameswaran on Mapping Happy Paths to Catch AI Data Leakage



Rupa Parameswaran, VP of Security & IT at Handshake, tackles AI security by starting with mapping happy paths: document every legitimate route for accessing, adding, moving, and removing your crown jewels, then flag anything that falls outside those paths. When a tool like ChatGPT inadvertently gets connected to an entire workspace instead of an individual account (scope creep she has witnessed firsthand), those baselines become your detection layer. She also suggests building lightweight apps that crawl vendor sites for consent and control changes, since nobody actually reads those policy update emails.
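To make the happy-path idea concrete, here is a minimal sketch of baselining and flagging; the event fields, roles, and paths are hypothetical illustrations, not Handshake's actual schema or tooling:

```python
# Minimal sketch of happy-path baselining (all names are hypothetical):
# enumerate the legitimate routes to crown-jewel data, then flag any event
# that falls outside them for review.
from dataclasses import dataclass

@dataclass(frozen=True)
class HappyPath:
    actor_role: str    # who is allowed to touch the data
    action: str        # access / add / move / remove
    destination: str   # where the data is allowed to end up

# Documented happy paths for one crown-jewel dataset.
HAPPY_PATHS = {
    HappyPath("data-eng", "access", "warehouse"),
    HappyPath("data-eng", "move", "warehouse"),
    HappyPath("support", "access", "crm"),
}

def flag_anomalies(events):
    """Return events that fall outside the documented happy paths."""
    return [
        e for e in events
        if HappyPath(e["actor_role"], e["action"], e["destination"]) not in HAPPY_PATHS
    ]

# An AI connector exporting crown-jewel data to a vendor workspace is not on
# any happy path, so it gets flagged.
events = [
    {"actor_role": "data-eng", "action": "access", "destination": "warehouse"},
    {"actor_role": "ai-connector", "action": "move", "destination": "vendor-workspace"},
]
print(flag_anomalies(events))
```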

 

Rupa also reflects on the data labeling bottlenecks that block AI adoption at scale. Most organizations can't safely connect AI tools to Google Drive or OneDrive because they lack visibility into what sensitive data exists across their corpus. Regulated industries handle this better, not because they're more sophisticated, but because compliance requirements force the discovery work. For organizations hitting this wall, she recommends self-hosted solutions contained within a single cloud provider rather than reverting to bare metal infrastructure. The underlying shift is treating security as quality engineering, making just-in-time access and audit trails the default path rather than an impediment to velocity.
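As a rough illustration of just-in-time access and audit trails as the default path, here is a hedged sketch; the function names and in-memory storage are placeholders, and a real implementation would sit on top of your identity provider and cloud IAM:

```python
# Minimal sketch of just-in-time access with an append-only audit trail.
# In-memory storage is for illustration only; real systems would delegate
# grants to the IdP / cloud IAM and persist the audit log.
import time

AUDIT_LOG = []        # append-only record of who got what, why, and for how long
ACTIVE_GRANTS = {}    # (user, resource) -> expiry timestamp

def request_access(user, resource, justification, ttl_seconds=3600):
    """Grant short-lived access and record the who, what, why, and when."""
    now = time.time()
    expires_at = now + ttl_seconds
    ACTIVE_GRANTS[(user, resource)] = expires_at
    AUDIT_LOG.append({
        "user": user,
        "resource": resource,
        "justification": justification,
        "granted_at": now,
        "expires_at": expires_at,
    })
    return expires_at

def has_access(user, resource):
    """Access is only valid while an unexpired grant exists."""
    expiry = ACTIVE_GRANTS.get((user, resource))
    return expiry is not None and time.time() < expiry

request_access("alice", "customer-db", "debugging a support escalation")
print(has_access("alice", "customer-db"))   # True until the grant expires
```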



Topics discussed:

 

  • Mapping happy paths for accessing, adding, moving, and removing crown jewels to establish baselines for anomaly detection systems
  • Building lightweight applications that crawl vendor websites to automatically detect consent and control changes in third-party tools (see the sketch after this list)
  • Understanding why data labeling and discovery across unstructured document stores block AI adoption beyond pilot-stage deployments
  • Implementing just-in-time access controls and audit trails as default engineering paths rather than friction points for development velocity
  • Evaluating self-hosted AI solutions within single cloud providers versus bare metal infrastructure for containing data exposure risks
  • Preventing inadvertent workspace-wide AI integrations when individual account connections get accidentally expanded in scope during rollouts
  • Treating security as a pillar of quality engineering to make secure options easier than insecure alternatives for teams
  • Addressing authenticity and provenance challenges in AI-curated data, where validating truthfulness is currently close to impossible
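As a rough illustration of the lightweight vendor-crawling idea, here is a minimal sketch that polls a policy page, hashes it, and raises a flag when the content changes; the URL, state file, and alerting hook are placeholders rather than real endpoints:

```python
# Minimal sketch of a policy-change crawler: fetch a vendor's terms or
# data-processing page on a schedule, hash the text, and flag any change
# so someone actually reviews consent and control settings.
import hashlib
import json
import urllib.request
from pathlib import Path

STATE_FILE = Path("vendor_policy_hashes.json")   # placeholder state store

def fetch_policy(url):
    with urllib.request.urlopen(url, timeout=30) as resp:
        return resp.read()

def policy_changed(vendor, url):
    """Return True if the vendor's policy page changed since the last run."""
    state = json.loads(STATE_FILE.read_text()) if STATE_FILE.exists() else {}
    digest = hashlib.sha256(fetch_policy(url)).hexdigest()
    changed = state.get(vendor) not in (None, digest)
    state[vendor] = digest
    STATE_FILE.write_text(json.dumps(state))
    return changed

# Placeholder vendor and URL; point this at the real data-processing terms.
if policy_changed("example-vendor", "https://example.com/data-processing-terms"):
    print("Policy changed: review consent and control settings for example-vendor")
```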

Future of Data Security, by Qohash