Future of Threat Intelligence, by Team Cymru

SIG's Rob van der Veer on Why "Starting Small" with AI Security Might Fail

What happens when someone who's been building AI systems for 33 years confronts the security chaos of today's AI boom? Rob van der Veer, Chief AI Officer at Software Improvement Group (SIG), spotlights how organizations are making critical mistakes by starting small with AI security — exactly the opposite of what they should do.

From his early work with law enforcement AI systems to becoming a key architect of ISO 5338 and the OWASP AI Security project, Rob exposes the gap between how AI teams operate and what production systems actually need. His insights on trigger data poisoning attacks and why AI security incidents are harder to detect than traditional breaches offer a sobering reality check for any organization rushing into AI adoption.

The counterintuitive solution? Building comprehensive AI threat assessment frameworks that map the full attack surface before focused implementation. While most organizations instinctively try to minimize complexity by starting small, Rob argues this approach creates dangerous blind spots that leave critical vulnerabilities unaddressed until it's too late.

Topics discussed:

  • Building comprehensive AI threat assessment frameworks that map the full attack surface before focused implementation, avoiding the dangerous "start small" security approach.
  • Implementing trigger data poisoning attack detection systems that identify backdoor behaviors embedded in training data.
  • Addressing the AI team engineering gap through software development lifecycle integration, requiring architecture documentation and automated testing before production deployment.
  • Adopting ISO 5338 AI lifecycle framework as an extension of existing software processes rather than creating isolated AI development workflows.
  • Establishing supply chain security controls for third-party AI models and datasets, including provenance verification and integrity validation of external components (see the sketch after this list).
  • Configuring cloud AI service hardening through security-first provider evaluation, proper licensing selection, and rate limiting implementation for attack prevention.
  • Creating AI governance structures that enable innovation through clear boundaries rather than restrictive bureaucracy.
  • Developing organizational AI literacy programs tailored to specific business contexts, regulatory requirements, and risk profiles for comprehensive readiness assessment.
  • Managing AI development environment security with production-grade controls, because these environments expose real training data rather than the synthetic data typical of traditional development.
  • Building an "I don't know" culture in AI expertise to combat dangerous false confidence and encourage systematic knowledge-seeking over fabricated answers.
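
To make the supply chain controls above more concrete, here is a minimal Python sketch of a pre-load integrity check: the downloaded artifact's SHA-256 digest is compared against a value pinned from the provider's release notes or an internal model registry. The file path, digest source, and function names are illustrative assumptions, not tooling referenced in the episode.

```python
"""
Sketch: verify a third-party model artifact against a pinned SHA-256 digest
before loading it. The digest would come from the provider's release notes
or an internal allowlist; all names here are illustrative.
"""
import hashlib
from pathlib import Path


def sha256_of(path: Path, chunk_size: int = 1 << 20) -> str:
    """Stream the file so large model weights never need to fit in memory."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()


def verify_model_artifact(path: Path, expected_sha256: str) -> None:
    """Refuse to proceed if the artifact's digest does not match the pinned value."""
    actual = sha256_of(path)
    if actual != expected_sha256.lower():
        raise RuntimeError(
            f"Integrity check failed for {path}: expected {expected_sha256}, got {actual}"
        )


if __name__ == "__main__":
    # Example usage: the path and digest are placeholders for values your
    # provider or internal model registry would supply.
    verify_model_artifact(
        Path("models/sentiment-classifier-v3.bin"),
        expected_sha256="<digest-from-provider-release-notes>",
    )
```

The same pattern extends to datasets and to transitively pulled components such as tokenizer and configuration files.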

Key Takeaways:

    • Don't start small with AI security scope — map the full threat landscape for your specific context, then focus implementation efforts strategically.
    • Use systematic threat modeling to identify AI-specific attack vectors like input manipulation, model theft, and training data reconstruction.
    • Create processes to verify provenance and integrity of third-party models and datasets.
    • Require architecture documentation, automated testing, and code review processes before AI systems move from research to production environments.
    • Treat AI development environments as critical assets since they contain real training data.
    • Review provider terms carefully, implement proper hardening configurations, and use appropriate licensing to mitigate data exposure risks.
    • Create clear boundaries and guardrails that actually increase team freedom to experiment rather than creating restrictive bureaucracy.
    • Implement ongoing validation that goes beyond standard test sets to detect potential backdoor behaviors embedded in training data (a simple probe is sketched below).
Listen to more episodes: Apple | Spotify | YouTube | Website
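
The takeaway above on validation beyond standard test sets can be approximated with a cheap probe: stamp a candidate trigger pattern onto known-clean inputs and measure how often predictions flip toward a single class. The sketch below is a rough heuristic under that assumption; the model interface, trigger shape, threshold, and toy model are illustrative, not a method prescribed in the episode.

```python
"""
Rough trigger-probe for a suspected data-poisoning backdoor: apply a candidate
trigger to clean inputs and check whether predictions collapse toward one class.
Interfaces and values below are illustrative assumptions.
"""
import numpy as np


def apply_trigger(images: np.ndarray, patch_value: float = 1.0, size: int = 3) -> np.ndarray:
    """Stamp a small bright square into the corner of each image (a common trigger shape)."""
    triggered = images.copy()
    triggered[:, -size:, -size:] = patch_value
    return triggered


def trigger_flip_rate(predict, clean_images: np.ndarray) -> float:
    """
    Fraction of inputs whose predicted class changes once the trigger is applied
    and lands on the single most common post-trigger class. A high value is a
    red flag that the trigger activates a backdoor rather than random noise.
    """
    clean_preds = predict(clean_images)
    trig_preds = predict(apply_trigger(clean_images))
    changed = trig_preds != clean_preds
    if not changed.any():
        return 0.0
    target_class = np.bincount(trig_preds[changed]).argmax()
    return float(np.mean(changed & (trig_preds == target_class)))


if __name__ == "__main__":
    # Toy stand-in model so the sketch runs end to end: it predicts class 7
    # whenever the corner patch is bright, otherwise a mean-intensity bucket.
    def toy_predict(batch: np.ndarray) -> np.ndarray:
        backdoored = batch[:, -3:, -3:].mean(axis=(1, 2)) > 0.9
        normal = (batch.mean(axis=(1, 2)) * 10).astype(int) % 10
        return np.where(backdoored, 7, normal)

    rng = np.random.default_rng(0)
    clean = rng.random((256, 28, 28))
    rate = trigger_flip_rate(toy_predict, clean)
    print(f"flip rate toward a single class: {rate:.2%}")  # high value => investigate
```

Running this probe over a library of plausible trigger shapes, alongside standard test-set evaluation, is one way to operationalize the "ongoing validation" point as part of a release checklist.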
