TechSurge: Deep Tech Podcast

Governing AI Before It Outpaces Us: Safety for Critical Infrastructure


As generative AI systems move from novelty to infrastructure, questions of safety, trust, and governance are becoming urgent. In this episode of TechSurge, host Sriram Viswanathan is joined by Dr. Rumman Chowdhury, CEO of Humane Intelligence PBC and responsible AI pioneer, to discuss what AI safety really means and why the industry may be focusing on the wrong problems.

Rumman argues that the most overlooked lever in AI development is evaluation. While companies emphasize model training and capabilities, far less attention is paid to how systems are assessed in real-world contexts, who defines “good,” what risks are measured, and how societal impacts are accounted for at scale. She distinguishes between technical assurance and broader sociotechnical risk, from misinformation and bias to over-reliance and erosion of institutional trust.

Drawing on her experience at Twitter (X) and in global policy circles, Rumman highlights a fundamental governance gap: unlike finance, aviation, or healthcare, AI lacks a mature, independent ecosystem of auditors and evaluators. Today, the same companies building AI systems often define what counts as harm. She also challenges the belief that stronger guardrails alone will solve the problem, noting that cultural context, language differences, and human behavior complicate any notion of “neutral” or fully objective AI.

Rather than focusing solely on speculative existential threats, Rumman urges attention to the harms already visible, from AI-enabled misinformation to mental health risks and shifts in how younger generations relate to knowledge and authority. The future of AI, she suggests, will be determined not just by technological breakthroughs, but by whether we build credible systems of accountability, evaluation, and global cooperation around them.

If you enjoy this episode, please subscribe and leave us a review on your favorite podcast platform.

Sign up for our newsletter at techsurgepodcast.com for updates on upcoming TechSurge Live Summits and future Season 2 episodes.


Episode Links

  • Connect with Rumman: https://www.linkedin.com/in/rumman
  • Learn more about Humane Intelligence: https://humane-intelligence.org/

Timestamps

  • 02:50 Why AI Evaluations Matter: Defining “Good” Models in Context
  • 04:25 What Is AI Safety? From Product Performance to Societal Harm
  • 11:30 Regulation Reality Check: EU AI Act, Conformance Assessments & Checklists
  • 15:25 Building the AI Evaluation Profession: Audits, Red Teaming & Legal Protections
  • 23:00 When It’s OK to Outsource Judgment and When It’s Dangerous
  • 39:38 Who’s Responsible When AI Outcomes Go Wrong?
  • 44:11 AI Psychosis, Youth Harm, and What’s Already Here
  • 47:27 What Keeps Rumman Up at Night: Kids, Algorithms, and Hope from Global Governance
  • 52:37 Design vs Governance: Complex Systems, System-Level Evaluation, and Regulating Horizontally
  • 54:00 Bringing Sci-Fi to the Real World?

TechSurge: Deep Tech Podcast, by Celesta Capital | Deep Tech Venture Capital Firm

4.8 (24 ratings)