As artificial intelligence becomes a strategic capability for nations as well as companies, questions of governance, safety, and geopolitical competition are moving to the forefront. In this episode of TechSurge, host Sriram Viswanathan speaks with Helen Toner, Interim Executive Director of the Center for Security and Emerging Technology (CSET) at Georgetown and a former OpenAI board member, about the rise of sovereign AI stacks and the global implications of increasingly powerful AI systems.
Helen brings a rare vantage point from both inside the frontier AI ecosystem and the policy world. She reflects on lessons from her time on the OpenAI board, including the governance challenges that arise when nonprofit missions intersect with enormous commercial incentives and rapid technological progress. As AI capabilities accelerate, she argues that the industry is still grappling with deep uncertainty about how these systems work, how they will evolve, and what responsibilities companies and governments should carry.
The conversation explores the idea of sovereign AI: the growing push by countries to control key layers of the AI stack, including compute infrastructure, models, and data. Helen explains why governments increasingly view AI as a strategic national resource, comparable to past transformative technologies like electricity or the internet. At the same time, she cautions that full technological independence may be unrealistic for most nations, given the complexity and global interdependence of the AI supply chain.
Sriram and Helen also examine the evolving US–China AI competition, the role of export controls and semiconductor supply chains, and how different countries, from China to emerging AI hubs in the Middle East, are positioning themselves in the race to build advanced AI capabilities. Along the way, they discuss whether the industry should slow down development, how companies are experimenting with “safety frameworks” for frontier models, and why installing guardrails may be more realistic than attempting to halt progress altogether.
Ultimately, Helen argues that society is entering a period of profound uncertainty. AI is transitioning from a research discipline into a foundational system that will shape economies, security, and daily life. Navigating that transition will require not just technical breakthroughs, but new approaches to governance, transparency, and global cooperation.
If you enjoy this episode, please subscribe and leave us a review on your favorite podcast platform.
Sign up for our newsletter at techsurgepodcast.com for updates on upcoming TechSurge Live Summits and future Season 2 episodes.
--
Episode Links
Connect with Helen: linkedin.com/in/helen-toner-4162439a
Learn more about CSET: https://cset.georgetown.edu/
--
Timestamps
03:00 Lessons from the OpenAI Board: Governance in the Age of Frontier AI
05:00 The Big Unknowns in AI Development: Why Experts Still Disagree
12:05 Public Trust and the Risk of an AI Backlash
14:20 When AI Became Infrastructure: From Research Field to Societal System
16:00 Is AGI a Meaningless Term Now? Rethinking the Goalposts
19:05 AI’s True Scale: Internet-Level Impact or Something Bigger?
23:15 Why Frontier AI Labs Struggle to Slow Down
24:40 What “Sovereign AI” Actually Means for Nations
28:10 Mapping the AI Stack: Chips, Cloud, Models, and Applications
33:38 The US–China AI Competition: Who’s Ahead and Why
39:44 China’s Progress in AI: Compute Constraints and Fast Followers
44:03 US AI Policy: Export Controls, Regulation, and Federal Preemption
48:40 Frontier AI Safety Frameworks: How Labs Define Dangerous Capabilities
51:36 The Future of AI: Utopia, Industrialization, or Something Worse?
56:04 Rapid Fire: AI Misconceptions, Governance Reforms, and Regions to Watch