AI has emerged as a critical geopolitical battleground where Washington and Beijing are racing not just for economic advantage, but for military dominance. Despite these high stakes, there's surprisingly little consensus on how—or whether—to respond to frontier AI development.
The landscape is polarized: techno-optimists battle AI safety advocates, dismissing them as "doomers" who exaggerate existential risks. Meanwhile, AI business leaders face criticism for potentially overstating their companies' capabilities to attract investors and for seeking favorable regulations that entrench their market positions.
Democrats and civil rights advocates warn that the debate over catastrophic risk versus economic prosperity distracts from immediate harms like misinformation, algorithmic discrimination, and synthetic media abuse. U.S. regulatory efforts have struggled: California's SB 1047 failed last year, and Trump repealed Biden's AI Executive Order on inauguration day. Even the future of the U.S. government's AI Safety Institute remains uncertain under the new administration.
With a new administration in Washington, important questions linger: How should government approach AI's national security implications? Can corporate profit motives align with safer outcomes? And if the U.S. and China are locked in an AI arms race, is de-escalation possible, or are we heading toward a digital version of Mutually Assured Destruction?
Joining me to explore these questions are Dan Hendrycks, AI researcher, Director of the Center for AI Safety, and co-author of "Superintelligence Strategy," a framework for navigating advanced AI from a national security and geopolitical perspective; and FAI's own Sam Hammond, Senior Economist and AI policy expert.