Proxima.Earth — Geopolitical Podcast

Anthropic

On Friday, February 27, 2026, Defense Secretary Pete Hegseth designated Anthropic -- an American AI company whose model Claude was already integrated into classified Pentagon systems -- a "supply chain risk to national security." The designation, previously reserved for foreign adversarial companies like Kaspersky and Huawei, bars every military contractor in the United States from conducting commercial activity with Anthropic. The reason: Anthropic refused to drop two contractual guardrails prohibiting mass surveillance of Americans and fully autonomous weapons.

This episode traces the full arc of the confrontation -- from Claude's presence during the January 3 Maduro raid in Caracas, through the Palantir phone call that triggered the crisis, to seven weeks of escalation culminating in presidential threats of criminal prosecution and the unprecedented supply chain designation. It examines the legal instruments being repurposed (FASCSA, the Defense Production Act), the historical pattern connecting Qwest's Joseph Nacchio to AT&T's Room 641A to Apple's fight with the FBI to Anthropic's stand against the Pentagon, and the structural paradox at the center: on the same day the Pentagon blacklisted Anthropic for maintaining two safety restrictions, it accepted identical restrictions from OpenAI.

This is also the first episode produced using a four-model AI research pipeline in which one of the models is made by a company at the center of the story. The same research prompt was submitted to Gemini, Grok, ChatGPT Pro, and Claude. Their responses -- and the systematic behavioral differences between them -- are part of the narrative. Full process transparency is documented.

Ninety-seven sources. Ten-section research brief. Five-tier confidence taxonomy. Available at proxima.earth.

---

AI Transparency: This episode was produced using Claude Opus 4.6 (Anthropic) for primary synthesis and narrative, with parallel research from Google Gemini, xAI Grok, and OpenAI ChatGPT Pro. Human editorial direction at every stage. Claude is made by Anthropic, a principal actor in the events described. This conflict of interest is disclosed within the episode and mitigated through the four-model cross-checking methodology documented in the research brief. All named individuals and quoted material are drawn from verifiable public sources.

Story ID: TAR-2026-002
