Astral Codex Ten Podcast

"All Lawful Use": Much More Than You Wanted To Know



Last Friday, Secretary of War Pete Hegseth declared AI company Anthropic a "supply chain risk", the first time this designation has ever been applied to a US company. The trigger for the move was Anthropic's refusal to allow the Department of War to use their AIs for mass surveillance and autonomous weapons.

A few hours later, Hegseth and Sam Altman declared an agreement-in-principle for OpenAI's models to be used in the niche vacated by Anthropic. Altman stated that he had received guarantees that OpenAI's models wouldn't be used for mass surveillance or autonomous weapons either, but given Hegseth's unwillingness to concede these points with Anthropic, observers speculated that the safeguards in Altman's contract must be weaker or, in a worst-case scenario, completely toothless.

The debate centers on the Department of War's demand that AIs be permitted for "all lawful use". Anthropic worried that mass surveillance and autonomous weaponry would de facto fall into this category; Hegseth and Altman have tried to reassure the public that they won't, and the parts of their agreement that have leaked to the public cite the statutes that Altman expects to constrain this category. Altman's initial statement seemed to suggest additional prohibitions, but on a closer read, it provides little tangible evidence of meaningful further restrictions.

Some alert ACX readers1 have done a deep dive into national security law to try to untangle the situation. Their conclusion mirrors that of Anthropic and the majority of Twitter commenters: this is not enough. Current laws against domestic mass surveillance and autonomous weapons have wide loopholes in practice. Further, many of the rules which do exist can be changed by the Department of War at any time. Although OpenAI's national security lead said that "we intended [the phrase 'all lawful use'] to mean [according to the law] at the time the contract is signed", this is not how contract law usually works, and not how the provision is likely to be enforced2. Therefore, these guarantees are not helpful.

To learn more about the details, let's look at the law:

https://www.astralcodexten.com/p/all-lawful-use-much-more-than-you


Astral Codex Ten Podcast, by Jeremiah

4.8 (129 ratings)

