
The US government asked Anthropic — the company behind Claude, one of the most capable AI coding systems on the market — to help build autonomous weapons and a mass surveillance infrastructure. Anthropic said no.
That refusal, which happened the same week the US launched strikes on Iran, is either the most principled corporate decision in recent AI history or the beginning of a very ugly fight over who controls the most powerful tools ever built.
Jeremy and Jason break down what the government actually asked for, why Anthropic refused, what OpenAI and Elon Musk did instead, and what it means for all of us when the people writing the guardrails are the same people being pressured to remove them.
Topics Discussed:
Chapters
MORE FROM BROBOTS:
Get the Newsletter!
By Jeremy Grater, Jason Haworth
