
OpenAI — a name once derided as an oxymoron, given the company's closed-source practices — recently released GPT-OSS, its first open language model in half a decade. The model fulfills an earlier pledge to again release “strong” open models that developers can freely modify and deploy. OpenAI approved GPT-OSS in part because the model sits behind the closed-source frontier, including its own GPT-5, which it released just two days later.
Meanwhile, Meta — long a champion of frontier open models — has delayed the release of its largest open model, Llama Behemoth, and suggested it may keep its future “superintelligence” models behind paywalls. Meta, which once described open source AI as a way to “control our own destiny,” now cites “novel safety concerns” as a reason to withhold its most capable models.
These decisions mark a dramatic pivot for both companies and reveal how different AI firms are converging on an [...]
---
Outline:
(02:02) Uncertainty Is Driving Precautionary Policy
(04:37) Precaution Disproportionately Chills Open Source
(06:48) Restrictions Demand Confident Evidence
(09:03) Precaution May Lead to Digital Feudalism
(12:54) We Need to Learn to Live with Uncertainty
(15:24) We Should Promote, Not Deter, Openness at the Frontier
---
Narrated by TYPE III AUDIO.
By Center for AI Safety