


The New Yorker dropped a year-and-a-half-long investigation into Sam Altman this week — and it answers a lot of questions I've had since November 2023, when his board fired him and then reinstated him in 72 hours. Reporters Andrew Marantz and Ronan Farrow got access to internal documents and sources inside OpenAI, including records kept by Ilya Sutskever and Dario Amodei, and the portrait they paint is of a leader who tells employees exactly what they need to hear to stay committed — and then behaves very differently behind closed doors. The safety team was promised 20% of compute; they got less than 2%.

What makes this more than a profile of a difficult boss is that Altman isn't running a normal company. He's at the center of decisions that will affect jobs, creativity, and the structure of economic life for the next generation — and there is currently no meaningful regulation, no democratic input, and no market mechanism capable of holding him accountable. The same day the New Yorker piece published, OpenAI released a 13-page document about industrial policy and the future of AI — full of ideas like adaptive safety nets and efficiency dividends that the company has never once lobbied for. I walk through why that timing matters, what the document actually says, and what the combination of these two releases tells us about how power really works in AI.

This is a story about who makes decisions, who profits, and who pays — and why journalism like the New Yorker's investigation may be one of the only tools we have left.

Originally published at The Rip Current. Paid subscribers get early access, exclusive analysis + full transcripts.
By Jacob Ward