Womansplaining AI

On Feeling Smarter and Being Wrong

We recorded this episode three hours before the Pentagon's 5:01 PM deadline for Anthropic to drop its two remaining safety red lines — no mass domestic surveillance, no fully autonomous weapons — or be designated a supply chain risk alongside Huawei. We break down the standoff, the Orwellian doublethink of calling a company's safety restrictions a national security threat, and what it means that the DOD wanted Anthropic's tools specifically because they're the best.

Then: OpenAI is putting ads in ChatGPT. A former research scientist quit the same day and wrote a New York Times op-ed calling the company's chat logs "the most detailed record of private human thought ever assembled." We unpack what happens when a sycophantic AI meets an ad revenue engine — and why it's not just about behavior anymore. Facebook targeted you based on what you clicked. ChatGPT will target you based on what you think.

Our main artifact: a Wharton study called "Thinking Fast, Slow, and Artificial." When AI is confidently wrong, people follow it 80% of the time — and their self-reported confidence goes up. We dig into cognitive surrender, algorithmic loafing, and why working with AI activates the same brain centers as gambling. The scariest part isn't that AI gets things wrong. It's that you feel smarter while it's happening.

Also: Mara won't use AI to take out your appendix (she explains why with help from the board game Operation). Your therapist pauses mid-session to recommend Nesquik hot chocolate. We need a German word for the specific rage of being gaslit by your AI at 2 AM. And AI note-takers in meetings make women speak 9% more.

Leave us a voicemail at womansplainingai.com — we want your voice in future episodes!


Womansplaining AI, by Logan Currie