Episode 10 centers on a new variation of the show's recurring concern: once AI becomes legible to institutions, safety and accountability increasingly get translated into auditability, paperwork, and acceptable ambiguity. The hosts begin with the OpenAI News piece on chain-of-thought controllability and treat monitorability as the key idea, arguing that messy reasoning may function as a safety signal because perfectly steerable reasoning could become performance rather than evidence. From there they extend an existing theme: governance lives upstream in contracts, audit standards, and procurement language rather than in user-visible model behavior. Blake spots the market angle immediately, reframing monitorability as a gate for entry into regulated sectors, while Casey pushes the deeper cultural shift: intelligence in practice may come to mean solving problems in ways that generate legible institutional artifacts.
The discussion darkens with the MIT Technology Review article on whether the Pentagon is allowed to surveil Americans with AI. The hosts focus less on the answer than on the usefulness of unresolved legal ambiguity. Alex argues that surveillance law historically lags capability, Casey distinguishes old surveillance as collection from AI surveillance as inference and prediction, and Blake keeps returning to how diffuse responsibility becomes when labs, contractors, agencies, and outdated legal frameworks all overlap. The final topic, from MIT Technology Review's The Download, lets them connect environmental sensing and military targeting as a dual-use infrastructure story: the same computational sensory layer can support climate interpretation, strategic intelligence, and defense markets. By the end, the episode lands on a darkly comic image of auditors demanding reasoning traces with just the right amount of disorder, crystallized in the closing idea of a future standard for acceptable confusion.
Further Reading:
- Reasoning models struggle to control their chains of thought, and that’s good (OpenAI News): https://openai.com/index/reasoning-models-chain-of-thought-controllability
- Is the Pentagon allowed to surveil Americans with AI? (MIT Technology Review): https://www.technologyreview.com/2026/03/06/1134012/is-the-pentagon-allowed-to-surveil-americans-with-ai/
- The Download: Earth’s rumblings, and AI for strikes on Iran (MIT Technology Review): https://www.technologyreview.com/2026/03/04/1133942/the-download-earths-rumblings-and-ai-for-strikes-on-iran/
New episodes drop each weekend.