The war in Iran has revealed a growing rift between Silicon Valley and the state. Companies like Anthropic have tried to draw ethical "red lines" around autonomous lethality, only to be designated a "supply chain risk" and excluded from contracts, while others, like OpenAI's Sam Altman, have "rushed in" to fill the vacuum. This race to the bottom suggests that ethics are being treated not as a constraint but as an obstacle to be bypassed by the highest bidder.
In January 2026, the Department of War released its "AI Warfare Fighter Strategy," making "maximum lethality" official doctrine. We now stand at the threshold of a future where swarms of cheap drones autonomously hunt individuals based on statistical likelihoods. The question we must face is no longer whether we can automate war, but whether we can survive the loss of the human "friction" that once kept our most violent impulses in check. Are we truly ready for a world where the code of the machine outpaces the conscience of man?
Become a supporter of this podcast: https://www.spreaker.com/podcast/shoulder-roll-virtual-boxing--6207307/support.
This episode includes AI-generated content.