Views Expressed Podcast

AI in WarGames



I’ve been trying to introduce my kids to classic movies that I don’t think they’ll get exposed to anywhere else. (And yes, at this point, I consider the movies of my own childhood to be “classic”). This week, we watched WarGames.

It holds up!


If you’re not familiar with the film, a young Matthew Broderick plays David Lightman, a high school student with an affinity for computers. After hacking into his school’s computer to change his grades, he tries to hack into a computer company to get a sneak peek at a new game. The harmless hacking he intends goes awry when, at the height of the Cold War, he inadvertently breaks into the computer that controls the North American Aerospace Defense Command’s (NORAD’s) nuclear missile operations.

The film has all the 80s nostalgia you could want: 5¼-inch floppy disks, dial-up modems, and monochrome monitors. I should mention for non-military readers that much of the film’s portrayal of the military has far more Hollywood in it than reality. The fictional General Beringer—a clear caricature of the historical General Curtis LeMay—is clownishly aggressive in his response to perceived Soviet escalation. But we can look past these shortcomings and focus on the tech.

Setting aside my quibbles with the military accuracy, the movie’s treatment of artificial intelligence, especially given the period in AI history in which it falls, holds up surprisingly well.

As the movie opens, the Air Force runs an exercise to determine whether the officers responsible for releasing the world-ending barrage of nuclear missiles from silos across the western United States will really be willing to turn their keys.

In the movie, several crew members in their bunkers deep underground—unaware that the task is only an exercise and thinking instead that they are inaugurating the end of the world—refuse to follow the orders. In the immediate aftermath, the film shows us one meeting in which some government officials advocate—using words that hit our modern ears in a very familiar way—for “taking the human out of the loop.”

This brief scene at the top of the movie asks viewers to consider two questions. First, if what matters from a command and control perspective is the President’s order to fire nuclear weapons, then why interpose an additional layer of human conscience between the President and the nuclear weapons? If that fateful day were to come, the Air Force lieutenants and captains buried deep in the ground would know only that the President had ordered them to turn their keys and launch their missiles. They would have no awareness of world events and they would have no special insights. They certainly would not have access to all the intelligence to which the President of the United States has access. So, why put a human in that command chain at all?

But then the brief scene asks viewers to consider a second question even more disturbing than the first: What is it that informs the President’s decision to release nuclear weapons if not merely computer outputs? If it is radars, not people, that sense the Soviet missile launches and if it is computers, not people, that calculate the missile trajectories and determine that the US is under a nuclear attack, then isn’t the President merely implementing a decision made by a computer, rather than implementing his own decision?

As the government and military officials contemplate this future of computer command and control, there seems to be a shared consensus that the system is trustworthy and reliable and, perhaps most importantly, that its behavior is predictable.

The high school student, David Lightman (Broderick), is one of those 1980s movie kids whose poor grades in school belie a hidden genius. It is Broderick’s character, rather than the military generals or the computer technicians at NORAD, who realizes that the computer (affectionately called “Joshua”) is not a deterministic software system, but a learning system.

As the film reaches its dramatic climax, Lightman has Joshua play tic tac toe against itself thousands of times. As the pace of these tic tac toe games increases, the lights and screens in the command center flicker. Someone offscreen says, “It must be caught in a loop. It’s drawing more and more power from the rest of the system.”

Joshua learns. This is clearly the point director John Badham and writers Lawrence Lasker and Walter F. Parkes want their viewers to notice, and it is especially interesting given where WarGames falls in the history of AI. In 1983, Badham wants his viewers to believe—or at least to imagine—that machine learning is possible and that, if it ever works, it could become very capable.

Ok, here come the spoilers. If you’ve been waiting to watch WarGames for yourself and you just haven’t had a chance yet in the 42 years since its release, skip down a couple of paragraphs.

After being exposed to many games of tic tac toe, Joshua somehow realizes (or knows, or intuits) that it should also expose itself to many games of global thermonuclear war. Again, the screens begin to show iterations of the simulation—faster and faster. We don’t know how many iterations it experiences in this modeling and simulation environment—maybe thousands; maybe millions.

Then the screens in the command center go black and in a scene that has resonated through decades of popular culture, the computer, Joshua, says of global thermonuclear war, “A strange game. The only winning move is not to play.”

There is something profound in this 1983 Cold War techno-thriller. I don’t mean the conclusion that there is no winner in a nuclear exchange between the US and the Soviet Union. That’s interesting, too, but Badham didn’t need to invoke AI to make that point. I’m after something different:

The film makes a clear statement about machine learning at a time when machine learning was not just out of fashion but outside of public consciousness.

In 1983, the widely (though not universally) shared view among AI researchers was that using artificial neural networks to create machines that could learn had been a clever idea in the 1950s, but it had been tried and it had failed. Frank Rosenblatt’s contribution to the field in 1958 was a neural network called the “Perceptron” capable of learning—without being specifically programmed—to recognize simple patterns in computer punch cards.

Nearly seventy years ago, Rosenblatt proved that machines could learn.
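For the technically curious, the whole idea fits in a few lines of modern Python. The sketch below is my own illustration, not Rosenblatt’s original experiment: instead of punch-card patterns, it learns the logical AND of two inputs, using his rule of nudging the weights whenever the output is wrong.

```python
# A minimal sketch of Rosenblatt's perceptron learning rule.
# The task (logical AND) and the learning rate are illustrative
# choices of mine, not details from Rosenblatt's work.

# Training data: input pairs and their target outputs.
examples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

weights = [0.0, 0.0]  # one weight per input
bias = 0.0
learning_rate = 0.1

for epoch in range(20):  # repeated passes over the examples
    for (x1, x2), target in examples:
        # Weighted sum followed by a hard threshold.
        output = 1 if (weights[0] * x1 + weights[1] * x2 + bias) > 0 else 0
        error = target - output
        # The perceptron rule: adjust each weight in proportion
        # to its input and the size of the error.
        weights[0] += learning_rate * error * x1
        weights[1] += learning_rate * error * x2
        bias += learning_rate * error

print(weights, bias)  # a decision rule learned from examples alone
```

No one programs the final weights; the machine finds them by being shown examples. That, in miniature, is what Rosenblatt demonstrated.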

The development of neural networks for machine learning stalled after that, though. By the end of the next decade, Marvin Minsky and Seymour Papert had published Perceptrons (1969), a book that sought to show that neural networks would never be able to achieve successes beyond the kinds of science projects of relatively limited utility that Rosenblatt had produced. The perceptron was all but dead.

The AI boom that did arise in the 1980s and continued into the 1990s—a period I wrote about last week—had almost nothing to do with perceptrons, machine learning, or neural networks. That period of AI development was focused on deterministic systems, sometimes called “expert systems.”
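To make the contrast concrete, here is a cartoonish sketch, entirely my own invention and not anything from the film, of the if/then style of an expert system. Every behavior is written down in advance by a human; given the same inputs, it always produces the same output, and it never learns anything new:

```python
# A deliberately simplified illustration of an expert system:
# hand-authored if/then rules, fully deterministic, no learning.

def assess_threat(radar_contacts: int, inbound_trajectories: int) -> str:
    # Each rule below encodes a human expert's judgment in advance.
    if radar_contacts == 0:
        return "all clear"
    if inbound_trajectories == 0:
        return "contacts detected, none inbound"
    if inbound_trajectories < 5:
        return "possible attack: recommend human review"
    return "attack profile matched: alert command"

print(assess_threat(radar_contacts=12, inbound_trajectories=7))
```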

This distinction brings us back to the film. Right at the top, the readiness exercise in which so many nuclear missile officers fail to turn the keys in their silos led some fictional government officials around the conference table to say, perhaps we should take the human “out of the loop.” The assumption, it seems to me, is that, since the computer is going to assess whether the US is under nuclear attack, perhaps the computer should be responsible for initiating the response. And if all that such a decision requires is if/then logic trees, then a computer will probably perform better at this kind of thing than a human can.

I should note that even in our own time of rapid AI advancement, there is only one reference anywhere in US military regulations, doctrine, or policy that promises to keep a “human in the loop,” and it is in the 2022 Nuclear Posture Review, which says that

The United States will maintain a human ‘in the loop’ for all actions critical to informing and executing decisions by the President to initiate and terminate nuclear weapon employment.

One of the themes in the film is that these professionals sitting (and smoking) in dark, windowless rooms at NORAD don’t realize what the computer is capable of. Only David Lightman, high schooler extraordinaire, has the insight to realize that, unlike ordinary computer systems of the time, Joshua is not merely executing if/then logic scripts but is instead learning.

In modern terms, Joshua was trained using reinforcement learning in a modeling and simulation (“mod & sim”) environment.
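For readers who want to see what that looks like in practice, here is a minimal sketch of self-play reinforcement learning on tic tac toe. The specific algorithm (a simple tabular Monte Carlo method) and all the parameters are my own illustrative choices; the film, of course, specifies none of this:

```python
# A minimal sketch of learning tic tac toe by self-play.
# After each game, every move played is credited with the final
# outcome: +1 for the winner's moves, -1 for the loser's, 0 for draws.
import random
from collections import defaultdict

WINS = [(0, 1, 2), (3, 4, 5), (6, 7, 8), (0, 3, 6),
        (1, 4, 7), (2, 5, 8), (0, 4, 8), (2, 4, 6)]

def winner(board):
    """Return "X" or "O" for a win, "draw" for a full board, else None."""
    for a, b, c in WINS:
        if board[a] != " " and board[a] == board[b] == board[c]:
            return board[a]
    return "draw" if " " not in board else None

Q = defaultdict(float)     # learned value of each (board, move) pair
ALPHA, EPSILON = 0.3, 0.1  # learning rate and exploration rate

def choose(board, moves):
    if random.random() < EPSILON:                   # explore occasionally
        return random.choice(moves)
    return max(moves, key=lambda m: Q[(board, m)])  # otherwise exploit

for episode in range(50_000):  # thousands of games against itself
    board, player, history = " " * 9, "X", []
    while True:
        moves = [i for i, c in enumerate(board) if c == " "]
        move = choose(board, moves)
        history.append((player, board, move))
        board = board[:move] + player + board[move + 1:]
        result = winner(board)
        if result is not None:
            for side, b, m in history:
                reward = 0.0 if result == "draw" else (1.0 if result == side else -1.0)
                Q[(b, m)] += ALPHA * (reward - Q[(b, m)])
            break
        player = "O" if player == "X" else "X"

# The learned values now encode strategy no one programmed explicitly.
print(max(range(9), key=lambda m: Q[(" " * 9, m)]))  # typically 4, the center
```

The punch line is the same one the film dramatizes: nobody writes down the strategy. The program plays itself thousands of times, and the strategy emerges from the outcomes.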

It is this nuance—alongside the 1980s nostalgia and high-school-level hijinks—that made the movie such a joy to watch in 2025. What John Badham was asking his audience to consider back in 1983 was a distinction that didn’t really enter the public consciousness until more than 30 years later. It is commonplace now for ordinary people to think about how machines that can learn might be relevant to their businesses or to their academic work or to the devices their children use. But the fictional David Lightman saw it back in 1983.

Wittingly or not, Badham even included dramatic details that can serve as harbingers for our own time. When Joshua is iterating on games of global thermonuclear war, it drains power from the rest of the building—that’s why the lights flicker. Electrical power has proven to be one of the most important enablers of modern AI. According to some predictions, by 2027, the electricity required to train the largest machine learning models could rival the annual electricity consumption of a small country.

With the benefit of hindsight, we can also see some of the hype around AI foreshadowed in the film. The crux of the plot’s ending is that, by learning from thousands of iterations of the game, the computer can reach a profound conclusion about our world that may elude even the world’s leading experts: that nuclear war is unwinnable.

I don’t want to start a debate about nuclear war (which raises a whole basket of questions about ethics, psychology, and the limitations of human rationality). All I want to point out is that those who are bullish on the 21st century version of machine learning believe that, by exposing complex neural networks to massive datasets that pertain to our world, perhaps the AI will reveal something profound to us about our world that we could not have discovered otherwise.

I’m skeptical on this point. Large models trained on massive training datasets using hitherto unfathomable computing power seem to me, at best, only to approach human wisdom asymptotically, rather than to exceed it. But perhaps the machines will prove me wrong in time.

In any event, Badham, Lasker, and Parkes wanted us to wrestle with these questions in 1983, decades before the technology was mature enough that we felt compelled to take them seriously.

It’s easy to see why it took us more than 30 years to see what WarGames was trying to show us. That’s because it took more than 30 years for neural networks to become capable of the kind of learning Joshua could accomplish. Rosenblatt showed that machine learning was possible, but Rosenblatt was able to create only a single layer of neurons. It would take decades for AI researchers to learn the mathematical techniques that would allow them to train multiple layers of a neural network at the same time. Those techniques would ultimately unlock new possibilities for neural networks—possibilities that have now given us capabilities with which we interact every day.
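That key technique was backpropagation: using the chain rule to carry error signals backward through every layer at once. The sketch below, with an architecture and hyperparameters of my own choosing, learns XOR, exactly the kind of pattern Minsky and Papert showed a single-layer perceptron could never represent:

```python
# A minimal two-layer network trained with backpropagation to learn
# XOR, something no single-layer perceptron can do. Architecture and
# hyperparameters are illustrative choices, not from any source.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)  # XOR targets

W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)  # hidden layer
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)  # output layer
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
lr = 1.0

for step in range(5000):
    # Forward pass through both layers.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Backward pass: the chain rule pushes the error through BOTH
    # layers at once, which is the step Rosenblatt's era lacked.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * (h.T @ d_out); b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * (X.T @ d_h);   b1 -= lr * d_h.sum(axis=0)

print(out.round(2).ravel())  # should approach [0, 1, 1, 0]
```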

Other films seem to have cemented themselves in popular conceptions of AI. I’m thinking especially of Ridley Scott’s Blade Runner (1982) and James Cameron’s The Terminator (1984). But the humanoid uprising depicted in those films hasn’t happened yet, and if that future does await us, it doesn’t seem to be arriving any time soon. Sandwiched between those two films in 1983, WarGames offers both tech predictions that turned out to be right and social commentary about AI in the military decision-making context that is every bit as relevant today as it was then.

Credit where it’s due

The Views Expressed are those of the author and do not necessarily reflect those of the US Air Force, the Department of Defense, or any other part of the US Government.



