By Russ Gonnering at Brownstone.org.
Donald Berwick, one of the giants in the field of Medical Quality Improvement, is often credited with popularizing the phrase, "Every system is perfectly designed for the results it gets." I am indebted to Anna Reich for exploring the history of this saying. As it turns out, as usual, the history is a bit more "complex" and is a distillation of the ideas of multiple people.
This truism should not really be a surprise, though. Those of us who have raised children, or even a dog, understand that incentives matter, and incentives must be built into the system. What really IS surprising is that the "experts" to whom so much of our lives is entrusted, especially in healthcare, have such a poor understanding of this fact.
Let's do a "root cause analysis" of why the "experts" seem to have gotten so many things wrong when it comes to health and healthcare. If we follow the questioning deeper, we will eventually get to the answer that the "experts" do not really understand how the health/healthcare system works. They don't understand it because they lack the knowledge to separate what is "merely complicated" from what is "truly complex," and they lack that knowledge because their education never covered it. I know…I was one of those "experts" at a point in my career. I described my own epiphany in this Brownstone essay as well as multiple Substack posts.
In addition to my clinical career in Oculofacial Reconstructive Surgery, I had a "shadow career" and headed the Quality Improvement Program at a large tertiary-care medical center. We applied the methods of statistical quality control to healthcare, and we had some amazing success. But we had dismal failures as well, and that was puzzling. It was only when I read this article by David Snowden and Mary Boone that I realized what was missing.
Stop what you are doing and follow the hyperlink to the article so you understand the basis of this essay. If you can't do that, follow this to a 3-minute YouTube video that will explain the difference between merely complicated and truly complex.
It became clear to me that when we applied the statistical quality control approach to problems that were merely complicated, we were very successful. However, when we tried the same with those problems that were truly complex, we failed miserably. We needed a different toolset for those, and we needed to recognize emergent order where the elements of the problem worked together in ways difficult or even impossible to know ahead of time. Changing one element would disrupt the flow and produce other, unforeseen adaptive changes in the problem.
In a Complex Adaptive System, "the whole really is more than the sum of the parts." Efforts to conform the system to what we thought should work (when in reality it didn't work at all) led down the road to ultimate failure. We would only know "the answer" when we solved the problem! This of course is anathema to one schooled for years in the scientific method.
With the truly complex "wicked problems" described by Rittel and Webber, we can't realistically formulate the One hypothesis and test it with a huge fail-safe effort. We need to formulate multiple safe-fail hypotheses, because failure, and the constructive response to it, is essential to arriving at the optimum answer to the problem.
This process of constructively changing course is the basis of the concept Peter Sims described in Little Bets: How Breakthrough Ideas Emerge from Small Discoveries. This embrace of failure is completely counterintuitive to those in the health professions so accustomed to success. To avoid catastrophic failure, one must learn to recognize and expect small failures and profit from them. That is the only way to achieve the optimum result.
The predictability horizon of the emergent order in a Complex Adaptive System is very short. One must make changes on the fly, putting resources into what is working, then stopping and adapting when...