Consulting firms have turned AI into a default prescription: lead with a glossy demo, label it "transformation," and let the client's budget absorb the experimentation. But most organizations don't need more complexity—they need clearer processes, cleaner data, stronger controls, and leaders who will make hard operational decisions. When consultants push AI into problems that are fundamentally about incentives, handoffs, training, governance, or basic automation, they create the illusion of progress while postponing the real work.
Worse, AI is rarely "just a tool you add." It brings ongoing costs—data pipelines, integration, security, monitoring, audits, retraining, and change management—that quietly turn a pilot into a permanent spend line. It also shifts risk: privacy exposure, compliance obligations, and accountability gaps when probabilistic systems make high-stakes recommendations. The result is a mismatch between what clients actually need and what gets sold: expensive systems chasing fashionable narratives.
A more responsible consulting posture is simple: start with the outcome, compare non-AI options honestly, quantify lifecycle cost, and only recommend AI when it is clearly the cheapest reliable path to measurable results—then commit to being accountable for those results.
By David Linthicum