John Haggerty brings more than 25 years of product leadership experience at companies like Datasite, Prodege, and Highway.ai. As co-founder and CEO of BiasHawk, John leverages his expertise in product management, behavioral psychology, and AI to develop an AI-powered platform that acts like a behavioral clinical psychologist to diagnose cognitive bias and heuristics in other AI models.
In this episode of Product Momentum, John joins Sean and Dan to explore how AI is reshaping product work while also introducing new risks. John’s message is clear: as AI accelerates execution, product leaders must confront the invisible risks it introduces and double down on critical thinking, context, and judgment to deliver quality decision-making.
AI as an Accelerator, Not a Replacement
AI is dramatically compressing the time required to execute product work. Tasks that once took months can now be completed in hours. Yet, as we discover every day, speed does not eliminate the need for thoughtful product management. John argues that it merely shifts where product managers can and should focus their energy.
“As AI expedites the execution process,” John says, “it also allows us to automate the areas of our work where we really need to be involved in cognitive thinking, reasoning, and creativity.”
The Hidden Risk: Bias in AI Decision-Making
Large language models inherit the same cognitive biases found in human thinking, John adds. These biases influence not just outputs, but the reasoning behind decisions we make.
“It’s not what the decision is or what the output is,” John explains. “It’s more about how the AI model arrived at it.” This distinction is critical for product teams. Without understanding how AI arrives at conclusions, teams risk introducing flawed logic into their products, especially in high-stakes areas like hiring, healthcare, and financial management.
Monitoring AI: A New Responsibility for Product Teams
To address these challenges, John launched BiasHawk – an AI platform designed to monitor and evaluate AI systems for cognitive bias. The goal is not just testing outputs, but continuously assessing decision quality over time.
“We all understand that these systems are designed to evolve. They’re designed to change. They’re designed to drift. But who’s monitoring that to make sure that decision quality stays where it’s supposed to be?” As AI continues to evolve, the role of the product manager becomes even more critical — not less so. Execution may be faster, but judgment, context, and ethical responsibility remain firmly within our human domain.
John Haggerty, in his own words:
[06:50] AI is compressing execution time, allowing us to automate some of the tasks that we do as product professionals: cognitive thinking, reasoning, creativity.
[10:22] There’re lots of really good AI tools out there right now, but what there isn’t out there is anything that tests the fairness of our decision-making.
[16:04] Great. You’ve used AI to improve productivity by 20%. But what happens when that breaks? What if there’s bias and heuristics in these LLMs? Who’s catching that?
[17:55] Critical AI systems have the same blind spots, the same bad habits, that we as humans have. And why not? They’re built off of the flawed content we created.
[21:41] I don’t think an LLM could ever get depressed. But we have standard behavioral assessments that we could administer to an LLM — to find out where it falls with these biases and with the decision-making process it’s using.
[27:40] As humans, we make mistakes. Because AIs are built on what we know, those same mistakes are being repeated. Now we have AI learning from AI, and those mistakes are being amplified.
[30:59] The ‘why’ will always need to come from a human. At the end of it all, that’s what Product is.
The post 185 / Confronting Cognitive Bias in AI Models, with John Haggerty appeared first on ITX Corp.