
✉️ Stay Updated With 2nd Order Thinkers: https://www.2ndorderthinkers.com/
I translate the latest AI research into plain English and answer your most challenging questions so you can build your own informed view on AI.
+++
Executives keep saying they “understand AI risk.” The evidence disagrees. This episode is the antidote: a plain-English tour of MIT’s AI Risk Repository—a living map of 1,600+ failure modes across 65 frameworks—so you stop guessing and start checking.
In this episode, we:
Decode MIT’s causal taxonomy (who caused the harm, whether it was intentional, and when it appears) and why most failures surface after deployment
Turn the chaos of “65 frameworks” into one usable language for leaders, not vendors
Walk through 2025 failures you’ll actually recognize (healthcare models missing critical deterioration; AI-scaled extortion and employment scams)
Map a pragmatic playbook: pick the one domain that could sink you, shortlist five visible and expensive risks, and write the narrative that gets your team and board to act
📖 Read the full article here: https://www.2ndorderthinkers.com/p/ai-risk-isnt-a-tech-problem-but-a
👍 If you enjoyed this episode:
Like & Subscribe to get future deep dives without the hype
Comment: What’s the one AI risk that could actually hurt your org next quarter?
Share it with the person whose reputation depends on AI working
🔗 Connect with me on LinkedIn: https://www.linkedin.com/in/jing--hu/
Stay curious, stay skeptical 🧠
By Jing Hu