

Walter Haydock draws a direct line from military risk management to the enterprise AI challenge. He argues that organizations need to stop doing "math with colors" and move toward quantitative assessment that assigns dollar values to potential AI failures. Much of the conversation in this episode focuses on ISO 42001, the global standard for AI management systems, which Haydock has championed and which his own firm has gone through. He offers a three-part taxonomy of AI governance frameworks: legislation you either comply with or don't, voluntary self-attestable frameworks like the NIST AI RMF, and externally certifiable standards like ISO 42001 that bring independent verification. Haydock outlines a forward-looking vision in which certification, insurance, and legal safe harbors reinforce one another: machine-readable audit data will eventually allow insurers to make informed underwriting decisions about AI risk, reducing uncertainty for both enterprises and their customers. As he acknowledges, though, we are still far from that environment, with AI audits today still roughly 90% manual.
Walter Haydock is the founder of StackAware, which helps AI-powered companies manage security, compliance, and privacy risk. Before entering the private sector, he served as a reconnaissance and intelligence officer in the U.S. Marine Corps, as a professional staff member for the Homeland Security Committee of the U.S. House of Representatives, and as an analyst at the National Counterterrorism Center. He is a graduate of the United States Naval Academy, Georgetown University's School of Foreign Service, and Harvard Business School.
Transcript · Deploy Securely (Haydock's Substack)
By Kevin Werbach