


In this episode, Kim Jones sits down with Eric Nagel, a former CISO with a rare blend of engineering, legal, and patent expertise, to unpack what responsible AI really looks like inside a modern enterprise. Eric breaks down the difference between traditional machine learning and generative AI, why nondeterministic outputs can be both powerful and risky, and how issues like bias, hallucinations, and data leakage demand new safeguards—including AI firewalls.
He also discusses what smaller organizations can do to manage AI risk, how tools like code-generation models change expectations for developers, and the evolving regulatory landscape shaping how companies must deploy AI responsibly.
Want more CISO Perspectives?
Check out a companion blog post by our very own Ethan Cook, where he breaks down key insights, shares behind-the-scenes context, and highlights research that complements this episode.
By N2K Networks