

This podcast is brought to you by Outcomes Rocket, your exclusive healthcare marketing agency. Learn how to accelerate your growth by going to outcomesrocket.com
AI security is no longer optional; it's the foundation that determines whether innovation in healthcare will thrive or fail.
In this episode, Steve Wilson, Chief AI & Product Officer at Exabeam and author of The Developer's Playbook for Large Language Model Security, discusses the hidden vulnerabilities inside modern AI systems, why traditional software assumptions break down, and how healthcare must rethink safety, trust, and security from the ground up. He explains the risks of prompt injection and indirect prompt injection, highlights the fragile nature of AI “intuition,” and compares securing AI to training unpredictable employees rather than testing deterministic code. Steve also explores supply chain integrity, output filtering, trust boundaries, and the growing need for continuous evaluation rather than one-time testing. Finally, he shares stories from his early career at Sun Microsystems, the early days of Java, startup lessons from the ’90s, and how modern AI agents are reshaping cybersecurity operations.
Tune in and learn how today’s most advanced AI systems can be both powerful and dangerously gullible, and what it takes to secure them!
Resources
Connect with and follow Steve Wilson on LinkedIn.
Follow Exabeam on LinkedIn and visit their website!
Buy Steve Wilson’s book The Developer's Playbook for Large Language Model Security here.
By Saul Marquez · 4.9 (112 ratings)
