


In this episode, we look at the transition from "vibe-coding" to shipping verifiable, production-grade AI applications. This is the critical shift from relying on gut feel and prompt tinkering to implementing rigorous audit trails, versioning, and security controls. We discuss why many AI pilots fail due to a lack of explainability, the specific risks of privilege expansion and data leakage in agentic workflows, and outline how to use Microsoft Foundry/Azure AI Studio to operationalize your models like true regulated software.
(00:00) - Intro and catching up.
(05:30) - Show content starts.
Show links
- RedAmon (GitHub) for automated agentic offensive security
- Give us feedback!
By Tobias Zimmergren and Jussi Roine
4.7 · 1212 ratings