In this episode, we take a look at the transition from "vibe-coding" to shipping verifiable, production-grade AI applications. This is the critical shift from relying on "gut feel" and prompt tinkering to implementing rigorous audit trails, versioning, and security controls. We discuss why many AI pilots fail due to a lack of explainability and the specific risks of privilege expansion and data leakage in agentic workflows, and we outline how to use Microsoft Foundry/Azure AI Studio to operationalize your models like true regulated software.
(00:00) - Intro and catching up.
(05:30) - Show content starts.
Show links
- RedAmon (GitHub) for automated agentic offensive security
- Give us feedback!
By Tobias Zimmergren, Jussi Roine