The TWIML AI Podcast (formerly This Week in Machine Learning & Artificial Intelligence)

Ensuring LLM Safety for Production Applications with Shreya Rajpal - #647

09.18.2023 - By Sam Charrington

Today we’re joined by Shreya Rajpal, founder and CEO of Guardrails AI. In our conversation with Shreya, we discuss ensuring the safety and reliability of language models for production applications. We explore the risks and challenges associated with these models, including different types of hallucinations and other LLM failure modes, as well as the susceptibility of the popular retrieval augmented generation (RAG) technique to closed-domain hallucination and how this challenge can be addressed. We also cover the need for robust evaluation metrics and tooling for building with large language models. Lastly, we explore Guardrails, an open-source project that provides a catalog of validators that run on top of language models to efficiently enforce correctness and reliability.
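The validators-on-top-of-an-LLM pattern discussed in the episode can be illustrated with a small sketch. The snippet below is not the Guardrails API; `llm_call`, `no_empty_output`, `max_length`, and `guarded_call` are hypothetical names used only to show the basic idea of running checks over model output and re-asking on failure.

```python
# A minimal sketch of the "validators on top of an LLM" pattern.
# NOT the Guardrails API: all names here are hypothetical stand-ins.
from dataclasses import dataclass
from typing import Callable, Optional


@dataclass
class ValidationResult:
    passed: bool
    reason: Optional[str] = None


def no_empty_output(text: str) -> ValidationResult:
    # Hypothetical validator: reject empty or whitespace-only completions.
    if not text.strip():
        return ValidationResult(False, "empty output")
    return ValidationResult(True)


def max_length(limit: int) -> Callable[[str], ValidationResult]:
    # Hypothetical validator factory: reject outputs over a length budget.
    def check(text: str) -> ValidationResult:
        if len(text) > limit:
            return ValidationResult(False, f"output exceeds {limit} chars")
        return ValidationResult(True)
    return check


def guarded_call(
    llm_call: Callable[[str], str],
    prompt: str,
    validators: list[Callable[[str], ValidationResult]],
    max_retries: int = 2,
) -> str:
    # Run the model, apply each validator, and re-ask on failure:
    # the enforce-and-retry loop that validators enable.
    for _ in range(max_retries + 1):
        output = llm_call(prompt)
        failures = [r.reason for v in validators
                    if not (r := v(output)).passed]
        if not failures:
            return output
        prompt = (f"{prompt}\n\nYour previous answer was rejected "
                  f"({'; '.join(failures)}). Try again.")
    raise ValueError(f"validation failed after {max_retries + 1} attempts")


if __name__ == "__main__":
    # Stub "LLM" so the sketch runs without any API key.
    fake_llm = lambda p: "Paris is the capital of France."
    print(guarded_call(fake_llm, "What is the capital of France?",
                       [no_empty_output, max_length(200)]))
```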

The complete show notes for this episode can be found at twimlai.com/go/647.
