
Today we’re joined by Shreya Rajpal, founder and CEO of Guardrails AI. In our conversation with Shreya, we discuss ensuring the safety and reliability of language models for production applications. We explore the risks and challenges associated with these models, including different types of hallucinations and other LLM failure modes, as well as the susceptibility of the popular retrieval-augmented generation (RAG) technique to closed-domain hallucination and how this challenge can be addressed. We also cover the need for robust evaluation metrics and tooling for building with large language models. Lastly, we explore Guardrails, an open-source project that provides a catalog of validators that run on top of language models to efficiently enforce correctness and reliability.
The complete show notes for this episode can be found at twimlai.com/go/647.
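The episode doesn’t walk through Guardrails’ actual API, but as a rough illustration of the validator pattern Shreya describes, here is a minimal sketch of checks running on top of an LLM’s output with a re-prompting loop on failure. All names here (Validator, MaxLength, run_with_validators) are hypothetical, invented for this sketch, not Guardrails’ real interface.

```python
# Hypothetical sketch of the "validators on top of an LLM" pattern
# discussed in the episode; not the Guardrails library's actual API.
from dataclasses import dataclass
from typing import Callable, List


@dataclass
class ValidationResult:
    passed: bool
    message: str = ""


class Validator:
    """Base class: a check that runs on an LLM's output after generation."""

    def validate(self, output: str) -> ValidationResult:
        raise NotImplementedError


class MaxLength(Validator):
    """Example validator: enforce an output-length constraint."""

    def __init__(self, limit: int):
        self.limit = limit

    def validate(self, output: str) -> ValidationResult:
        if len(output) <= self.limit:
            return ValidationResult(passed=True)
        return ValidationResult(False, f"output exceeds {self.limit} characters")


def run_with_validators(
    llm: Callable[[str], str],
    prompt: str,
    validators: List[Validator],
    max_retries: int = 2,
) -> str:
    """Call the LLM, run every validator on the result, and re-prompt
    with the failure messages (one common corrective strategy) until
    all checks pass or the retry budget is exhausted."""
    for _ in range(max_retries + 1):
        output = llm(prompt)
        failures = []
        for v in validators:
            result = v.validate(output)
            if not result.passed:
                failures.append(result)
        if not failures:
            return output
        # Feed the failure messages back so the model can correct itself.
        prompt += "\nFix these issues: " + "; ".join(f.message for f in failures)
    raise RuntimeError("validation failed after retries")
```

Re-prompting is only one possible policy on failure; a validator framework could equally filter the offending span, fall back to a canned response, or raise to the caller.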