AI can still sometimes hallucinate and give less than optimal answers. To address this, we are joined by Arize AI's Co-Founder Aparna Dhinakaran for a discussion on Observability and Evaluation for AI. We begin by discussing the challenges of AI Observability and Evaluation. For example, how does "LLM as a Judge" work? We conclude with some valuable advice from Aparna for first-time entrepreneurs.
Begin Observing and Evaluating your AI Applications with Open Source Phoenix:
https://phoenix.arize.com/
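As a rough illustration of the "LLM as a Judge" pattern discussed in the episode, here is a minimal sketch. The judge model call is stubbed out as a plain callable (`call_llm` is a hypothetical stand-in, not a Phoenix or Arize API); in practice you would send the prompt to an actual LLM:

```python
# Minimal "LLM as a Judge" sketch. An evaluator LLM grades another
# model's answer against stated criteria and returns a pass/fail label.

JUDGE_TEMPLATE = """You are evaluating an AI assistant's answer.
Question: {question}
Answer: {answer}
Is the answer factually correct and relevant? Reply with exactly one word:
"correct" or "incorrect"."""


def build_judge_prompt(question: str, answer: str) -> str:
    """Fill the evaluation template with the example under test."""
    return JUDGE_TEMPLATE.format(question=question, answer=answer)


def parse_verdict(raw: str) -> bool:
    """Map the judge model's free-text reply to a boolean label."""
    return raw.strip().lower().startswith("correct")


def evaluate(question: str, answer: str, call_llm) -> bool:
    """Run one judge evaluation. `call_llm` is any prompt -> text callable."""
    return parse_verdict(call_llm(build_judge_prompt(question, answer)))
```

This is only the core loop; real evaluation frameworks add batching over datasets, multiple criteria, and logging of verdicts alongside traces.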
AWS Hosts: Nolan Chen & Malini Chatterjee
Email Your Feedback: [email protected]
By AWS re:Think