Dive deep into the nuances of measuring the quality of Large Language Model (LLM) prompts as we explore past methodologies and evaluate both qualitative assessments and large-scale testing techniques. Join us as we discuss the challenges of traditional metrics and the role of user feedback, and brainstorm new ways to gauge generative model performance.
—
Continue listening to The Prompt Desk Podcast for everything LLM & GPT, Prompt Engineering, Generative AI, and LLM Security.
Check out PromptDesk.ai for an open-source prompt management tool.
Check out Brad's AI Consultancy at bradleyarsenault.me.
Add Justin Macorin and Bradley Arsenault on LinkedIn.
Please fill out our listener survey here to help us create a better podcast: https://docs.google.com/forms/d/e/1FAIpQLSfNjWlWyg8zROYmGX745a56AtagX_7cS16jyhjV2u_ebgc-tw/viewform?usp=sf_link
Hosted by Ausha. See ausha.co/privacy-policy for more information.