Epikurious

From Bias to Balance: Navigating LLM Evaluations



This research paper explores the challenges of evaluating Large Language Model (LLM) outputs and introduces EvalGen, a new interface designed to improve the alignment between LLM-generated evaluations and human preferences. EvalGen uses a mixed-initiative approach, combining automated LLM assistance with human feedback to generate and refine evaluation criteria and assertions. The study highlights a phenomenon called "criteria drift," where the process of grading outputs helps users define and refine their evaluation criteria. A qualitative user study demonstrates overall support for EvalGen, but also reveals complexities in aligning automated evaluations with human judgment, particularly regarding the subjective nature of evaluation and the iterative process of alignment. The authors conclude by discussing implications for future LLM evaluation assistants.
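The alignment loop described above — generating candidate assertions and keeping those that best match human grades — can be illustrated with a minimal sketch. This is not EvalGen's actual implementation; the assertion functions, the agreement metric, and all names below are hypothetical illustrations of the general idea.

```python
# Hypothetical sketch: select the candidate assertion that agrees
# most often with human pass/fail grades on sample outputs.

def no_placeholder(text: str) -> bool:
    """Candidate assertion: output contains no template placeholders."""
    return "{" not in text and "}" not in text

def short_enough(text: str) -> bool:
    """Candidate assertion: output stays under 100 characters."""
    return len(text) < 100

def select_assertion(candidates, graded_examples):
    """Pick the assertion that agrees most with human grades.

    graded_examples: list of (output_text, human_passed) pairs.
    """
    def agreement(assertion):
        matches = sum(
            assertion(text) == passed for text, passed in graded_examples
        )
        return matches / len(graded_examples)
    return max(candidates, key=agreement)

# Human-graded sample outputs (hypothetical data).
graded = [
    ("A clean answer.", True),
    ("Dear {name}, thanks!", False),
    ("Hello world", True),
]

best = select_assertion([no_placeholder, short_enough], graded)
```

Here `no_placeholder` agrees with all three human grades while `short_enough` agrees with only two, so it would be selected. As new grades arrive and criteria drift, the selection can be re-run, mirroring the iterative alignment the paper describes.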



Epikurious, by Alejandro Santamaria Arza