
For this week's paper read, we dive into our own research.
We wanted to create a replicable, evolving dataset that can keep pace with model training so that you always know you're testing with data your model has never seen before. We also saw the prohibitively high cost of running LLM evals at scale, and have used our data to fine-tune a series of SLMs that perform just as well as their base LLM counterparts, but at 1/10 the cost.
So, over the past few weeks, the Arize team generated the largest public dataset of hallucinations, as well as a series of fine-tuned evaluation models.
We talk about what we built, the process we followed, and the bottom-line results. You can read the recap of LibreEval here. Dive into the research, or sign up to join us next time.
To learn more about AI observability and evaluation, join the Arize AI Slack community or get the latest on LinkedIn and X.