
This week we talk about modern AI benchmarks, taking a close look at Google's recent Gemini 2.5 release and its performance on key evaluations, notably Humanity's Last Exam (HLE). In the session we covered Gemini 2.5's architecture, its advancements in reasoning and multimodality, and its long context window. We also talked about how benchmarks like HLE and ARC-AGI-2 help us understand the current state and future direction of AI.
Read it on the blog: https://arize.com/blog/ai-benchmark-deep-dive-gemini-humanitys-last-exam/
Sign up to watch the next live recording: https://arize.com/resource/community-papers-reading/
Learn more about AI observability and evaluation, join the Arize AI Slack community, or get the latest on LinkedIn and X.