

Lakshya A. Agrawal is a Ph.D. student at U.C. Berkeley! Lakshya has led the research behind GEPA, one of the newest innovations in DSPy and in the use of Large Language Models as optimizers! GEPA makes three key innovations in how LLMs are used to propose prompts for LLMs: (1) Pareto-Optimal Candidate Selection, (2) Reflective Prompt Mutation, and (3) System-Aware Merging. The podcast discusses all of these in further detail, along with topics such as Test-Time Training and the LangProBe benchmarks used in the paper! I hope you find the podcast useful!
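For listeners curious what Pareto-Optimal Candidate Selection means in practice, here is a minimal, illustrative Python sketch of the general idea (the function names, toy scores, and win-count weighting are assumptions for illustration, not the exact procedure from the GEPA paper): each prompt candidate is scored per task instance, only candidates that are best on at least one instance are kept, and the next candidate to mutate is sampled from that set.

```python
import random

def pareto_frontier(candidates, scores):
    """Keep every candidate that achieves the top score (possibly tied)
    on at least one task instance. scores[c] is the per-instance score list."""
    n_instances = len(next(iter(scores.values())))
    frontier = set()
    for i in range(n_instances):
        best = max(scores[c][i] for c in candidates)
        for c in candidates:
            if scores[c][i] == best:
                frontier.add(c)
    return frontier

def select_candidate(candidates, scores, rng=random):
    """Sample the next candidate to mutate from the frontier,
    weighted by how many task instances each candidate 'wins'."""
    frontier = pareto_frontier(candidates, scores)
    n_instances = len(next(iter(scores.values())))
    wins = {c: 0 for c in frontier}
    for i in range(n_instances):
        best = max(scores[c][i] for c in candidates)
        for c in frontier:
            if scores[c][i] == best:
                wins[c] += 1
    pool = list(frontier)
    return rng.choices(pool, weights=[wins[c] for c in pool], k=1)[0]

# Toy usage: three prompt candidates scored on four task instances.
scores = {
    "prompt_a": [1.0, 0.0, 0.5, 0.2],
    "prompt_b": [0.4, 0.9, 0.5, 0.1],
    "prompt_c": [0.3, 0.2, 0.4, 0.0],  # never best anywhere, so never selected
}
print(select_candidate(list(scores), scores))
```

The point of keeping every per-instance winner, rather than only the single best-on-average candidate, is that prompts with complementary strengths stay in the pool and can later be combined.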
By Weaviate