Lakshya A. Agrawal is a Ph.D. student at U.C. Berkeley! Lakshya has led the research behind GEPA, one of the newest innovations in DSPy and the use of Large Language Models as Optimizers! GEPA makes three key innovations in how exactly we use LLMs to propose prompts for LLMs: (1) Pareto-Optimal Candidate Selection, (2) Reflective Prompt Mutation, and (3) System-Aware Merging. The podcast discusses all of these details further, as well as topics such as Test-Time Training and the LangProBe benchmarks used in the paper! I hope you find the podcast useful!
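To give a flavor of the first idea discussed in the episode, here is a minimal, hypothetical sketch of Pareto-optimal candidate selection: rather than keeping only the single best prompt on average, a candidate survives if it is the top scorer on at least one task, which preserves diverse prompting strategies. The function and data names below are illustrative assumptions, not taken from the GEPA implementation.

```python
# Hypothetical sketch of Pareto-optimal candidate selection.
# A candidate prompt survives if it achieves the best score on
# at least one task, so specialists are not discarded in favor
# of a single average-best prompt.

def pareto_candidates(scores):
    """scores: dict mapping candidate name -> list of per-task scores.
    Returns the set of candidates that are best on at least one task."""
    num_tasks = len(next(iter(scores.values())))
    winners = set()
    for t in range(num_tasks):
        best = max(s[t] for s in scores.values())
        winners.update(name for name, s in scores.items() if s[t] == best)
    return winners

# Illustrative example: three candidate prompts scored on three tasks.
scores = {
    "prompt_a": [0.9, 0.2, 0.5],
    "prompt_b": [0.4, 0.8, 0.5],
    "prompt_c": [0.3, 0.3, 0.4],
}
print(sorted(pareto_candidates(scores)))  # prompt_a and prompt_b survive
```

Note how `prompt_c` is dropped because it never wins any task, while both `prompt_a` and `prompt_b` are kept despite neither dominating the other overall.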