

By Databricks · 4.8 (2020 ratings)
In this episode, Kilian Lieret, Research Software Engineer, and Carlos Jimenez, Computer Science PhD Candidate at Princeton University, discuss SWE-bench and SWE-agent, two groundbreaking tools for evaluating and enhancing AI in software engineering.
Highlights include:
- SWE-bench: A benchmark for assessing AI models on real-world coding tasks (see the sketch after this list).
- Addressing data leakage concerns in GitHub-sourced benchmarks.
- SWE-agent: An AI-driven system for navigating and solving coding challenges.
- Overcoming agent limitations, such as getting stuck in loops.
- The future of AI-powered code reviews and automation in software engineering.
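For listeners who want to poke at the benchmark itself, here is a minimal sketch of loading a SWE-bench task instance. The dataset id (princeton-nlp/SWE-bench_Lite) and field names (repo, instance_id, problem_statement) are taken from the publicly released Hugging Face dataset, not from the episode, so treat them as assumptions to verify against the current release.

```python
# Minimal sketch: inspect one SWE-bench task instance.
# Assumes the public Hugging Face release of SWE-bench Lite;
# dataset id and field names are assumptions, not from the episode.
from datasets import load_dataset

# The Lite split is the smaller evaluation set published by the SWE-bench
# authors; "princeton-nlp/SWE-bench" is the full benchmark.
dataset = load_dataset("princeton-nlp/SWE-bench_Lite", split="test")

task = dataset[0]
# Each instance pairs a real GitHub issue with the repository state it was
# filed against; a model (or agent) must produce a patch that resolves it.
print(task["repo"])                      # source GitHub repository
print(task["instance_id"])               # unique id for the issue/PR pair
print(task["problem_statement"][:300])   # the issue text the model sees
```

This is only a data-inspection sketch; actually scoring a model involves applying its generated patch in the repository and running the benchmark's test harness, which the episode discusses at a higher level.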
