

In this episode, Kilian Lieret, Research Software Engineer, and Carlos Jimenez, Computer Science PhD Candidate at Princeton University, discuss SWE-bench and SWE-agent, two groundbreaking tools for evaluating and enhancing AI in software engineering.
Highlights include:
- SWE-bench: A benchmark for assessing AI models on real-world coding tasks.
- Addressing data leakage concerns in GitHub-sourced benchmarks.
- SWE-agent: An AI-driven system for navigating and solving coding challenges.
- Overcoming agent limitations, such as getting stuck in loops.
- The future of AI-powered code reviews and automation in software engineering.
By Databricks