

In this episode, Kilian Lieret, Research Software Engineer, and Carlos Jimenez, Computer Science PhD Candidate at Princeton University, discuss SWE-bench and SWE-agent, two groundbreaking tools for evaluating and enhancing AI in software engineering.
Highlights include:
- SWE-bench: A benchmark for assessing AI models on real-world coding tasks (a minimal loading sketch follows this list).
- Addressing data leakage concerns in GitHub-sourced benchmarks.
- SWE-agent: An AI-driven system for navigating and solving coding challenges.
- Overcoming agent limitations, such as getting stuck in loops.
- The future of AI-powered code reviews and automation in software engineering.
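As a rough illustration of what a SWE-bench task looks like, the sketch below loads the benchmark from the Hugging Face Hub and prints a few fields of the first instances. This is not from the episode: the dataset name and field names (`princeton-nlp/SWE-bench_Lite`, `instance_id`, `problem_statement`, `patch`) reflect the publicly released dataset and should be treated as assumptions to verify against the official documentation.

```python
# Minimal sketch (assumption, not from the episode): peek at SWE-bench tasks
# using the Hugging Face `datasets` library. Dataset and field names are
# based on the public release of SWE-bench Lite.
from datasets import load_dataset

# SWE-bench Lite is the smaller evaluation subset often used for quick runs.
ds = load_dataset("princeton-nlp/SWE-bench_Lite", split="test")

for task in ds.select(range(3)):
    print(task["instance_id"])               # unique task identifier
    print(task["problem_statement"][:200])   # GitHub issue text the model must resolve
    print(len(task["patch"]), "chars in the reference (gold) patch")
    print("-" * 40)
```

Each instance pairs a real GitHub issue with the repository state it was filed against; a model is judged by whether its generated patch makes the repository's tests pass.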
By Databricks
