In this episode, Kilian Lieret, Research Software Engineer, and Carlos Jimenez, Computer Science PhD Candidate at Princeton University, discuss SWE-bench and SWE-agent, two groundbreaking tools for evaluating and enhancing AI in software engineering.
Highlights include:
- SWE-bench: A benchmark for assessing AI models on real-world coding tasks.
- Addressing data leakage concerns in GitHub-sourced benchmarks.
- SWE-agent: An AI-driven system for navigating and solving coding challenges.
- Overcoming agent limitations, such as getting stuck in loops.
- The future of AI-powered code reviews and automation in software engineering.