
In this episode of the Engineering Enablement podcast, host Abi Noda is joined by Quentin Anthony, Head of Model Training at Zyphra and a contributor at EleutherAI. Quentin participated in METR’s recent study on AI coding tools, which revealed that developers often slowed down when using AI—despite feeling more productive. He and Abi unpack the unexpected results of the study, which tasks AI tools actually help with, and how engineering teams can adopt them more effectively by focusing on task-level fit and developing better digital hygiene.
Where to find Quentin Anthony:
• LinkedIn: https://www.linkedin.com/in/quentin-anthony/
• X: https://x.com/QuentinAnthon15
Where to find Abi Noda:
• LinkedIn: https://www.linkedin.com/in/abinoda
In this episode, we cover:
(00:00) Intro
(01:32) A brief overview of Quentin’s background and current work
(02:05) An explanation of METR and the study Quentin participated in
(11:02) Surprising results of the METR study
(12:47) Quentin’s takeaways from the study’s results
(16:30) How developers can avoid bloated code bases through self-reflection
(19:31) Signs that you’re not making progress with a model
(21:25) What is “context rot”?
(23:04) Advice for combating context rot
(25:34) How to make the most of your idle time as a developer
(28:13) Developer hygiene: the case for selectively using AI tools
(33:28) How to interact effectively with new models
(35:28) Why organizations should focus on tasks that AI handles well
(38:01) Where AI fits in the software development lifecycle
(39:40) How to approach testing with models
(40:31) What makes models different
(42:05) Quentin’s thoughts on agents
Referenced: