


Greg Brockman, co-founder and president of OpenAI, joins us to talk about GPT-5 and GPT-OSS, the future of software engineering, why reinforcement learning is still scaling, and how OpenAI is planning to get to AGI.
00:00 Introductions
01:04 The Evolution of Reasoning at OpenAI
04:01 Online vs Offline Learning in Language Models
06:44 Sample Efficiency and Human Curation in Reinforcement Learning
08:16 Scaling Compute and Supercritical Learning
13:21 Wall Clock Time Limitations in RL and Real-World Interactions
16:34 Experience with ARC Institute and DNA neural networks
19:33 Defining the GPT-5 Era
22:46 Evaluating Model Intelligence and Task Difficulty
25:06 Practical Advice for Developers Using GPT-5
31:48 Model Specs
37:21 Challenges in RL Preferences (e.g., try/catch)
39:13 Model Routing and Hybrid Architectures in GPT-5
43:58 GPT-5 Pricing and Compute Efficiency Improvements
46:04 Self-Improving Coding Agents and Tool Usage
49:11 On-Device Models and Local vs Remote Agent Systems
51:34 Engineering at OpenAI and Leveraging LLMs
54:16 Structuring Codebases and Teams for AI Optimization
55:27 The Value of Engineers in the Age of AGI
58:42 Current State of AI Research and Lab Diversity
01:01:11 OpenAI’s Prioritization and Focus Areas
01:03:05 Advice for Founders: It's Not Too Late
01:04:20 Future Outlook and Closing Thoughts
01:04:33 Time Capsule to 2045: Future of Compute and Abundance
01:07:07 Time Capsule to 2005: More Problems Will Emerge
By swyx + Alessio
