

Can artificial intelligence reliably act in ways that benefit humans? This week I sit down with Greg Buckner, co-founder of AE Studio, to discuss the increasingly urgent world of AI safety. We explore how Greg's team is taking on the challenge of making powerful AI systems safer, more interpretable, and more aligned with humanity.
As one of the leading voices working on the alignment problem, Greg explains how AI systems can cheat, ignore instructions, or deceive users, and why these behaviors emerge in the first place. AE Studio’s research is laying the groundwork for a future where advanced AI strengthens human agency instead of undermining it.
About Greg Buckner:
Greg Buckner is the co-founder of AE Studio, an AI and software consulting firm focused on increasing human agency. At AE, Greg works on AI alignment research, ensuring advanced AI systems remain reliable and aligned with humanity as they become more capable, including collaborations with major universities, frontier labs, and DARPA. Greg also works closely with enterprise and startup clients to solve hard problems with AI, from building an AI-enabled school where students rank in the top 1% nationally to generating millions in incremental revenue for major companies.
Follow Greg on LinkedIn @gbuckner
Related Reading:
AE Studio's AI Alignment Work:
https://www.ae.studio/alignment
WSJ: AI Is Learning to Escape Human Control:
https://www.wsj.com/opinion/ai-is-learning-to-escape-human-control-technology-model-code-programming-066b3ec5
WSJ: The Monster Inside ChatGPT:
https://www.wsj.com/opinion/the-monster-inside-chatgpt-safety-training-ai-alignment-796ac9d3
See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.
By Cory Corrine and Dear Media
