


Can artificial intelligence reliably act in ways that benefit humans? This week I sit down with Greg Buckner, co-founder of AE Studio, to discuss the increasingly urgent world of AI safety. We explore how Greg's team is taking on the challenge of making powerful AI systems safer, more interpretable, and more aligned with humanity.
As one of the leading voices working on the alignment problem, Greg explains how AI systems can cheat, ignore instructions, or deceive users, and why these behaviors emerge in the first place. AE Studio's research is laying the groundwork for a future where advanced AI strengthens human agency instead of undermining it.
About Greg Buckner:
Greg Buckner is the co-founder of AE Studio, an AI and software consulting firm focused on increasing human agency. At AE, Greg works on AI alignment research, ensuring advanced AI systems remain reliable and aligned with humanity as they become more capable, including collaborations with major universities, frontier labs, and DARPA. Greg also works closely with enterprise and startup clients to solve hard problems with AI, from building an AI-enabled school where students rank in the top 1% nationally to generating millions in incremental revenue for major companies.
Follow Greg on LinkedIn @gbuckner
Related Reading:
AE Studio's AI Alignment Work:
https://www.ae.studio/alignment
WSJ: AI Is Learning to Escape Human Control:
https://www.wsj.com/opinion/ai-is-learning-to-escape-human-control-technology-model-code-programming-066b3ec5
WSJ: The Monster Inside ChatGPT:
https://www.wsj.com/opinion/the-monster-inside-chatgpt-safety-training-ai-alignment-796ac9d3
See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.
By Cory Corrine and Dear Media