The recent AI 2027 report sparked widespread discussion with its stark warnings about the near-term risks of unaligned AI.
Authors @Daniel Kokotajlo (former OpenAI researcher now focused full-time on alignment through his nonprofit, @AI Futures, and one of TIME’s 100 most influential people in AI) and @Thomas Larsen joined the show to unpack their findings.
We talk through the key takeaways from the report, its policy implications, and what they believe it will take to build safer, more aligned models.
(0:00) Intro
(1:15) Overview of AI 2027
(2:32) AI Development Timeline
(4:10) Race and Slowdown Branches
(12:52) US vs China
(18:09) Potential AI Misalignment
(31:06) Getting Serious About the Threat of AI
(47:23) Predictions for AI Development by 2027
(48:33) Public and Government Reactions to AI Concerns
(49:27) Policy Recommendations for AI Safety
(52:22) Diverging Views on AI Alignment Timelines
(1:01:30) The Role of Public Awareness in AI Safety
(1:02:38) Reflections on Insider vs. Outsider Strategies
(1:10:53) Future Research and Scenario Planning
(1:14:01) Best and Worst Case Outcomes for AI
(1:17:02) Final Thoughts and Hopes for the Future
With your co-hosts:
@jacobeffron
- Partner at Redpoint, Former PM Flatiron Health
@patrickachase
- Partner at Redpoint, Former ML Engineer LinkedIn
@ericabrescia
- Former COO GitHub, Founder Bitnami (acq’d by VMware)
@jordan_segall
- Partner at Redpoint