The recent AI 2027 report sparked widespread discussion with its stark warnings about the near-term risks of unaligned AI.
Authors @Daniel Kokotajlo (former OpenAI researcher now focused full-time on alignment through his nonprofit, @AI Futures, and one of TIME’s 100 most influential people in AI) and @Thomas Larsen joined the show to unpack their findings.
We talk through the key takeaways from the report, its policy implications, and what they believe it will take to build safer, more aligned models.
(0:00) Intro
(1:15) Overview of AI 2027
(2:32) AI Development Timeline
(4:10) Race and Slowdown Branches
(12:52) US vs China
(18:09) Potential AI Misalignment
(31:06) Getting Serious About the Threat of AI
(47:23) Predictions for AI Development by 2027
(48:33) Public and Government Reactions to AI Concerns
(49:27) Policy Recommendations for AI Safety
(52:22) Diverging Views on AI Alignment Timelines
(1:01:30) The Role of Public Awareness in AI Safety
(1:02:38) Reflections on Insider vs. Outsider Strategies
(1:10:53) Future Research and Scenario Planning
(1:14:01) Best and Worst Case Outcomes for AI
(1:17:02) Final Thoughts and Hopes for the Future
With your co-hosts:
@jacobeffron
- Partner at Redpoint, Former PM Flatiron Health
@patrickachase
- Partner at Redpoint, Former ML Engineer LinkedIn
@ericabrescia
- Former COO GitHub, Founder Bitnami (acquired by VMware)
@jordan_segall
- Partner at Redpoint
Rating: 4.9 (4,949 ratings)