The recent AI 2027 report sparked widespread discussion with its stark warnings about the near-term risks of unaligned AI.
Authors @Daniel Kokotajlo (former OpenAI researcher now focused full-time on alignment through his nonprofit, @AI Futures, and one of TIME’s 100 most influential people in AI) and @Thomas Larsen joined the show to unpack their findings.
We talk through the key takeaways from the report, its policy implications, and what they believe it will take to build safer, more aligned models.
(0:00) Intro
(1:15) Overview of AI 2027
(2:32) AI Development Timeline
(4:10) Race and Slowdown Branches
(12:52) US vs China
(18:09) Potential AI Misalignment
(31:06) Getting Serious About the Threat of AI
(47:23) Predictions for AI Development by 2027
(48:33) Public and Government Reactions to AI Concerns
(49:27) Policy Recommendations for AI Safety
(52:22) Diverging Views on AI Alignment Timelines
(1:01:30) The Role of Public Awareness in AI Safety
(1:02:38) Reflections on Insider vs. Outsider Strategies
(1:10:53) Future Research and Scenario Planning
(1:14:01) Best and Worst Case Outcomes for AI
(1:17:02) Final Thoughts and Hopes for the Future
With your co-hosts:
@jacobeffron
- Partner at Redpoint, Former PM at Flatiron Health
@patrickachase
- Partner at Redpoint, Former ML Engineer at LinkedIn
@ericabrescia
- Former COO of GitHub, Founder of Bitnami (acquired by VMware)
@jordan_segall
- Partner at Redpoint