This is an interview with Joel Predd, a senior engineer at the RAND Corporation and co-author of RAND's work on "five hard national security problems from AGI."
In this conversation, Joel lays out a sober frame for leaders: treat AGI as technically credible but deeply uncertain; assume it will be transformational if it arrives; and recognize that the pace of progress is outstripping our capacity for governance.
This is the fourth installment of our "US-China AGI Relations" series, where we explore pathways to international AGI cooperation while avoiding conflict and arms races.
This episode refers to the following essays and resources:
Artificial General Intelligence's Five Hard National Security Problems: https://www.rand.org/pubs/perspectives/PEA3691-4.html?
Types of AI Disasters – Uniting and Dividing: https://danfaggella.com/disaster/
Listen to this episode on The Trajectory Podcast: https://podcasts.apple.com/us/podcast/the-trajectory/id1739255954
Watch the full episode on YouTube: https://youtu.be/Ojg9l5q-gao
See the full article from this episode: https://danfaggella.com/predd1
…
There are three main questions we cover here on the Trajectory:
1. Who are the power players in AGI and what are their incentives?
2. What kind of posthuman future are we moving towards, or should we be moving towards?
3. What should we do about it?
If this sounds like it's up your alley, be sure to stick around and connect:
-- Blog: danfaggella.com/trajectory
-- X: x.com/danfaggella
-- LinkedIn: linkedin.com/in/danfaggella
-- Newsletter: bit.ly/TrajectoryTw
-- YouTube: https://www.youtube.com/@trajectoryai
By Daniel Faggella