Dr. Peter Park is an AI Existential Safety Postdoctoral Fellow working with Dr. Max Tegmark at MIT. Together with Harry Luk and one other cofounder, he founded StakeOut.AI, a non-profit focused on making AI go well for humans.
00:54 - Intro
03:15 - Dr. Park, x-risk, and AGI
08:55 - StakeOut.AI
12:05 - Governance scorecard
19:34 - Hollywood webinar
22:02 - Regulations.gov comments
23:48 - Open letters
26:15 - EU AI Act
35:07 - Effective accelerationism
40:50 - Divide and conquer dynamics
45:40 - AI "art"
53:09 - Outro
Links to all articles and papers mentioned throughout the episode can be found below, in order of appearance.