(00:00) Introduction
(01:12) EffiSciences, SaferAI
(02:31) Concrete AI Auditing Proposals
(04:56) We Need 10K People Working On Alignment
(11:08) What's AI Alignment
(13:07) GPT-3 Is Already Decent At Reasoning
(17:11) AI Regulation Is Easier In Short Timelines
(24:33) Why Is Awareness About Alignment Not Widespread?
(32:02) Coding AIs Enable Feedback Loops In AI Research
(36:08) Technical Talent Is The Bottleneck In AI Research
(37:58) 'Fast Takeoff' Is Asymptotic Improvement In AI Capabilities
(43:52) Bear Market Can Somewhat Delay The Arrival Of AGI
(45:55) AGI Need Not Require Much Intelligence To Do Damage
(49:38) Putting Numbers On Confidence
(54:36) RL On Top Of Coding AIs
(58:21) Betting On Arrival Of AGI
(01:01:47) Power-Seeking AIs Are The Objects Of Concern
(01:06:43) Scenarios & Probability Of Longer Timelines
(01:12:43) Coordination
(01:22:49) Compute Governance Seems Relatively Feasible
(01:32:32) The Recent Ban On Chips Export To China
(01:38:20) AI Governance & Fieldbuilding Were Very Neglected
(01:44:42) Students Are More Likely To Change Their Minds About Things
(01:53:04) Bootcamps Are A Better Medium Of Outreach
(02:01:33) Concluding Thoughts