(See timestamps below)
CONTACT INFO:
SEE MORE OF ME:
- Twitter:
- YouTube:
EPISODE LINKS:
TIMESTAMPS:
(00:00) - Introduction
(03:16) - Recursive self-improvement, how long until superintelligence
(10:50) - What can be learned in the digital realm
(14:21) - How fast can it learn in the real world
(18:34) - Can AGI become better than us?
(22:54) - Complex enough environment to create superintelligence?
(29:10) - Can AGI Thomas take over the world?
(37:40) - Is superintelligence irrelevant for safety?
(41:38) - Existential risk from AI?
(48:09) - How to decrease the chance of a bad outcome?
(49:08) - Regulations
(53:19) - ChatGPT and the best current models
(59:57) - Solution to the treacherous turn?
(1:05:01) - AGI becomes religious?
(1:11:03) - Starting point of the intelligence explosion?
(1:16:49) - OpenAI Alignment approach blog post
(1:18:29) - Is open source bad for safety?
(1:24:49) - How to contact me
By Alex van der Meer