Machine Learning Street Talk (MLST)

AI Alignment & AGI Fire Alarm - Connor Leahy

This week Dr. Tim Scarfe, Alex Stenlake and Yannic Kilcher speak with AGI and AI alignment specialist Connor Leahy, a machine learning engineer at Aleph Alpha and founder of EleutherAI.


Connor believes that AI alignment is philosophy with a deadline and that we are on the precipice: the stakes are astronomical. AI matters, and by default it will go wrong. Connor thinks the singularity, or intelligence explosion, is near. He argues that AGI is like climate change but worse: harder problems, a shorter deadline, and worse consequences for the future. These problems are hard, and nobody knows what to do about them.


00:00:00 Introduction to AI alignment and AGI fire alarm 

00:15:16 Main Show Intro 

00:18:38 Different schools of thought on AI safety 

00:24:03 What is intelligence? 

00:25:48 AI Alignment 

00:27:39 Humans don't have a coherent utility function 

00:28:13 Newcomb's paradox and advanced decision problems 

00:34:01 Incentives and behavioural economics 

00:37:19 Prisoner's dilemma 

00:40:24 Ayn Rand and game theory in politics and business 

00:44:04 Instrumental convergence and orthogonality thesis 

00:46:14 Utility functions and the Stop button problem 

00:55:24 AI corrigibility - self alignment 

00:56:16 Decision theory and stability / wireheading / robust delegation 

00:59:30 Stop button problem 

01:00:40 Making the world a better place 

01:03:43 Is intelligence a search problem? 

01:04:39 Mesa optimisation / humans are misaligned AI 

01:06:04 Inner vs outer alignment / faulty reward functions 

01:07:31 Large corporations are intelligent and have no stop function 

01:10:21 Dutch booking / what is rationality / decision theory 

01:16:32 Understanding very powerful AIs 

01:18:03 Kolmogorov complexity 

01:19:52 GPT-3 - is it intelligent, are humans even intelligent? 

01:28:40 Scaling hypothesis 

01:29:30 Connor thought DL was dead in 2017 

01:37:54 Why is GPT-3 as intelligent as a human? 

01:44:43 Jeff Hawkins on intelligence as compression and the great lookup table 

01:50:28 AI ethics related to AI alignment? 

01:53:26 Interpretability 

01:56:27 Regulation 

01:57:54 Intelligence explosion 



Discord: https://discord.com/invite/vtRgjbM

EleutherAI: https://www.eleuther.ai

Twitter: https://twitter.com/npcollapse

LinkedIn: https://www.linkedin.com/in/connor-j-leahy/
