
In episode 72 of The Gradient Podcast, Daniel Bashir speaks to Professor Scott Aaronson.
Scott is the Schlumberger Centennial Chair of Computer Science at the University of Texas at Austin and director of its Quantum Information Center. His research interests focus on the capabilities and limits of quantum computers and computational complexity theory more broadly. He has recently been on leave to work at OpenAI, where he is researching theoretical foundations of AI safety.
Have suggestions for future podcast guests (or other feedback)? Let us know here or reach us at [email protected]
Subscribe to The Gradient Podcast: Apple Podcasts | Spotify | Pocket Casts | RSS
Follow The Gradient on Twitter
Outline:
* (00:00) Intro
* (01:45) Scott’s background
* (02:50) Starting grad school in AI, transitioning to quantum computing and the AI / quantum computing intersection
* (05:30) Where quantum computers can give us exponential speedups, simulation overhead, Grover’s algorithm
* (10:50) Overselling of quantum computing applied to AI, Scott’s analysis of quantum machine learning
* (18:45) ML problems that involve quantum mechanics and Scott’s work
* (21:50) Scott’s recent work at OpenAI
* (22:30) Why Scott was skeptical of AI alignment work early on
* (26:30) Unexpected improvements in modern AI and Scott’s belief update
* (32:30) Preliminary Analysis of DALL-E 2 (Marcus & Davis)
* (34:15) Watermarking GPT outputs
* (41:00) Motivations for watermarking and language model detection
* (45:00) Ways around watermarking
* (46:40) Other aspects of Scott’s experience with OpenAI, theoretical problems
* (49:10) Thoughts on definitions for humanistic concepts in AI
* (58:45) Scott’s “reform AI alignment” stance and Eliezer Yudkowsky’s recent comments (+ Daniel mispronounces Eliezer’s name), the orthogonality thesis, cases for stopping scaling
* (1:08:45) Outro
Links:
* Scott’s blog
* AI-related work
* Quantum Machine Learning Algorithms: Read the Fine Print
* A very preliminary analysis of DALL-E 2 w/ Marcus and Davis
* New AI classifier for indicating AI-written text
* Watermarking GPT Outputs
* Writing
* Should GPT exist?
* AI Safety Lecture
* Why I’m not terrified of AI
