Machine Learning Street Talk (MLST)

#90 - Prof. DAVID CHALMERS - Consciousness in LLMs [Special Edition]



Support us! https://www.patreon.com/mlst

David Chalmers is a professor of philosophy and neural science at New York University, and an honorary professor of philosophy at the Australian National University. He is the co-director of the Center for Mind, Brain, and Consciousness, as well as the PhilPapers Foundation. His research focuses on the philosophy of mind, especially consciousness, and its connection to fields such as cognitive science, physics, and technology. He also investigates areas such as the philosophy of language, metaphysics, and epistemology. With his impressive breadth of knowledge and experience, David Chalmers is a leader in the philosophical community.


The central challenge for consciousness studies is to explain how something immaterial, subjective, and personal can arise out of something material, objective, and impersonal. Thomas Nagel illustrates this with the example of a bat: its sensory experience is so different from ours that we struggle to imagine what it is like to be one, and Nagel argues that this subjective character of experience resists capture by any purely objective, physical description. The point is pressed further with philosophical zombies, hypothetical beings that are physically and behaviorally indistinguishable from conscious humans yet lack conscious experience; their conceivability bears directly on the Hard Problem of Consciousness, the question of why and how physical processes in the brain give rise to subjective experience at all. Searle's Chinese Room Argument offers another angle, suggesting that mere symbol manipulation may be insufficient to produce the subjective, coherent experience we call consciousness. Despite decades of debate, the Hard Problem remains unsolved. Against this background, Chalmers has been developing a functional approach to assessing whether large language models are, or could become, conscious.


Filmed at #neurips22


Discord: https://discord.gg/aNPkGUQtc5

YT: https://youtu.be/T7aIxncLuWk


TOC:

[00:00:00] Introduction

[00:00:40] LLM consciousness pitch

[00:06:33] Philosophical Zombies

[00:09:26] The hard problem of consciousness

[00:11:40] Nagel's bat and intelligibility

[00:21:04] LLM intro clip from NeurIPS

[00:22:55] Connor Leahy on self-awareness in LLMs

[00:23:30] Sneak peek from unreleased show - could consciousness be a submodule?

[00:33:44] SeppH

[00:36:15] Tim interviews David at NeurIPS (functionalism / panpsychism / Searle)

[00:45:20] Peter Hase interviews Chalmers (focus on interpretability/safety)


Panel:

Dr. Tim Scarfe

Dr. Keith Duggar


Contact David:

https://mobile.twitter.com/davidchalmers42

https://consc.net/


References:


Could a Large Language Model Be Conscious? [Chalmers NeurIPS22 talk] 

https://nips.cc/media/neurips-2022/Slides/55867.pdf


What Is It Like to Be a Bat? [Nagel]

https://warwick.ac.uk/fac/cross_fac/iatl/study/ugmodules/humananimalstudies/lectures/32/nagel_bat.pdf


Zombies

https://plato.stanford.edu/entries/zombies/


Zombies on the Web [Chalmers]

https://consc.net/zombies-on-the-web/


The Hard Problem of Consciousness [Chalmers]

https://psycnet.apa.org/record/2007-00485-017


David Chalmers, "Are Large Language Models Sentient?" [NYU talk, same as at NeurIPS]

https://www.youtube.com/watch?v=-BcuCmf00_Y
