Machine Learning Street Talk (MLST)

(Music Removed) #90 - Prof. DAVID CHALMERS - Consciousness in LLMs [Special Edition]

12.19.2022 - By Machine Learning Street Talk (MLST)


Support us! https://www.patreon.com/mlst

(On the main version we released, the music was a tiny bit too loud in places, and some pieces had percussion which was a bit distracting -- here is a version with all music removed, so you have the option!)

David Chalmers is a professor of philosophy and neural science at New York University, and an honorary professor of philosophy at the Australian National University. He is the co-director of the Center for Mind, Brain, and Consciousness, as well as the PhilPapers Foundation. His research focuses on the philosophy of mind, especially consciousness, and its connection to fields such as cognitive science, physics, and technology. He also investigates areas such as the philosophy of language, metaphysics, and epistemology. With his impressive breadth of knowledge and experience, David Chalmers is a leader in the philosophical community.

The central challenge for consciousness studies is to explain how something immaterial, subjective, and personal can arise out of something material, objective, and impersonal. Thomas Nagel illustrates this with the example of a bat, whose sensory experience is so different from ours that it is difficult, perhaps impossible, to imagine what it is like to be one; on Nagel's view, the subjective nature of experience puts the mind-body problem beyond objective explanation. The idea is explored further with philosophical zombies -- beings physically and behaviorally indistinguishable from conscious humans yet lacking conscious experience -- which bear on the Hard Problem of Consciousness: explaining how and why subjective experience arises from neurophysiological activity. Searle's Chinese Room argument is also discussed as a thought experiment for why computation alone may be insufficient to produce the subjective, coherent experience we call consciousness. Despite much debate, the Hard Problem remains unsolved. Chalmers has been developing a functional approach to assessing whether large language models are, or could be, conscious.

Filmed at #neurips22

Discord: https://discord.gg/aNPkGUQtc5

Pod: https://anchor.fm/machinelearningstreettalk/episodes/90---Prof--DAVID-CHALMERS---Slightly-Conscious-LLMs-e1sej50

TOC:

[00:00:00] Introduction

[00:00:40] LLMs consciousness pitch

[00:06:33] Philosophical Zombies

[00:09:26] The hard problem of consciousness

[00:11:40] Nagel's bat and intelligibility

[00:21:04] LLM intro clip from NeurIPS

[00:22:55] Connor Leahy on self-awareness in LLMs

[00:23:30] Sneak peek from unreleased show - could consciousness be a submodule?

[00:33:44] SeppH

[00:36:15] Tim interviews David at NeurIPS (functionalism / panpsychism / Searle)

[00:45:20] Peter Hase interviews Chalmers (focus on interpretability/safety)

Panel:

Dr. Tim Scarfe

Dr. Keith Duggar

Contact David:

https://mobile.twitter.com/davidchalmers42

https://consc.net/

References:

Could a Large Language Model Be Conscious? [Chalmers NeurIPS22 talk]

https://nips.cc/media/neurips-2022/Slides/55867.pdf

What Is It Like to Be a Bat? [Nagel]

https://warwick.ac.uk/fac/cross_fac/iatl/study/ugmodules/humananimalstudies/lectures/32/nagel_bat.pdf

Zombies

https://plato.stanford.edu/entries/zombies/

zombies on the web [Chalmers]

https://consc.net/zombies-on-the-web/

The hard problem of consciousness [Chalmers]

https://psycnet.apa.org/record/2007-00485-017

David Chalmers, "Are Large Language Models Sentient?" [NYU talk, same as at NeurIPS]

https://www.youtube.com/watch?v=-BcuCmf00_Y
