
Crossposted from my Substack.
I spent the weekend at Lighthaven attending the Eleos conference. In this post, I share thoughts and updates as I reflect on the talks, papers, and discussions, along with some of my own takes, since I haven't written about this topic before.
I divide my thoughts into three categories: (1) philosophy, (2) legal/social, and (3) technical, even though there are unavoidable overlaps. I also share the titles of relevant papers that were either mentioned in presentations or recommended to me in conversations.
(1) Philosophy
The philosophy world is hedging about AI consciousness. What I mean is that even in cases where it's clearly useful to apply the intentional stance, philosophy people have a tendency to avoid defining LLM mentality in intentional terms and resist making any bold claims that could be taken as saying that LLMs are conscious. I am not particularly surprised by this, as I have also offered a pragmatic framework for applying the intentional stance to LLMs and have avoided arguing for anything beyond that. David Chalmers presents a similar picture in a recent paper. Relatedly, it's really worth asking what it is that we apply the intentional stance to. Is [...]
---
Outline:
(00:45) (1) Philosophy
(03:41) (2) Legal/Social
(05:25) (3) Technical
---
Narrated by TYPE III AUDIO.
