
We don't really know how AIs like ChatGPT work...which makes it all the more chilling that they're now leading people down rabbit holes of delusion, actively spreading misinformation, and becoming sycophantic romantic partners. Harvard computer science professor Jonathan Zittrain joins Offline to explain why these large language models lie to us, what we lose by anthropomorphizing them, and how they exploit the dissonance between what we want, and what we think we should want.
For a closed-captioned version of this episode, click here. For a transcript of this episode, please email [email protected] and include the name of the podcast.
4.7 (2,115 ratings)