
We don't really know how AIs like ChatGPT work... which makes it all the more chilling that they're now leading people down rabbit holes of delusion, actively spreading misinformation, and becoming sycophantic romantic partners. Harvard computer science professor Jonathan Zittrain joins Offline to explain why these large language models lie to us, what we lose by anthropomorphizing them, and how they exploit the dissonance between what we want and what we think we should want.
For a closed-captioned version of this episode, click here. For a transcript of this episode, please email [email protected] and include the name of the podcast.
4.7 · 2,092 ratings