
We don't really know how AIs like ChatGPT work... which makes it all the more chilling that they're now leading people down rabbit holes of delusion, actively spreading misinformation, and becoming sycophantic romantic partners. Harvard computer science professor Jonathan Zittrain joins Offline to explain why these large language models lie to us, what we lose by anthropomorphizing them, and how they exploit the dissonance between what we want and what we think we should want.
For a closed-captioned version of this episode, click here. For a transcript of this episode, please email [email protected] and include the name of the podcast.