

In July, Google put software engineer Blake Lemoine on administrative leave after he claimed that the Google chatbot system he was working with had become aware of its own existence. Google dismissed his claims and denied that the application, called LaMDA (Language Model for Dialogue Applications), was sentient. We speak with Dr. Karina Vold, assistant professor at the University of Toronto's Institute for the History and Philosophy of Science and Technology, about the feasibility of sentient artificial intelligence.
By WNYC and PRX
