


In July, Google put software engineer Blake Lemoine on administrative leave after he claimed that the Google chatbot system he was working with had become aware of its own existence. Google dismissed his claims and denied that the application, called LaMDA (Language Model for Dialogue Applications), was sentient. We speak with Dr. Karina Vold, assistant professor at the University of Toronto's Institute for the History and Philosophy of Science and Technology, about the feasibility of sentient artificial intelligence.
By WNYC and PRX
