

In July, Google put software engineer Blake Lemoine on administrative leave after he claimed that the Google chatbot system he was working with had become aware of its own existence. Google dismissed his claims and denied that the application, called LaMDA (Language Model for Dialogue Applications), was sentient. We speak with Dr. Karina Vold, assistant professor at the University of Toronto’s Institute for the History and Philosophy of Science and Technology, about the feasibility of sentient artificial intelligence.
By WNYC and PRX
4.6 · 1,414 ratings