Full show notes for this episode can be found here: https://www.96layers.ai/p/will-ai-ever-become-a-person
Have you ever considered what it truly means to be a person? I don't mean biologically, but from a philosophical standpoint: what really defines personhood? Is a person someone who has common sense and can think and reason at a high level? Could a person be defined by having a distinct, consistent personality, or is personhood rooted in social interactions, like being accountable to others?
As ChatGPT and other large language models have continued to advance, some have asked whether these new AI systems might be considered persons. Earlier this year, the Los Angeles Times published an article titled "Is it time to start considering personhood rights for AI chatbots?" And even if the answer is no for current AI systems, might we reach a point where we're forced to recognize an AI as a person in its own right?
To help answer these questions, I spoke with Jake Browning, a visiting scientist at New York University's computer science department. Jake received his PhD in philosophy from The New School and has written extensively on the philosophy of artificial intelligence and large language models. I found Jake's ideas on AI personhood thought-provoking, and I think you will too.