The Sentience Institute Podcast
By Sentience Institute
The podcast currently has 23 episodes available.
“I call this the emotional alignment design policy. So the idea is that corporations, if they create sentient machines, should create them so that it's obvious to users that they're sentient, and so they evoke appropriate emotional reactions in users. So you don't create a sentient machine and then put it in a bland box that no one will have emotional reactions to. And conversely, don't create a non-sentient machine that people will attach to so much, and think it's sentient, that they'd be willing to make excessive sacrifices for this thing that isn't really sentient.”
Why should AI systems be designed so as not to confuse users about their moral status? What would make an AI system's sentience or moral standing clear? Are there downsides to treating an AI as non-sentient even if it isn't sentient? What happens when theories of consciousness disagree about AI consciousness? Have the developments in large language models over the last few years come faster or slower than Eric expected? If we do see sentience in AI, where does Eric think we will see it first?
Eric Schwitzgebel is a professor of philosophy at the University of California, Riverside, specializing in philosophy of mind and moral psychology. His books include Describing Inner Experience? Proponent Meets Skeptic (with Russell T. Hurlburt), Perplexities of Consciousness, A Theory of Jerks and Other Philosophical Misadventures, and most recently The Weirdness of the World. He blogs at The Splintered Mind.
Resources discussed in the episode are available at https://www.sentienceinstitute.org/podcast
“Ultimately, if you want more human-like systems that exhibit more human-like intelligence, you would want them to actually learn like humans do, by interacting with the world: interactive learning, not just passive learning. You want something that's more active, where the model is going to actually test out some hypotheses and learn from the feedback it's getting from the world about these hypotheses, in the way children do. It should learn all the time. If you observe young babies and toddlers, they are constantly experimenting. They're like little scientists: you see babies grabbing their feet, testing whether that's part of their body or not, and gradually, very quickly, learning all these things. Language models don't do that. They don't explore in this way. They don't have the capacity for interaction in this way.”
How do large language models work? What are the dangers of overclaiming and underclaiming the capabilities of large language models? What are some of the most important cognitive capacities to understand in large language models? Are large language models showing sparks of artificial general intelligence? Do language models really understand language?
Raphaël Millière is the 2020 Robert A. Burt Presidential Scholar in Society and Neuroscience in the Center for Science and Society and a Lecturer in the Philosophy Department at Columbia University. He completed his DPhil (PhD) in philosophy at the University of Oxford, where he focused on self-consciousness. His interests lie primarily in the philosophy of artificial intelligence and cognitive science. He is particularly interested in assessing the capacities and limitations of deep artificial neural networks and establishing fair and meaningful comparisons with human cognition in various domains, including language understanding, reasoning, and planning.
Resources discussed in the episode are available at https://www.sentienceinstitute.org/podcast
“Speciesism being socially learned is probably our most dominant theory of why we think we're getting the results that we're getting. But to be very clear, this is super early research; we have a lot more work to do. And it's actually not just in the context of speciesism that we're finding this stuff. So basically we've run some studies showing that while adults will prioritize humans over even very large numbers of animals in sort of tragic trade-offs, children are much more likely to prioritize humans' and animals' lives similarly. So an adult will save one person over a hundred dogs or pigs, whereas children will save, I think it was two dogs or six pigs, over one person. And these were children who were about five to ten years old. So often when you look at biases in development, something like minimal group bias, that peaks quite young.”
What does our understanding of human-animal interaction imply for human-robot interaction? Is speciesism socially learned? Does expanding the moral circle dilute it? Why is there a correlation between naturalness and acceptability? What are some potential interventions for moral circle expansion, and spillover from and to animal advocacy?
Matti Wilks is a lecturer (assistant professor) in psychology at the University of Edinburgh. She uses approaches from social and developmental psychology to explore barriers to prosocial and ethical behavior—right now she is interested in factors that shape how we morally value others, the motivations of unusually altruistic groups, why we prefer natural things, and our attitudes towards cultured meat. Matti completed her PhD in developmental psychology at the University of Queensland, Australia, and was a postdoc at Princeton and Yale Universities.
Resources discussed in the episode are available at https://www.sentienceinstitute.org/podcast
“Robot rights are not the same thing as a set of human rights. Human rights are very specific to a singular species, the human being. Robots may have some overlapping powers, claims, privileges, or immunities that would need to be recognized by human beings, but their groupings or sets of rights will perhaps be very different.”
Can and should robots and AI have rights? What’s the difference between robots and AI? Should we grant robots rights even if they aren’t sentient? What might robot rights look like in practice? What philosophies and other ways of thinking are we not exploring enough? What might human-robot interactions look like in the future? What can we learn from science fiction? Can and should we be trying to actively get others to think of robots in a more positive light?
David J. Gunkel is an award-winning educator, scholar, and author, specializing in the philosophy and ethics of emerging technology. He is the author of over 90 scholarly articles and book chapters and has published twelve internationally recognized books, including The Machine Question: Critical Perspectives on AI, Robots, and Ethics (MIT Press 2012), Of Remixology: Ethics and Aesthetics After Remix (MIT Press 2016), and Robot Rights (MIT Press 2018). He currently holds the position of Distinguished Teaching Professor in the Department of Communication at Northern Illinois University (USA).
Resources discussed in the episode are available at https://www.sentienceinstitute.org/podcast
“And then you're like, actually, I can't know what it's like to be a bat—again, the problem of other minds, right? There's this fundamental divide between a human mind and a bat, but at least a bat's a mammal. What is it like to be an AI? I have no idea. So I think [mind perception] could make us less sympathetic to them in some sense because it's—I don't know, they're a circuit board, there are these algorithms, and so who knows? I can subjugate them now under the heel of human desire because they're not like me.”
What is mind perception? What do we know about mind perception of AI/robots? Why do people like to use AI for some decisions but not moral decisions? Why would people rather give up hundreds of hospital beds than let AI make moral decisions?
Kurt Gray is a Professor at the University of North Carolina at Chapel Hill, where he directs the Deepest Beliefs Lab and the Center for the Science of Moral Understanding. He studies morality, politics, religion, perceptions of AI, and how best to bridge divides.
Resources discussed in the episode are available at https://www.sentienceinstitute.org/podcast
“And from an applied ethics perspective, I think the most important thing is: if we want to minimize suffering in the world, and if we want to minimize animal suffering, we should always err on the side of caution; we should always be on the safe side.”
Should we advocate for a moratorium on the development of artificial sentience? What might that look like, and what would be the challenges?
Thomas Metzinger was a full professor of theoretical philosophy at the Johannes Gutenberg University Mainz until 2022 and is now a professor emeritus. Before that, he was president of the German Cognitive Science Society from 2005 to 2007 and president of the Association for the Scientific Study of Consciousness from 2009 to 2011, and he has been an adjunct fellow at the Frankfurt Institute for Advanced Studies since 2011. He is also a co-founder of the German Effective Altruism Foundation, president of the Barbara Wengeler Foundation, and on the advisory board of the Giordano Bruno Foundation. In 2009, he published a popular book, The Ego Tunnel: The Science of the Mind and the Myth of the Self, which addresses a wider audience and discusses the ethical, cultural, and social consequences of consciousness research. From 2018 to 2020, Metzinger worked as a member of the European Commission's High-Level Expert Group on Artificial Intelligence.
Resources discussed in the episode are available at https://www.sentienceinstitute.org/podcast
“We think that the most important thing right now is capacity building. We're not so much focused on having impact now or in the next year; we're thinking about the long term and the very big picture… Now, what exactly does capacity building mean? It can simply mean getting more people involved… I would frame it more in terms of building a healthy community that's stable in the long term… And one aspect that's just as important as the movement building is that we need to improve our knowledge of how to best reduce suffering. You could call it ‘wisdom building’… And CRS aims to contribute to [both] through our research… Some people just naturally tend to be more inclined to explore a lot of different topics… Others have maybe more of a tendency to dive into something more specific and dig up a lot of sources and go into detail and write a comprehensive report, and I think both of these can be very valuable… What matters is just that overall your work is contributing to progress on… the most important questions of our time.”
There are many different ways that we can reduce suffering or have other forms of positive impact. But how can we increase our confidence about which actions are most cost-effective? And what can people do now that seems promising?
Tobias Baumann is a co-founder of the Center for Reducing Suffering, a new longtermist research organisation focused on figuring out how we can best reduce severe suffering, taking into account all sentient beings.
Resources discussed in the episode are available at https://www.sentienceinstitute.org/podcast
“If some beings are excluded from moral consideration then the results are usually quite bad, as evidenced by many forms of both current and historical suffering… I would definitely say that those that don’t have any sort of political representation or power are at risk. That’s true for animals right now; it might be true for artificially sentient beings in the future… And yeah, I think that is a plausible priority. Another candidate would be to work on other broad factors to improve the future such as by trying to fix politics, which is obviously a very, very ambitious goal… [Another candidate would be] trying to shape transformative AI more directly. We’ve talked about the uncertainty there is regarding the development of artificial intelligence, but at least there’s a certain chance that people are right about this being a very crucial technology; and if so, shaping it in the right way is very important obviously.”
Expanding humanity’s moral circle to include farmed animals and other sentient beings is a promising strategy for reducing the risk of astronomical suffering in the long-term future. But are there other causes that we could focus on that might be better? And should reducing future suffering actually be our goal?
Tobias Baumann is a co-founder of the Center for Reducing Suffering, a new longtermist research organisation focused on figuring out how we can best reduce severe suffering, taking into account all sentient beings.
Resources discussed in the episode are available at https://www.sentienceinstitute.org/podcast
“We [Faunalytics] put out a lot of things in 2020. Some of the favorites that I [Jo] have, probably top of the list: I'm really excited about our animal product impact scales, where we did a lot of background research to figure out and estimate the impact of replacing various animal products with plant-based or cultivated alternatives. Apart from that, we've also done some research on people's beliefs about chickens and fish, which is intended as a starting point for a program of research so that we can look at the best ways to advocate for those smaller animals… [Rethink Priorities'] bigger projects within farmed animal advocacy include work on EU legislation, in particular a review of how much countries comply with EU animal welfare laws and what we can do to increase compliance. Jason Schukraft wrote many articles about topics like how the moral value of animals differs across species. There has been a review of shrimp farming. I [Saulius] finished an article in which I estimate global captive vertebrate numbers. And Abraham Rowe posted an article about insects raised for food and feed, which I think is a very important topic.”
There have been many new research posts relevant to animal advocacy in 2020. But which are the most important for animal advocates to pay close attention to? And what sorts of research should we prioritize in the future?
Jo Anderson is the Research Director at Faunalytics, a nonprofit that conducts, summarizes, and disseminates research relevant to animal advocacy. Saulius Šimčikas is a Senior Staff Researcher at Rethink Priorities, a nonprofit that conducts research relevant to farmed animal advocacy, wild animals, and several other cause areas associated with the effective altruism community.
Resources discussed in the episode are available at https://www.sentienceinstitute.org/podcast
“Why inner transformation, why these practices are also built into the model: unless we root out the root cause of the issue, which is disconnection, which is a lack of understanding that we are interrelated, and that therefore I have an inherent responsibility to show up in the world with kindness and compassion and to reduce the harm and the suffering that I cause in the world; unless we're able to do that, these problems are still going to exist. The issues of race relations still exist. How many years have people been fighting for this? The issues of homophobia, of racism, whatever it is, they still exist. Why do they still exist after so much work, after so much money has been poured into it, after so many lives have been lost, so many people have been beaten and spilled their blood and shed their tears for these issues? Because unless we address the underlying schisms within human consciousness, within us as individuals, it's still going to exist; it's still going to be there. Direct impact, indirect impact, I just want to see impact, and if you're someone who wants to make an impact, I want to hear from you.”
Animals are harmed on every continent in the world. But how can we support the advocates seeking to help them? And what sort of support is most needed?
Ajay Dahiya is the executive director of The Pollination Project, an organisation which funds and supports grassroots advocates and organisations working towards positive social change, such as helping animals.
Resources discussed in the episode are available at https://www.sentienceinstitute.org/podcast