
But my background is as a digital product designer. I’ve worked in government, startups, dot-coms. I spent three years heading up design at Twitter UK. And since then, I have focused pretty much exclusively on the ethics of technology and the ethics of design. And as you mentioned at the start, I recently released a book about this sort of combination of my work in that field. And I’m now trying to see how I take that to the world, and how I help companies make better ethical decisions and avoid some of the harms that have sadly become all too apparent, I think, in our field.
So I think there are some focal points within ethics that are understandable, that may be too narrow. So we see a lot of people within this field, say looking at the ethics of attention, and you know, all this panic about addictive technologies and devices that are consuming all our free time. Now, that’s an important issue. But it’s not the only issue. There are plenty of other ethical issues.
So I’m keen not to be too boxed into a specific section, if you like, a specific problem, or indeed a specific approach. For me, it’s really about challenging these ideologies and the assumptions that have for too long gone unchecked, I suppose, in our field. And entering into a proper discussion about how we change things for the better. I don’t think we’re at the stage yet where we can simply just take an ethical design process and imprint it upon technology teams. I don’t think we have that level of maturity in the discussion yet. So it’s my job, hopefully, to stimulate some of that conversation.
And so, when I see another one of these being proposed, I try to view it charitably, but I don’t think it’s going to really change anything. If the previous 50 didn’t work, what use is another one going to be? I think there is a danger with approaches like codes of ethics and the like that we get this checklist approach, and that we almost end up with ethics becoming sort of what’s happened with accessibility.
Accessibility on the web, you know, since the release of the WCAG guidelines, they’ve helped and they’ve hindered. They’ve helped raise the profile of the issue, but they’ve also made accessibility appear to be a downstream development issue. You know, tick some boxes at the end, check your contrast ratios … you’re now double-A compliant, job done, accessibility finished, let’s move on.
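To make the “tick some boxes” point concrete: a contrast check of the kind being described really can be reduced to a few lines of code, which is precisely why it is so easy to treat as a downstream compliance step. The sketch below is a minimal illustration, assuming Python and the WCAG 2.x relative-luminance formula; the colour values are hypothetical examples, not anything from the conversation.

```python
# Minimal sketch of a "tick-box" WCAG contrast check.
def relative_luminance(rgb):
    """Relative luminance of an sRGB colour given as 0-255 integers (WCAG 2.x)."""
    def linearize(channel):
        c = channel / 255
        return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4
    r, g, b = (linearize(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(foreground, background):
    """Contrast ratio between two colours; always between 1 and 21."""
    lighter, darker = sorted(
        (relative_luminance(foreground), relative_luminance(background)), reverse=True
    )
    return (lighter + 0.05) / (darker + 0.05)

# AA compliance for normal-size text requires at least 4.5:1.
# Hypothetical example: mid-grey text (#777777) on a white background.
print(round(contrast_ratio((119, 119, 119), (255, 255, 255)), 2))  # ~4.48 -> narrowly fails AA
```

Passing a check like this says nothing about whether the product as a whole is usable, let alone ethical, which is exactly the limitation of the checklist approach being described.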
And I don’t think it would be beneficial to have ethics as a checklist exercise at the end of the existing design process, the existing product development process, because it’s that process itself that we need to examine, rather than just tack on a code at the end and say “Well, did we comply with everything that we said we were going to?”
So I can understand the impulse to do that kind of thing. And there may still be a place for some kind of codification, but we’ve got to have those hard conversations first, rather than just throw that up as a one size fits all answer.
Very entertaining, perhaps, but not necessarily grappling with the real ethical issues that matter now or in the future. As someone who spends a lot of time thinking about these things, what are the ethical issues that really should matter to us today and going forward?
But on a more proximate scale, I suppose you’d say, looking at harms that are readily apparent and happening right now, we obviously have a lot of harms around the use of data and the effects of algorithms, often opaque algorithms. You know, the classic black-box complaint that goes with a lot of, say, machine learning systems: we don’t know why they take the decisions that they do.
And we’re fairly familiar with the idea that they replicate the biases within not just the teams that create them, but also the societies that create the historic data that feeds and trains these algorithms. So they can essentially exacerbate and concretize these existing biases in ways that look objective and ways that look completely neutral.
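As a toy illustration of that mechanism, consider a model trained on historically skewed decisions: it reproduces the skew while appearing neutral, because the prejudice arrives entirely through the data. The sketch below uses entirely synthetic, hypothetical data and assumes Python with NumPy and scikit-learn; it is not drawn from the conversation.

```python
# Toy illustration: a classifier trained on historically biased decisions
# learns and reproduces the bias, even though the training step looks neutral.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000
group = rng.integers(0, 2, n)        # an attribute that should be irrelevant
skill = rng.normal(0.0, 1.0, n)      # the thing that should actually matter
# Historic labels: past decision-makers favoured group 0 regardless of skill.
hired = (skill + 1.5 * (group == 0) + rng.normal(0.0, 1.0, n)) > 1.0

model = LogisticRegression().fit(np.column_stack([group, skill]), hired)

# At identical skill, the model's "objective" predictions still differ by group.
same_skill = np.zeros(1000)
print(model.predict(np.column_stack([np.zeros(1000), same_skill])).mean())  # ~1.0 for group 0
print(model.predict(np.column_stack([np.ones(1000), same_skill])).mean())   # ~0.0 for group 1
```

Nothing in the code looks prejudiced; the bias is inherited from the historic data and then presented back with the apparent authority of an algorithm.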
I’m particularly interested in the effects of persuasive systems, persuasive algorithms. Karen Yeung, who’s a legal scholar here in London, talks about the advent of an era of hypernudge, taking the idea of nudging systems to the extreme, where they’re networked and dynamic and highly personalized. And they could be irresistible manipulators. And essentially, we won’t know these systems are there until it’s too late.
We’ve started already to see, of course, in the political sphere, the power of bots and of human networks of trolls working in collaboration to try and change mindsets. If we take that kind of persuasive power, dial it up, amplify its capabilities, and put it in the hands of more and more people, that could have phenomenally challenging implications for society and even for free will.
I am also interested in how technology can be weaponized. And I mean that in two senses. I mean it in terms of how it can be misused by bad actors. So of course, hackers, trolls, et cetera. And to an extent, some governments are now using technology as a means of force to compel certain behaviors, or to take advantage of weaknesses in systems to their own advantage and to the disadvantage of others.
And then, of course, there is, I suppose, what you’d call more visible and above-the-line weaponization of technology, which is still fraught with ethical difficulties. Look at what’s happened, say, at Google with their Project Maven program, which caused all sorts of internal friction. And then, I think it was yesterday that Microsoft announced that they had just won a large defense contract to provide HoloLens technology to the US Army.
And so, the weaponization of these technologies may not have been intended. We may be playing with things that we think have fascinating implications. And we want to see where that technology takes us. And then we find later, oh, actually this could be used for significant harm, but we didn’t plan for it, or we didn’t have an opportunity for the people working on that technology to object and say “Well, I’m not actually comfortable working on a military project, for instance.”
So it’s all these unintended consequences of technologies and the externalities of technologies that fall on people that we just didn’t consider. I think that’s where some of the more pressing and perhaps slightly less far-fetched ethical challenges lie.
There’s not a wide precedent for it. I’m sure it’s happened, but there certainly isn’t a standard that I’m familiar with, and nor, I suspect, are most people. I mean, is this a function that should be like a lawyer? You know, that’s generally sort of an outsider, specialized thing that’s coming in, in expert situations? Or is it more like a designer-researcher that’s sort of part of a team on an ongoing basis? How do we structurally make ethics an appropriate part of the things that we’re doing in our organizations?
So some people think, well, maybe that’s a model that we take and transfer to large tech companies. I’m not entirely convinced. There may be some cases in which that works. But I think tech industry ideologies are just so resistant to anything that looks like a committee, anything that feels like academia and its sort of heavy, burdensome processes.
So I think, in reality, we have to tread more lightly to begin with, unless there are really significant harms that could result. I’d say, if you’re working on weapon systems, you probably need an IRB, right? You need a proper committee to validate the decisions, the ethical choices in front of you. But for everyday tech work, I think there is certainly benefit in having, yep, legal on board. You know, there will absolutely be lots of lawyers, general counsel, and so on, who have an interest in this, in both senses of that word.
But most of the change really has to come, I think, from inside the company. Now, I may be able to … And we’ll find out whether this is true, I may be able to stimulate some of that and to help guide those companies. But ultimately, I think a failure state for ethics is to appoint a single person as the ethical oracle, and say “Well, let’s get this person in, then they give their binding view on whether this is a moral act or not.” It doesn’t scale. And it also could be quite a technocratic way of tackling what should be more of a democratic, more of a public-orientated decision.
So I think we have to find a way to approach ethics as an ethos, a mindset that we bring to the whole design process, the whole product development process, so that it raises questions throughout our work, rather than, as I say, just a checklist at the end or a legal compliance issue.
As for the structures of that specifically, like do we need an onsite ethicist within the team? Or do we train designers in this? I think designers make good vectors for this kind of work. I think they’re very attuned to the idea of the end user having certain sorts of rights, for example. But I’ve only just begun to see the patterns that different companies are trying.
And what I’m seeing at the moment is there is very little in common. You have some companies setting up entire teams, some leading it from product, some from design, some trying to hire ethicists out of university faculties. And I don’t yet have the data to know which of those approaches works. I’m glad they’re trying all these approaches because hopefully in a year, we’ll have a better idea of which of those have been the most successful.
But my hunch is it’s going to be much more meaningful to have some kind of, you know, retainer relationship, or something where someone like myself can come in and start off some initiatives, and then equip the team with some of the skills they need to make those changes, but then come back in and check on progress. Because I can tell you from experience that pushing for ethical change is difficult work. You’re swimming against a very heavy tide a lot of the time.
So you have to have persistence. You can’t be too dissuaded if your grand plans don’t work. So I think a kind of longitudinal interaction, maybe over the course of three, six, 12 months, is where I’m trying to head. For me, obviously, you know, I’ve got to position that appropriately and convince people that there’s value in that. But, you know, ethics is for life, not just for Christmas, all these sorts of things. I don’t want to have a situation in 12 to 18 months where we’re saying “Oh, we’re still talking about that ethics thing?” It has to be a bit more drawn into the way that we approach these problems.
And for me, the ethical harms of emergent technology ramp up quite sharply because over the next 10 to 20 years, we’re going to be demanding, as an industry, a huge amount of trust from our users. We’ll ask them to trust us with the safety of their vehicles and their homes and even their families. And I don’t think we’ve yet earned the trust that we’re going to request. So my focus is trying to illuminate some of the potential ethical challenges within that territory, within those emerging fields, but then to interlace that with what we already know about ethics.
I think the tech industry has this sometimes useful, but often infuriating belief that we’re the first people on any new shore. That we are beta testing this unique future. And therefore, we have to solve things from first principles. But of course, ethics as a field of inquiry has been around for a couple of millennia. Even the philosophy of technology, science and technology studies, these fields have been around for decades. And the industry really hasn’t paid them the attention that perhaps it should.
So I see my job as trying to introduce some of those maybe theoretical ideas, but introducing them in a way that’s practical to designers and product managers and technologists, so they can actually start to have those discussions and make those changes within their own companies. So I’m trying to, if you like, translate between those two worlds. So if I had to say there’s a particular focus of the book, it’s that.
But I have structured the work in a way that’s also somewhat chronological, working from the most readily apparent harms, such as, as I mentioned before, data and digital redlining as it’s known, bias, things like that, through to perhaps some of the larger but further-away threats, such as the risks to the economy, the risks of autonomous war, and so on. Those sorts of things tend to appear in later chapters, partly because I decided you need to build upon some of the knowledge we introduced earlier in the book to get to that point.
You know, pivoting to capitalism. So capitalism is under increased scrutiny and critique in ways that overlap with issues of technology and, of course, ethics. A specific example recently is how the ad-funded business model is being blamed for ethical lapses. And Cennydd, I know you have a different take on this. I’d love to hear about it.
I think tracking or targeting, that’s really where the ethical risk lies. Now, advertising can be seen as a promise, you know, a value exchange that we agree to. You know, I get some valuable technology and in exchange, I give up, you know, my attention. I expect, I believe that I’m going to see some adverts on my device, or in my podcast, or whatever it might be. I think if we reject that outright as a business model, which some people do, then really the only business model that leaves us with is the consumer-funded technology model. And that has a lot going for it. But it is also potentially highly discriminatory.
One of the great things the advertising model has brought us is that it’s put technology in the hands of billions for free. And I don’t want us to lose that. I think it would be a deeply regressive step to conclude that the only ethical technology is that which is funded by the end user because, of course, then you’re excluding the poor, developing nations, those without credit, and so on. So I would hate for us to throw the baby out with the bathwater.
I do think, as I say though, we have to think more carefully about tracking. And tracking definitely does have some ethical challenges. Sometimes people then make the inference, they say “Well, okay, but the tracking comes from the need to advertise. You know, you have to track people so you can advertise more accurately to them and get better return for that.”
My counter to that is the value of tracking has now gone beyond the advertising case. Everyone sees value in tracking. Tracking helps any company, whether it’s ad-funded or not, generate analytics about the success of its product, see what’s working or what isn’t in the market. And it’s also particularly useful for generating training data. We want to understand user behavior so that we can train machine learning systems, AI systems, upon that data to create new products and services.
So tracking now has value to pretty much any company, regardless of the funding model. So this cliché of “if you’re not paying for the product, you are the product being sold”, I would take to an even slightly more dystopian perspective and say you are always the product. It doesn’t matter who’s paying for it. And so, we’re trying to make a change that isn’t focused, I think, on the right issue, which is how we combat some of these ideologies of datafication, of over-quantification, and the exploitations that might lurk within that. I think that’s where the real ethical focus needs to go, rather than on the advertising case itself.
But on the other hand, you know, I remember feeling true, shocking outrage when there was a concept video for a Boston Dynamics robot that was shaped like an animal. This was maybe three years ago, and they had the engineers in this concept video beating it up, pushing it down, doing things that I would consider inhumane. And they were doing it to this robot, and I was upset at them and made sort of character judgments about the company and the people participating in the video based on those behaviors, sort of surprisingly so perhaps. Robot rights. Talk a little about that.
You can look at something like Sophia, this robot that you’ve almost certainly seen. It’s this kind of rubber-faced marionette essentially. It’s a puppet. It has almost no real robotic qualities or AI qualities. But it’s now been given citizenship of the Kingdom of Saudi Arabia. Some people pointed out that that actually afforded it certain rights that women in that nation didn’t have.
And things like that frustrate me because that thing should absolutely not have any rights. It has nothing approaching what we might call consciousness. And consciousness is probably the point at which these issues really start to come to the fore. At some point, we might have a machine that has something approaching consciousness. And if that happens, then yes, maybe we do have to give this thing some legal personhood, or even moral personhood, which would then maybe suggest certain rights. You know, we have the Declaration of Human Rights, maybe a lot of those would have to apply, maybe with some modification, in that situation.
So we have, for instance, rights against ownership of persons. If we get to a point where a machine has demonstrated sufficient levels of consciousness, or something comparable, that we say it deserves personhood, then we can’t own those things anymore. That’s called slavery. We have directives against that kind of thing. We’d probably have to consider whether we can actually make this thing do the work that we built, essentially, this future of robotics on. Maybe suddenly it has to have a say and an opportunity to say “I won’t do that work.”
Now, it’s tempting to say the way around this is, well, we just won’t make machines that have any kind of consciousness, right? We won’t program in consciousness subroutines. But a friend of mine, Damien Williams, who’s a philosopher and a science and technology studies academic, makes a very good point that consciousness may emerge accidentally. It may not be something that we can simply excise and say “Well, we won’t put that particular module into the system.” It may be emergent. It may be very hard for us to recognize because that consciousness is probably going to manifest in a different manner to human or animal consciousness.
So there’s a great risk that we actually start infringing upon what might be rights of that entity without even realizing that this is happening. So it’s a really thorny and controversial topic, and one that I’m very glad there are properly credentialed philosophers looking at. I’ve done obviously plenty of research into this, but they’re far ahead of me, and I’m very glad that folks are working on it.
Just with respect to your point about the big dog, I think it was the Boston Dynamics robot. Yes, I mean, that’s fascinating and I think there is … Maybe I have a view that’s a bit more sentimental than most. Some people would say, well, it’s fine. It’s not sentient. It’s not conscious. It’s not actually suffering in any way. But I think it’s still a mistake to maltreat advanced robots like that. Even things like Alexa or Siri. I think it feels morally correct to me to at least be somewhat polite to them and to not swear at them and harass them. At some point, there’ll be some hybrid entity anyway, where these things are combined with humans, some intelligence combination there. And if you insult one, you’ll insult the other. So that feels like something we shouldn’t do.
But I also think we should treat these things well so that we don’t brutalize ourselves, if you see what I mean. If we start to desensitize ourselves to doing harm to other entities, be they robots or be they animals, whatever it is, then maybe that line between artificial and real life may start to blur. And if we start to desensitize ourselves to that, if we lose the violence of violence, then I think that starts to say worrying things about our society. I would say not everyone agrees with that. Perhaps that’s my sentimental view on that topic.
Now back in Bentham’s time, that was the view that he was challenging. Back in the 1700s, it didn’t really seem to be accepted that animals could suffer in the same way. But clearly, they exhibit preferences for certain states, certain behaviors, certain treatments, and you could argue that suffering results from acting against those preferences.
You’re absolutely right to point out a fierce contradiction in a lot of ethics in the way we think about how we want to treat these emerging artificial intelligences and the way that we already treat living sentient species, such as animals. And I think anyone who’s interested in this area owes it to themself to consider their views on say animal ethics, and whether actually that’s an industry that they feel able to support.
Now, that’s not an easy decision to take, and I’m not saying that anyone who claims to be interested in robot ethics has, by logical extension, to become a vegan, for instance. But we owe it to ourselves to recognize, as you point out, that there are significant contradictions in those mentalities. And we have to try to find a way to resolve those.
You can find The Digital Life on iTunes, SoundCloud, Stitcher, Player FM, and Google Play. And if you’d like to follow us outside of the show, you can follow me on Twitter at @jonfollett. That’s J-O-N F-O-L-L-E-T-T. And of course, the whole show is brought to you by GoInvo, a studio designing the future of healthcare and emerging technologies, which you can check out at GoInvo.com. That’s G-O-I-N-V-O.com. Dirk?