Jon:
Welcome to Episode 287 of The Digital Life, a show about our insights into the future of design and technology. I’m your host, Jon Follett, and with me is founder and co-host, Dirk Knemeyer.
This week, we’ll be talking with author and designer Cennydd Bowles about ethics and emerging technologies. Cennydd’s new book Future Ethics, published in September, is available now in print and digital formats. Cennydd, welcome to the show.
Cennydd:
Hi folks. Thanks very much for having me here.
Jon:
So Dirk, do you want to kick us off with some of the questions that we’ve prepared for Cennydd?
Dirk:
Sure. So Cennydd, you know, for starters, just tell us a little bit about yourself.
Cennydd:
Sure thing. So I call myself a designer and a tech ethics consultant these days. I'd hesitate to call myself an ethicist because I believe that title should probably go to the people who've got the credentials to do so.
But my background is as a digital product designer. I've worked in government, startups, dot-coms. I spent three years heading up design at Twitter UK. And since then, I have focused pretty much exclusively on the ethics of technology and the ethics of design. And as you mentioned at the start, I recently released a book about this, sort of a combination of my work in that field. And I'm now trying to see how I take that to the world, and how I help companies make better ethical decisions and avoid some of the harms that have sadly become all too apparent, I think, in our field.
Dirk:
So to help me understand sort of the big picture of this, what is your conceptual model for ethics and technology? You know, I can think of sort of broad topics like agency or accountability, but do you have a framework of things that you think are sort of central and important that work together?
Cennydd:
To an extent, I sort of resist the idea of a grand narrative around something like ethics. I think we’ve sometimes looked for overly simplistic framings of that problem. And I see sometimes the solutions we try to offer are a little bit checklist-y. And I think there’s a danger, we get too much into that mentality.
So I think there are some focal points within ethics that are understandable but maybe too narrow. So we see a lot of people within this field, say, looking at the ethics of attention, and you know, all this panic about addictive technologies and devices that are consuming all our free time. Now, that's an important issue. But it's not the only issue. There are plenty of other ethical issues.
So I’m keen not to be too boxed into a specific section, if you like, a specific problem, or indeed a specific approach. For me, it’s really about challenging these ideologies and the assumptions that have for too long gone unchecked, I suppose, in our field. And entering into a proper discussion about how we change things for the better. I don’t think we’re at the stage yet where we can simply just take an ethical design process and imprint it upon technology teams. I don’t think we have that level of maturity in the discussion yet. So it’s my job, hopefully, to stimulate some of that conversation.
Dirk:
You mentioned you stay away from grand narratives because they often have overly simplified solutions. Can you give us an example of one of those, of what an overly simplified solution looks like, and why, so that our listeners can have some context for why perhaps those grand narratives aren't as compelling or interesting as we might think they are?
Cennydd:
Yeah, sure. One of the things I see a lot of people reaching for is the oversimplified answer of why don't we just have a code of ethics for our field? Why don't we have some Hippocratic Oath for technology or for design? And it's such an obvious answer, frankly, that it's been tried dozens and dozens of times. And it hasn't worked.
And so, when I see another one of these being proposed, I try to view it charitably, but I don't think it's going to really change anything. If the previous 50 didn't work, what use is another one going to be? I think there is a danger with approaches like codes of ethics and the like that we get this checklist approach, that we almost end up with ethics becoming sort of what's happened with accessibility.
Accessibility on the web, you know, since the release of the WCAG guidelines, they've helped and they've hindered. They've helped raise the profile of the issue, but they've also made accessibility appear to be a downstream development issue. You know, tick some boxes at the end, check your contrast ratios, you're now double-A compliant, job done, accessibility finished, let's move on.
And I don’t think that would be beneficial to have ethics as a checklist exercise at the end of the existing design process, the existing product development process, because it’s that process itself that we need to examine, rather than just tack on a code at the end and say “Well, did we comply with everything that we said we were going to?”
So I can understand the impulse to do that kind of thing. And there may still be a place for some kind of codification, but we’ve got to have those hard conversations first, rather than just throw that up as a one size fits all answer.
Dirk:
That makes a lot of sense. You know, stretching the demystification in a different direction: mainstream conversations about ethics, and particularly ethics and artificial intelligence, are often centered around sort of science fiction type topics. You know, machines that are smarter than us or even, from an evolutionary standpoint, replacing humans.
Very entertaining, perhaps, but not necessarily grappling with the real ethical issues that matter now or in the future. As someone who spends a lot of time thinking about these things, what are the ethical issues that really should matter to us today and going forward?
Cennydd:
I mean, the issues you mentioned around some of that scary sci-fi future stuff, they are legitimate issues. They’re important ethical issues for the tech industry to grapple with. There is a risk that we over index on those and ignore some of the things that are staring us in the face. But I don’t want to say that we shouldn’t focus on the dystopian angles as well. I think we need to pull every single lever in front of us and explore the ethics of those.
But on a more, I suppose you'd say, a more proximate scale, looking at the more readily apparent harms that are happening right now, we obviously have a lot of harms around the use of data and the effects of algorithms, often opaque algorithms. You know, the classic black box complaint that goes with a lot of, say, machine learning systems, that we don't know why they take the decisions that they do.
And I'm fairly familiar with the idea that they replicate the biases within not just the teams that create them, but also the societies that create the historic data that feeds and trains these algorithms. So they can essentially exacerbate and concretize these existing biases in ways that look objective and ways that look completely neutral.
I'm particularly interested in the effects of persuasive systems, persuasive algorithms. Karen Yeung, who's a legal scholar here in London, talks about the advent of an era of hypernudge, taking the idea of nudging systems to the extreme, where they're networked and dynamic and highly personalized. And they could be irresistible manipulators. And we won't essentially know about the presence of these systems until it's too late.
We've started already to see, of course, in the political sphere, the power of bots and of human networks of trolls working in collaboration to try and change mindsets. If we took that kind of persuasive power and dialed it up, amplified its capabilities, and put it in the hands of more and more people, that could have phenomenally challenging implications for society and even for free will.
I am also interested in how technology can be weaponized. And I mean that in two senses. I mean it in terms of how it can be misused by bad actors. So of course, hackers, trolls, et cetera. And to an extent, some governments are now using technology as a means of force to compel certain behaviors, or to take advantage of weaknesses in systems to their own advantage and to the disadvantage of others.
And then, of course, there is, I suppose, what you'd call the more visible and above-the-line weaponization of technology, which is still fraught with ethical difficulties. We look at what's happened, say, in Google with their Project Maven program, which caused all sorts of internal friction. And then, I think it was yesterday that Microsoft announced that they had just won a large defense contract to provide HoloLens technology to the US Army.
And so, the weaponization of these technologies may not have been intended. We may be playing with things that we think have fascinating implications. And we want to see where that technology takes us. And then we find later, oh, actually this could be used for significant harm, but we didn’t plan for it, or we didn’t have an opportunity for the people working on that technology to object and say “Well, I’m not actually comfortable working on a military project, for instance.”
So it's all these unintended consequences of technologies and the externalities of technologies that fall on people that we just didn't consider. I think that's where some of the more pressing and perhaps slightly less far-fetched ethical challenges lie.
Dirk:
For sure, those are really interesting and important examples. As I'm thinking about ethics in application, or how to get ethics properly considered in the context of the companies, countries, or organizations that are making decisions now that have real ethical implications, what would or should that look like? You know, the notion of an ethicist or an ethical consultant such as yourself participating in a product development process or participating in a company.
There's not a wide precedent for it. I'm sure it's happened, but there certainly isn't a standard that I'm familiar with, and I suspect most people aren't familiar with one either. I mean, is this a function that should be like a lawyer? You know, that's generally sort of an outsider, specialized thing that comes in, in expert situations? Or is it more like a designer-researcher that's sort of part of a team on an ongoing basis? How do we structurally make ethics the appropriate part of the things that we're doing in our organizations?
Cennydd:
Yeah, that's an astute question because, as you say, there isn't a whole lot of precedent for this. The closest analogies we can take are probably in academia or in medicine and so on, where we have institutional review boards, IRBs, which are essentially ethics committees, right? And any large study or any large program will then have to go through approval at the IRB level.
So some people think, well, maybe that's a model that we take and we transfer to large tech companies. I'm not entirely convinced. There may be some cases in which that works. But I think tech industry ideologies are just so resistant to anything that looks like a committee, anything that feels like academia and those sorts of heavy, burdensome processes.
So I think, in reality, we have to tread more lightly to begin with, unless there are really significant harms that could result. I'd say, if you're working on weapon systems, you probably need an IRB, right? You need a proper committee to validate the decisions, the ethical choices in front of you. But for everyday tech work, I think there is certainly benefit in having, yes, legal on board. You know, there will absolutely be lots of lawyers, general counsel, and so on, who have an interest in this, in both senses of that word.
But most of the change really has to come, I think, from inside the company. Now, I may be able to, and we'll find out whether this is true, I may be able to stimulate some of that and to help guide those companies. But ultimately, I think a failure state for ethics is to appoint a single person as the ethical oracle and say "Well, let's get this person in, then they give their binding view on whether this is a moral act or not." It doesn't scale. And it also could be quite a technocratic way of tackling what should be more of a democratic, more of a public-orientated decision.
So I think we have to find a way to approach ethics as an ethos, a mindset that we bring to the whole design process, the whole product development process, so that it raises questions throughout our work, rather than, as I say, just a checklist at the end or a legal compliance issue.
As for the structures of that specifically, like do we need an onsite ethicist within the team, or do we train designers in this? I think designers make for good vectors for this kind of work. I think they're very attuned to the idea of the end user having certain sorts of rights, for example. But I have only just begun to see the patterns that different companies are trying.
And what I'm seeing at the moment is there is very little in common. You have some companies setting up entire teams. You have some companies leading it from product, some leading it from design, some trying to hire ethicists out of university faculties. And I don't yet have the data to know which of those approaches works. I'm glad they're trying all these approaches because hopefully in a year, we'll have a better idea of which of those have been the most successful.
Dirk:
That makes sense. What's your approach? I mean, as a consultant, you must have an engagement model. What is the sort of prototype that you're trying out as you work with companies?
Cennydd:
You know, I'm literally working on that right now. So I don't have a specific answer. My hunch at this stage is that some initial engagement, you know, a talk, a workshop, something like that, is an awareness-raising thing, but I don't believe that's a successful model for long-term change. I think that has to be the initial engagement, like a foot in the door.
But my hunch is it’s going to be much more meaningful to have some kind of, you know, like a retainer relationship, or something where someone like myself can come in and start off some initiatives, and then equip the team with some of the skills they need to make those changes. But then come in and check for progress. Because I can tell you from experience that pushing for ethical change is difficult work. You’re swimming against a very heavy tide a lot of the time.
So you have to have persistence. You can't be too dissuaded if your grand plans don't work. So I think a kind of longitudinal interaction, maybe over the course of three, six, 12 months, is where I'm trying to head. For me, obviously, you know, I've got to position that appropriately and convince people that there's value in that. But, you know, ethics is for life, not just for Christmas, all these sorts of things. I don't want to have a situation in 12-18 months where we're saying "Oh, are we still talking about that ethics thing?" It has to be a bit more drawn into the way that we approach these problems.
Dirk:
Talk a little bit more about your expertise. You’ve just written this book, and it’s getting amazing reviews. People really are liking it, are seeing incredible value in it. Maybe share with our listeners in more detail, what’s going on in the book? What’s it all about?
Cennydd:
Sure thing. So my focus specifically has been on the ethics of emerging technology. And that's not to say that there aren't significant ethical questions to be asked around contemporary technology. But it's a bit of a fait accompli. There is value in talking about, say, the ethics of News Feed and Facebook. But right now, there's not a whole lot we can do. Its effects have been felt. When you look at, say, the effects that Facebook and Twitter may have had on the major elections of 2016, we can try to mitigate those harms from happening again. But really, that horse has bolted, if I can throw the clichés in.
And for me, the ethical harms of emerging technology ramp up quite sharply because over the next 10 to 20 years, we're going to be demanding, as an industry, a huge amount of trust from our users. We'll ask them to trust us with the safety of their vehicles and their homes and even their families. And I don't think we've yet earned the trust that we're going to request. So my focus is trying to illuminate some of the potential ethical challenges within that territory, within those emerging fields, but then to interlace that with what we already know about ethics.
I think the tech industry has this sometimes useful, but often infuriating belief that we’re the first people on any new shore. That we are beta testing this unique future. And therefore, we have to solve things from first principles. But of course, ethics as a field of inquiry has been around for a couple of millennia. Even the philosophy of technology, science and technology studies, these fields have been around for decades. And the industry really hasn’t paid them the attention that perhaps it should.
So I see my job as trying to introduce some of the maybe theoretical ideas, but introducing them in a way that's practical to designers and product managers and technologists, so they can actually start to have those discussions and make those changes within their own companies. So I'm trying to, if you like, translate between those two worlds. So if I have to say there's a particular focus of the book, it's that.
But I have structured the work in a way that is also somewhat chronological, working from the most readily apparent harms, such as, as I mentioned before, data and digital redlining, as it's known, bias, things like that, through to perhaps some of the larger but further-away threats, such as the risks to the economy, the risks of autonomous war, and so on. Those sorts of things tend to appear in later chapters, partly because I decided you need to build upon some of the knowledge introduced earlier in the book to get to that point.
Dirk:
I loved that the book is really practically focused. So I do hope that our listeners seek out Future Ethics because it will really, you know, give you sort of a steroid shot into understanding the space, and then also give you practical stuff you can act upon. It's really good.
You know, pivoting to capitalism. So capitalism is under increased scrutiny and critique in ways that overlap with issues of technology and of course, ethics. A specific example recently is how the ad-funded business model’s being blamed for ethical lapses. And Cennydd, I know you have a different take on this. I’d love to hear about it.
Cennydd:
Sure. I'm sometimes a little bit unpopular in tech ethics circles because my response to that challenge is different from the sort of pre-ordained view these days. I don't believe advertising is the problem. I have to make what might seem like a fairly pedantic distinction here, but I think it's actually an important one to make, which is to separate advertising from tracking.
I think tracking or targeting, that's really where the ethical risk lies. Now, advertising can be seen as a promise, you know, a value exchange that we agree to. You know, I get some valuable technology and in exchange, I give up, you know, my attention. I expect, I believe, that I'm going to see some adverts on my device, or in my podcast, or whatever it might be. I think if we reject that outright as a business model, which some people do, then really the only business model that leaves us with is the consumer-funded technology model. And that has a lot going for it. But it is also potentially highly discriminatory.
One of the great things the advertising model has brought us is that it’s put technology in the hands of billions for free. And I don’t want us to lose that. I think it would be a deeply regressive step to conclude that the only ethical technology is that which is funded by the end user because, of course, then you’re excluding the poor, developing nations, those without credit, and so on. So I would hate for us to throw the baby out with the bathwater.
I do think, as I say though, we have to think more carefully about tracking. And tracking definitely does have some ethical challenges. Sometimes people then make the inference, they say "Well, okay, but the tracking comes from the need to advertise. You know, you have to track people so you can advertise more accurately to them and get better return for that."
My counter to that is that the value of tracking has now gone beyond the advertising case. Everyone sees value in tracking. Tracking helps any company, whether it's ad-funded or not, generate analytics about the success of our products, see what's working or what isn't in the market. And also, it's particularly useful for generating training data. We want to understand user behavior so that we can train machine learning systems, AI systems, upon that data to create new products and services.
So tracking now has value to pretty much any company, regardless of the funding model. So this cliché of if you're not paying for the product, you are the product being sold, I would take even further, to perhaps a slightly more dystopian perspective, and say you are always the product. It doesn't matter who's paying for it. And so, we're trying to make a change that isn't focusing, I think, on the right issues, which is how do we combat some of these ideologies of datafication, of over-quantification, and the exploitations that might lurk within that. I think that's where the real ethical focus needs to go, rather than on the advertising case itself.
Dirk:
That makes a lot of sense. You know, another ethical topic, and to sort of wrap up the interview, is getting back maybe more into the science fiction realm and the notion of robot rights. So on one hand, modern robots appear to be little more than complicated buckets of bolts.
But on the other, you know, I remember feeling true, shocking outrage when there was a concept video for a Boston Dynamics robot that was shaped like an animal. This was maybe three years ago, and they had the engineers in this concept video beating it up, pushing it down, doing things that I would consider inhumane. And they were doing it to this robot, and I was upset at them and made sort of character judgments about the company and the people participating in the video based on those behaviors, sort of surprisingly so perhaps. Robot rights. Talk a little about that.
Cennydd:
Sure thing. So this is a complex and pretty controversial topic. There are many tech ethicists, AI ethicists, particularly, who would say robots cannot and never should have rights. Rights get quite slippery in ethics. It’s quite easy sometimes to claim rights without justification, which is a reason that some ethicists prefer not to use that perspective.
You can look at something like Sophia, this robot that you've almost certainly seen. It's this kind of rubber-faced, it's a marionette essentially. It's a puppet. It has almost no real robotic qualities or AI qualities. But it's now been given citizenship of the Kingdom of Saudi Arabia. Some people pointed out that that actually affords it certain rights that women in that nation didn't have.
And things like that frustrate me because that thing should absolutely not have any rights. It has nothing approaching what we might call consciousness. And consciousness is probably the point at which these issues really start to come to the fore. At some point, we might have a machine that has something approaching consciousness. And if that happens, then yes, maybe we do have to give this thing some legal personhood, or even moral personhood, which would then maybe suggest certain rights. You know, we have the Declaration of Human Rights. Maybe a lot of those would have to apply, maybe with some modification in that situation.
So we have, for instance, rights against ownership of persons. If we get to a point where a machine has demonstrated sufficient levels of consciousness, or something comparable, that we say it deserves personhood, then we can't own those things anymore. That's called slavery. We have directives against that kind of thing. We'd probably have to consider whether we can actually make this thing do the work that we've built this future of robotics on, essentially. Maybe suddenly it has to have a say, an opportunity to say "I won't do that work."
Now, it's tempting to say the way around this is, well, we just won't make machines that have any kind of consciousness, right? We won't program in consciousness subroutines. But a friend of mine, Damien Williams, who's a philosopher and a science and technology studies academic, makes a very good point that consciousness may emerge accidentally. It may not be something that we can simply excise and say "Well, we won't put that particular module into the system." It may be emergent. It may be very hard for us to recognize because that consciousness is probably going to manifest in a different manner to human or animal consciousness.
So there’s a great risk that we actually start infringing upon what might be rights of that entity without even realizing that this is happening. So it’s a really thorny and controversial topic, and one that I’m very glad there are proper credentialed philosophers looking at. I’ve done obviously plenty of research into this, but they’re far ahead of me, and I’m very glad that folks are working on it.
Just with respect to your point about BigDog, I think it was, the Boston Dynamics robot. Yes, I mean, that's fascinating and I think there is … Maybe I have a view that's a bit more sentimental than most. Some people would say, well, it's fine. It's not sentient. It's not conscious. It's not actually suffering in any way. But I think it's still a mistake to maltreat advanced robots like that. Even things like Alexa or Siri. I think it feels morally correct to me to at least be somewhat polite to them and to not swear at them and harass them. At some point, there'll be some hybrid entity anyway, some merging where these things are combined with humans, some intelligence combination there. And if you insult one, you'll insult the other. So that feels like something we shouldn't do.
But I also think we should treat these things well so that we don't brutalize ourselves, if you see what I mean. I think if we start to desensitize ourselves to doing harm to other entities, be they robots or be they animals, whatever it is, that line between artificial and real life may start to blur. And I think if we start to desensitize ourselves to that, if we lose the violence of violence, then I think that starts to say worrying things about our society. I would say not everyone agrees with that. Perhaps that's my sentimental view on that topic.
Dirk:
No, that makes a lot of sense. And just as a follow-up, it seems as though people who are talking about robot rights and participating in the conversations around the consciousness of robots, making sure that they're protected and we're safe, are doing this while we take other species, such as cows, for example, and slaughter them by the millions or tens of millions. I don't know what the scale is, but it's horrifying. What do you think about the boundaries there? I mean, a robot versus a cow or some other non-human animal?
Cennydd:
I'm casting my mind back to remember who it was. I think it was Jeremy Bentham, the utilitarian philosopher, who said, and forgive me, I'll have to slightly paraphrase, but the question is not can they talk or can they think, but can they suffer. And certainly, animals are absolutely capable of suffering.
Now back in Bentham's time, that was the view that he was challenging. Back in the 1700s, it didn't really seem to be accepted that animals could suffer in the same way. But clearly, they exhibit preferences for certain states, certain behaviors, certain treatments, and you could argue that suffering results from acting against those preferences.
You’re absolutely right to point out a fierce contradiction in a lot of ethics in the way we think about how we want to treat these emerging artificial intelligences and the way that we already treat living sentient species, such as animals. And I think anyone who’s interested in this area owes it to themself to consider their views on say animal ethics, and whether actually that’s an industry that they feel able to support.
Now, that’s not an easy decision to take, and I’m not saying, for instance, that anyone who claims to be interested in robot ethics by logical extension has to become a vegan, for instance. But we owe it to ourselves to recognize as you point out, there are significant contradictions in those mentalities. And we have to try to find a way to resolve those.
Dirk:
It’s very eloquently put. Thanks so much, Cennydd.
Jon:
Listeners, remember that while you’re listening to the show, you can follow along with the things that we’re mentioning here in real time. Just head over to thedigitalife.com. That’s just one L in The Digital Life. And go to the page for this episode. We’ve included links to pretty much everything mentioned by everyone. So it’s a rich information resource to take advantage of while you’re listening, or afterward if you’re trying to remember something that you liked.
You can find The Digital Life on iTunes, SoundCloud, Stitcher, Player FM, and Google Play. And if you'd like to follow us outside of the show, you can follow me on Twitter at @jonfollett. That's J-O-N F-O-L-L-E-T-T. And of course, the whole show is brought to you by GoInvo, a studio designing the future of healthcare and emerging technologies, which you can check out at GoInvo.com. That's G-O-I-N-V-O.com. Dirk?
Dirk:
You can follow me on Twitter at @dknemeyer. That's D-K-N-E-M-E-Y-E-R. And thanks so much for listening. Cennydd, how about you?
Cennydd:
Gosh, well, if anyone would like to follow me and my exploits on Twitter, I'm @Cennydd there, which is spelled the Welsh way. So C-E-N-N-Y-D-D. And of course, I'd be thrilled if you were to buy my book, Future Ethics. You can find information on that at www.future-ethics.com. Thanks.
Jon:
So that’s it for Episode 287 of The Digital Life. For Dirk Knemeyer, I’m Jon Follett, and we’ll see you next time.