
More information on Susie Alegre:
Books: https://linktr.ee/susiealegre
Website: www.susiealegre.com
Consulting website: https://alegre.ai
TRANSCRIPT:
Kel: [00:00:00] I'm Kel Myers, and this is Phoenix Sound. Joining me in conversation today is Susie Alegre, international human rights lawyer and author of the new book, Human Rights, Robot Wrongs: Being Human in the Age of AI, in which she explores the ways artificial intelligence is starting to shape every aspect of our daily lives, from how we think to who we love.
As the atmosphere of fear and hysteria around AI grows, it's apparent we need more nuanced and well-informed discussions around this issue. Today, we'll explore what human rights are, the potential threat AI poses to our human rights, and some of the proactive measures we can take to ensure these rights are protected.
Stay with us.
So thanks again, Susie. I'll just give you a bit of background on how I first discovered you. I first came across your work in 2022, when you published your first book, Freedom to Think, in which you chart the history and importance of freedom of thought and how that basic human right is something that needs protecting. It just completely opened my mind up and gave language to a lot of things I think I'd been feeling.
And I heard a comedian say recently that he's got a lot of vibes – he just doesn't have data to back them up. And I was kind of in that space: I've got all these vibes, like something doesn't feel right, but that book definitely helped to articulate what the problem was.
And your work called into question, I think, our growing over-reliance on technology and how it can really compromise our ability to think for ourselves. And then, you know, fast forward two years and we find ourselves living through an era of rapid AI advancement.
It's moving at a breakneck speed.
You know, people are getting AI partners and AI pets; ChatGPT's got hundreds of millions of active users.
And I think, as our collective enchantment around the potential of these technologies grows, so does, obviously, the need to know where we stand as humanity in relation to them, which is why I think the central question at the heart of your second book, Human Rights, Robot Wrongs, is such an important and also a refreshing reality check.
In the introduction you write, 'The question I ask in this book is not, What is AI and how can we constrain it? The question is, what is humanity, and what do we need to do to protect it?'
So, let's start there, with us, people. What are human rights, and why do they need protecting?
Susie: Well, human rights are often sort of talked about as if they were some nebulous idea, but human rights are a set of rights and freedoms that are now set down in law.
So since 1948, with the Universal Declaration of Human Rights, sort of in the aftermath of the Second World War, people and countries from around the world came together to discuss what they could do to make sure that those kinds of horrors never, ever happened again, and to write, if you like, a list of all the rights and freedoms that we need to enjoy our humanity and to flourish as human beings, regardless of who we are or where we are on the planet.
And so, the Universal Declaration of Human Rights was really the first time that those rights had been codified clearly in international law. I mean, historically, we'd seen things like the Declaration of the Rights of Man, you know, through the Enlightenment, where people had started to realise that there must be these rights that we need to be human.
But since the Universal Declaration of Human Rights, we've seen laws like the International Covenant on Civil and Political Rights, the European Convention on Human Rights, and in the UK now the Human Rights Act, coming into force and really putting in place legal guarantees for these rights.
And the kinds of rights we see in these documents and in these laws are quite wide-ranging. So they include things like the right to private and family life, the right to liberty, the right to freedom of thought, as you mentioned, which I explored a lot in my first book, and also rights related to things like freedom of association and how [00:04:00] we interact with each other, freedom of expression, and access to justice.
Another crucial one, which almost seems like a given, is the right to life; it's enshrined in and protected by many laws in different ways. And so those human rights are really what we need to allow us to live, to develop, and to flourish as humans, as individuals, and as societies.
Kel: At the start of the book you explain how AI is very much a gender-based issue, and it is quite scary to read and to see how that's moved over millennia from myth into something that's happening in reality. Would you be open to reading us the opening section of chapter one, Being Human?
Would that be alright?
Susie: Yes, absolutely, I'd be delighted.
In his 1972 novel, The Stepford [00:05:00] Wives, Ira Levin created a dystopian world in which a town full of men, led by Diz, a former Disneyland roboticist, replaced their wives with robots. It was a tale situated within a brewing backlash against the women's liberation movement of the 1960s, but it built upon a cultural phenomenon dating back millennia, the fantasy of replacing women with automata.
In ancient Cypriot mythology, King Pygmalion was so repulsed by real women, he decided to create a perfect female sculpture, Galatea, to love instead.
The goddess Aphrodite helpfully breathed life into the marble so that the king and his sculpture could start a family and live happily ever after.
The Stepford Wives is a modern-day retelling of the myth, and the 2004 film version places it firmly in the world in which we live today, with Mike, a former Microsoft executive, taking the lead, and [00:06:00] smart houses and a robo-puppy completing the perfect suburban picture created by the robot wives.
It is a toxic cocktail of idealised womanhood, misogyny and automation. And it is a phenomenon that has crossed over from myth and fiction into the reality of tech innovation that we live with every day, described by researchers from Radboud University Nijmegen as 'Pygmalion displacement': a process of humanising AI that dehumanises women in particular.
Once you start to look at technology through the Pygmalion lens, you will see it is all around you. Just ask Alexa.
Kel: Thanks, Susie.
I was pretty much shaking my head the entire time that you were reading that.
Thank you. It's a very, very powerful opening to the book. It's incredible.
And I've been reading it out to all my friends. And it's not really a case of whether or [00:07:00] not they want to hear it, so maybe that's me infringing on their human rights, I don't know, but you need to hear this, you're gonna hear it.
Susie: As long as they can walk away, it's fine.
Kel: Yeah, yeah.
In the book, and just prior, you talked about the UDHR, or the Universal Declaration of Human Rights, as this global blueprint for societies and the world and how we should function with respect to human dignity and justice.
Could you please explain to our audience what the UDHR is, more broadly, and how it serves to protect us, especially at this time when we've got so many AI-powered threats in the world?
Susie: Well, as I said earlier, the Universal Declaration of Human Rights, or UDHR, is the kind of foundational document of modern human rights that was agreed in 1948, in the aftermath of the Second World War, with Eleanor Roosevelt chairing the drafting committee, which included [00:08:00] members from all around the world: from Russia, from China, from Lebanon, from Canada, the UK and Latin America, as well as Eleanor Roosevelt herself, obviously, from the United States.
It's a document that includes not only civil and political rights, the kind of rights you might think about when you're thinking of human rights, like the right to liberty, prohibition on torture, prohibition on slavery, but it also includes a lot of what are called economic, social and cultural rights.
So things like the right to work, the right to rest, the right to education and, crucially, the right to health.
So it's really quite a broad document, including the full range of human rights, and recognising that human rights depend on each other, if you like.
So if you don't have equality, then you may well not have the right to health or you may not be enjoying the right to health if you don't have [00:09:00] equal access, for example, to health care facilities. And so it's really this idea that if we want to flourish as human beings, then we need to make sure that these rights are guaranteed going forward.
One of the things that is really important to bear in mind about human rights law, and the way that it's developed since the UDHR in 1948, is what's called the living tree doctrine of human rights law, which means that human rights develop as our societies develop.
So we're not stuck with the ideas of human rights that people living in the 1940s might have had; ideas about things like gender equality or LGBTQ+ rights have advanced radically since the 1940s.
And human rights law has evolved with those changes in society.
So, for example, the place where I'm from, the Isle of Man, still had judicial corporal punishment for young men in the 1970s.
And a young man who was sentenced to be birched in the Isle of Man took his case to the European Court of Human Rights in Strasbourg, claiming that birching was inhuman and degrading punishment.
The Strasbourg court, when it looked at the case, found, and this is what I think is really important to understand about human rights, that while birching or corporal punishment had been pretty common in Europe in the 1950s, when the European Convention on Human Rights was drafted, by the 1970s the Isle of Man was a real outlier. In that context, birching had gone from being quite a normal punishment to something that amounted to a degrading punishment and was therefore unlawful.
And so that really matters when we're thinking about human rights in the digital world. You might hear people saying, 'well, you know, tech is moving so fast that the law can't keep up.'
Well, human rights law evolves to meet the changing society. You don't have to rewrite human rights; you have to interpret them in light of what's happening in society.
And today, very clearly, what's happening in our societies includes the digital and technological advancements and the AI advancements you were talking about at the start.
Kel: Yeah, and that's such an interesting distinction, I think, to look at it that way, rather than thinking, oh, we're not keeping up; to imagine it as this ever-evolving thing, like a plant or a tree, a kind of living organism, I guess.
Susie: No, absolutely. And I think, again, one of the narratives that you'll find about laws not keeping up, it's often not about the laws keeping up; it's about the fact that, you know, justice moves slowly and access to justice is not, unfortunately, a given. And so you'll see that the interpretation of our laws may be moving more slowly.
That doesn't mean that they don't apply.
Kel: Yeah. Absolutely.
In the book, you talk about the environmental impact of digital technologies, and I think this is often a blind spot. I can completely understand why: when you just go to the Apple Store, or order a new phone online, you're not really considering where it's coming from.
I think a lot of the time it says 'Designed in California' or 'Designed in the USA'.
Most people, I think, are aware that it's made in China, but we don't really think about the consequences deeper than that, in terms of the impacts.
And I was really struck by chapter eight of your book, entitled Magical Pixie Dust, where you write, quote:
'Our obsession with technology as an easy way of making our lives smoother is fed by the fallacy that virtual worlds are somehow greener.
The truth is uglier. If we want a sustainable future on earth, we cannot afford to look away.'
What exactly are we looking away from?
Susie: Well, we're looking away from an awful lot of things, so it's almost not looking away, it's burying our heads in the sand, essentially.
You know, one thing is the question of the supply chain.
You know, the tech that we carry around in our pockets is made up of minerals, chemicals, you know, substances that have ultimately been dragged out of the earth, not just magically appeared in the Apple shop or whatever other tech shop you're going to.
And what we can see is that, you know, the ICT industry involves really horrendous examples of illegal or unregulated mining, particularly in the developing world.
So an awful lot of the vital minerals and components of our technology are being dug out of the ground in unregulated mines in places like the Democratic Republic of Congo, using indentured labour, child labour, you know, people working in incredibly dangerous conditions in ways that are really significantly breaching their rights.
And we see issues as well in the manufacturing of the components of our technology.
You know, when you look at even Silicon Valley, for example, Silicon Valley is one of the most polluted areas in the United States, precisely because it's where the silicon was dug up and processed in the United States.
And there have been reports of really serious health impacts of the manufacturing of tech products, including impacts on reproductive health, which can have consequences not only for the people working in the factories, but for their entire families and for generations.
And then we see also supply chain questions when you look at things like artificial intelligence and the training of artificial intelligence, which we're sort of told is a mechanical and again a slightly magical feat.
But actually, take things like content moderation, for example, which is used to train generative AI to understand what is and is not acceptable, if you like, in human language and in human society.
That work is done by using what are known as click workers, often again in the developing world, to look at horrendous tranches of content scraped off the internet in order to filter it out so that the product that we get to see when we look at a tool like ChatGPT is a kind of sanitised version of what's been taken off the internet.
But another, and I think really important, thing to bear in mind is that when we're done with our technology, you know, when your phone becomes obsolete, that in itself is an environmental disaster. The scale at which we are using and throwing away technology means that all of that stuff, all of that hardware, is going back into the earth, potentially either to be buried in toxic landfill or to be recycled, again in an unregulated way, often with implications for child rights and for the reproductive rights of people working in or living around that environment.
And then one of the things that has really shown itself just recently, actually, since the book came out, is the power intensity and the water usage of AI itself.
And so while we're being sold this idea that AI, particularly the generative AI of the last two years, which has really captured the public imagination, is going to turbocharge productivity, we're not really seeing great examples of how it's improving productivity. What it is doing is massively increasing the energy use of the companies that are producing it.
And so we've seen this year companies like Google and Microsoft reporting massive spikes in their energy use.
And similarly, the way that generative AI and AI work is through data processing, in massive data processing centres which are scattered around the world. And we're seeing those data processing centres often being built in areas which are already suffering from drought.
So we've seen problems in places like Spain and also in Latin America, where local communities have been protesting against new builds of data centres, which will effectively take all the water in places where people are struggling to have enough drinking water as it is.
So the environmental consequences are all around sort of from start to finish and everything in between. And so, as I say, it's almost like we're burying our heads in the sand if we don't look at this.
And I think that, you know, just looking at the energy usage of, for example, AI search over a standard search: I think last year there was an estimate that it was about 10 times more energy intensive to run a search on an AI search engine than on a standard Google search engine.
I don't know this year what that is because it seems like the scale and the intensity of the power usage is escalating as the models themselves escalate.
But when you think about it, it's a bit like taking Concorde to the corner shop, using an AI search just to, you know, find out.
Kel: It's a great way of putting it, yeah.
Susie: You know, find out where your nearest supermarket is or something. I think we need to think really, really carefully about what it's for, why we need it and what the costs are.
Kel: Right? Yeah, definitely. I was thinking that with regards to the energy use; I saw something in the Guardian this past couple of weeks with regards to Ireland and their energy. So just thinking about the fact that you can just, I don't know, use ChatGPT, the free version, anyone can do it, just to mess about, without even considering the implications.
Susie: Yeah, writing pointless haikus, or whatever.
Kel: Yeah, exactly. 'Please write me a poem like I'm Rumi', you know.
Susie: Exactly. About the environment.
Kel: Yeah. And once we pull that veil back and we can really see what's going on, we start to look at the reality of it. I'll be honest, I think a lot of people feel overwhelmed.
I think when you see the cobalt mining, I mean, I recommend anyone just go and YouTube that and see for yourself some of the images and the videos of the conditions these people are living and working in, and it's disgusting and disgraceful to see. And also I think a lot of people, you know, in the West especially, are just like, 'Oh, well, oh, poor things. Oh, that's so sad.'
You know, we've got nothing but pity to offer.
And, you know, I don't think it's helpful for us to feel disempowered either, like we can't really do anything.
So I'm wondering if there are any practical ways that you think we can actually make a difference.
Like, do you think there's anything we can do to participate to make it more sustainable?
Or is it just a case of like, you know, pulling the veil back and, and discerning for yourself how you can do your best?
Susie: I think on an individual level it's always gonna be a challenge, but you know, I mean, one thing you can do is buy less tech, throw less stuff away, or recycle more.
Kel: Yeah.
Susie: Which, I mean, I know that sounds a bit you know, a bit facile, but you know, there are companies that are trying to make a difference.
So, for example, there's one called Fairphone, where they are trying to design a phone which is, you know, fair in terms of the way it's made and the supply chains as much as possible, and is also repairable,
designed to last at least 10 years, with the different components able to be swapped out if they're not working.
So, you know, I think there are companies that are going in that direction.
And we're also starting to see legislation, for example in the EU, about environmental and human rights sustainability more generally in consumer products, which can apply to tech products.
And I think what we can always do as individuals is make ourselves aware of what's happening and also, you know, ask our politicians and our lawmakers to protect people, tell them what we want and the societies we want to live in, because ultimately regulation can do a lot to push industries in different directions.
Kel: Yeah, yeah, hopefully. I heard Sam Altman talk in an interview about how he's really big on nuclear, whatever that means. But yeah, it's interesting to think about how they're thinking about energy, and just the scale of it. I think it was quite telling, really.
Susie: One of the things I do think is that, at the moment, and, you know, while I was writing the book, the really big hype cycle on AI, and generative AI in particular, was in full swing.
I am starting to sense that that might be changing. Instead of being terrified that they're going to be left behind the curve, people are now starting to ask questions about productivity, about environmental impact, about human impact.
And so, you know, it may well be that what we've seen in the last year or two we won't be seeing in the next couple of years.
You know these things can come and go quite fast and it wasn't so long ago that we were all going to be living our lives in the metaverse.
So things can change quite fast.
Kel: Moving on to women's health apps and the risks of AI in healthcare diagnostics - this is something that's close to my heart personally as someone who's got stage four endometriosis and has been seeing the rise of, of these kinds of apps, literally over the past 12 months, I'd say.
There's just been a tsunami of them, it seems.
And in your book you talk of the need to look carefully before we rush to incorporate AI into every aspect of our lives, and, you know, a lot of these apps are claiming to aid health and wellness or to solve what we see as unsolvable problems.
So I'm wondering what you see as being the risks and benefits of using AI-powered apps in health diagnostics and care, particularly for diseases like endometriosis, where there is no cure and no known cause, and there's a lot of uncertainty.
Susie: Yeah, no, I mean, I think there are a large number of risks.
And, you know, the health space is one where we're often told that there are great opportunities for AI to, sort of, solve cancer or, you know, whatever it is; there's this great push.
I think one of the big risks is, firstly, that AI is lots of different things. AI is not just a chatbot or, you know, an app on your phone, but those are the things that are the easiest to deploy and to sell, and to sell at scale.
One of the things I looked at in my first book in particular was period tracker apps.
And I mean, the monitoring of women's wellness is just massive, massive money through apps, and there are, you know, several big problems with that.
Some of those apps have run into problems with privacy, where it's been clear that they didn't hold data securely. So they're gathering really sensitive data. And, you know, when you look now at places like the US, that might be data indicating that somebody has gone for an abortion or was looking for an abortion, the kind of data which could land you up in jail in some places, but also just sort of intimate information about people's sex lives and about their health, which might be used against them when it's sold on.
And we see the same in mental health apps.
But aside from the privacy questions and the security questions, one of the big issues with a lot of these apps is whether or not they're actually real. You know, the term snake oil came from the false advertising of snake oil for medicinal purposes, and AI and tech-based snake oil is all around us.
I mean, one example of an app that the Federal Trade Commission recently found to be unlawful, because it had absolutely no, or very, very little, evidence to show that it worked, was an app that claimed it could diagnose sexually transmitted diseases from an uploaded dick pic. So effectively, you just take a photo of your prospective partner's penis and it will tell you immediately whether or not they might have an STD.
Unsurprisingly, the Federal Trade Commission found that there was very little evidence to demonstrate that that was in any way true. And that app is no longer available.
But those kinds of things, you know, they are everywhere. And, you know, it is a real challenge. But going back to that question about tech outstripping the law: the law on false advertising and fraud remains in place.
It might just take a bit longer to uncover or to enforce in these kinds of cases. But I think that's a real problem of sort of tech optimism saying, you know, wow, we've got all this amazing tech, so we're going to be able to solve all of your health problems and all of your mental health problems, physical health problems.
That's not necessarily true.
Kel: Yeah, yeah. And it's all in the wording as well. My partner is a nurse, and she came across something the other day. She's like, 'You need to check this out.' It's a new website and app, I think coming out of the States, and supposedly you can find a cure for endometriosis. And it's like, it takes years to get a diagnosis,
'We'll get you an answer in days.' And that's like, that's amazing, that's great. If only, guys.
Susie: Just upload a selfie.
Kel: Yeah, yeah, right. That's it. Yeah, that's perfect. That sounds great. If only. Why didn't we just think of that earlier? Like, this woman, she's got it sorted. Anyway, closing out the second-to-last chapter you write, and I think this is just so poignant: 'At some point we need to ask the big questions. What is the point? Where will it take us? Is it worth the cost? Technology is not inherently bad, but it's not a panacea for all our problems.'
So, yeah, what are the risks of viewing it in such idealistic terms? Because I'll be honest, I think a lot of people here are. That's my feeling in society here in Australia,
more so, I think, than what I'm feeling from my friends in the UK and what they're experiencing. So, yeah, and I'm thinking about the example you share about Babylon Health, the health startup claiming to offer a 'doctor in your pocket' app.
Susie: Yeah, well, Babylon Health was precisely that. It was a sort of online GP service, but it was something, again, that was really taken up around the world, including here in the UK, with, you know, billions [00:29:00] being pumped into it as the future of healthcare, because it meant that everyone could sort of see a doctor immediately and get this sort of immediate diagnosis.
But Babylon, you know, despite its success and promise, effectively folded, I think last year or two years ago, because ultimately the tech wasn't there. It was all sort of smoke and mirrors. The information it was trying to use was information that had been inputted into an Excel sheet by some real doctors in the back office, but that wasn't capable of adjusting to the complexity and the subtlety of real-life patients, who don't necessarily explain themselves or tell you things in the exact terms that have been inputted into the Excel spreadsheet.
So I think there's a real danger of believing, really wanting to believe that we can solve these problems with technology.
And I mean, another very famous example [00:30:00] was Theranos, you know, claiming that they could do blood tests with a tiny drop of blood from a prick of the finger. You know, huge hype, the future of healthcare, absolutely revolutionary. And its founders then landed up in jail because ultimately it wasn't true.
The science wasn't there. Just wanting it to be true is not enough to make it happen. And I think another danger that we can see is where you land up with money being pumped into the technology instead of the people that we need in the healthcare system, with this idea that it's going to be cheaper, it's going to revolutionise everything.
It's going to be super efficient.
And, you know, I remember last year I unfortunately had to spend an evening in an emergency room in the UK, staring at a massive screen on the wall which read 'Your license has expired, please contact your administrator'. [00:31:00] And, you know, this screen was on the wall in A&E, lit up with electricity, as a kind of emblem, while the nurses and doctors were running around desperately trying to cope with the scale of, you know, a London A&E.
And you're thinking, that really says it all. Someone has bought a licence for something to put on this screen instead of just sticking a piece of paper on the wall, and now, you know, the licence has expired and it doesn't work. Maybe the company's gone bust. Who knows?
And I think that's a real problem: thinking about where we put the money. And actually for some things like health, and the law is another area, I think we need to be very careful not to allow ourselves to be conned into believing that we're not going to need people in the future. We need to recognise that these are vitally human professions, if you like, and that technology can help the humans, but it's not [00:32:00] going to replace them.
And we really need to think about whether it works. I think always ask, does it work?
Kel: Yeah, yeah, that's it. And that it doesn't fill the void where, you know, doctors and nurses and physios should be, for example. Like, I'm seeing these apps become quite prominent here; they're being marketed, in terms of endometriosis, at regional and rural women, women who can't get access to metropolitan clinicians and gynaes, or simply can't travel 500 km.
So, yeah, it's kind of like, 'Oh, well, that'll solve the problems for them.' And it's like, well, telehealth's one thing, and that's a great service. I think it's actually revolutionised care in a great way, in a lot of respects, through COVID. But it's not enough, you know, to have that in a chatbot.
Susie: Yeah, absolutely. And I think the thing is, it's not all or nothing. And, you know, healthcare costs money.
Kel: Exactly, especially when it's complex, for sure. [00:33:00]
So, looking ahead to the future, what do you see as the biggest battles between human rights and AI, particularly for women, and how do you think we can prepare for them?
Susie: I think it's complicated and, you know, to give a real lawyer's answer, it depends. It partly depends where you are and who you are, I suppose. I think there are huge battles coming.
There are huge battles that are underway. And certainly in Europe, where we have the General Data Protection Regulation, where data protection has been very strongly regulated, you know, we're still seeing what data protection regulation means for the protection of human rights more broadly, through cases coming forward.
And we are seeing shifts already. I think one of the big areas for women is the way that women are [00:34:00] portrayed and approached, if you like, in the online environment, because I think how women are seen around the world in this sort of online space is really affecting how women are seen and treated in the real world.
So tackling, for example, the systems that underlie the algorithmic pushing of content, whether that's pushing misogynistic content on young men and boys in order to, sort of, turn them against women, or pushing sort of toxic, self-hating content on young women and girls, affecting their mental health.
I think we really need to change those systems and to change our information environment so that it's not a kind of cesspool of radicalisation, which at the moment it is, not just because of individual bits of content, but because of the way people are [00:35:00] profiled and targeted and bombarded with messages that then affect how they think about themselves and how they think about each other.
And in this sort of new wave of generative AI, you know, perhaps unsurprisingly, one of the immediate outcomes has been the surge in deepfake pornography and image-based sexual abuse online against women and girls. But, you know, look at what happened with non-consensual deepfake sexual imagery in the UK.
A couple of months ago, the previous government announced that they were going to make that illegal as part of, kind of, the development of the Online Safety Act. And overnight, one of the main websites providing those services was unavailable in the UK because of pending regulatory and legal changes.
So for me, that was a real sign [00:36:00] that changing the law works. Addressing these things very clearly in the law changes the direction of things. It's not going to get rid of image-based sexual abuse or non-consensual deepfake images of women, but it will reduce it, if people realise that it's unacceptable, that it's illegal, that you could go to prison for doing this.
Most people don't want to do that. You know, there are clearly people who commit criminal offences, always, but most people don't want to do that. And so making it very clear where the legal lines of criminality are, I think, will make a big difference.
And perhaps another area which I feel quite hopeful about is enforcement.
So one tool that the US Federal Trade Commission has been using, when it finds a product or platform to be acting unlawfully, is what's called algorithmic disgorgement, where instead of just having to pay a [00:37:00] fine and carry on with business as usual, they order the company to destroy their algorithm, to destroy their model.
And I think that way we will see a shift. I think it really helps to focus the mind at the stage of developing and researching a product if you understand that if you get it wrong and step the wrong side of the line, then all of this money and your entire business will be destroyed. So I think that as well is, for me, a hopeful step in the right direction of making it clear that laws and regulations do apply to technology and that they will be enforced seriously.
Kel: Yeah. Yeah. And that is definitely something to bring a beacon of hope and to empower us to realise that, hey, we've got choices and we can make them; it's within our power. And it's important to remember that every day these algorithms are shaping our minds, the way we think, [00:38:00] the way we move, in such subtle, nuanced ways.
It crawls up your brainstem. I mean, it's hard, isn't it? It's a battle. So, yeah, to hear those wins is obviously hopeful. And the insights you shared, Susie, from your new book, Human Rights, Robot Wrongs, I think they really challenge us to reflect not only on how we use the technology, but on how it shapes every aspect of our lives in return: our relationships, everything we do.
And, yeah, I recommend everyone, especially women, equip themselves with the knowledge of how to navigate the complexities of technology. I think a lot of women think it's not their business, to be honest. I came from a tech background. I worked in a team of about 75 people;
it was just three women. The guys used to take bets on how long it'd be before you cried in the toilet. So, yeah, I know what it's like to be in that environment. It's a very bro-ish environment, and you take such a human-first approach with this, [00:39:00] and I'm just so grateful for your work and for you joining me today.
Susie: It's been a pleasure. Thank you so much for having me.
Kel: And that, dear listeners, was my conversation with Susie Alegre.
I really hope you enjoyed listening to this conversation as much as I did being a part of it. It was truly enlightening to hear Susie's insights and I really recommend reading both of her books as the generosity of spirit that she has just shines through in every page.
To find out more about Susie's work, including where to find both books, please visit the show notes. Until next time, I'm Kel Myers and this is Phoenix Sound.
More information on Susie Alegre:
Books: https://linktr.ee/susiealegre
Website: www.susiealegre.com
Consulting website: https://alegre.ai
TRANSCRIPT:
Kel: [00:00:00] I'm Kel Myers, and this is Phoenix Sound. Joining me in conversation today is Susie Alegre, international human rights lawyer and author of the new book, Human Rights, Robot Wrongs, Being Human in the Age of AI, in which she explores the ways artificial intelligence is starting to shape every aspect of our daily lives, from how we think to who we love.
As the atmosphere of fear and hysteria around AI grows, it's apparent we need more nuanced and well-informed discussions around this issue. Today, we'll explore what human rights are, the potential threat AI poses to our human rights, and some of the proactive measures we can take to ensure these rights are protected.
Stay with us.
So thanks again, Susie I'll just give you a background on how I first discovered you. I first came across your work in 2022, when you published your first book, Freedom to Think, in which you chart the history and importance of freedom of thought and how that basic human right is something that needs protecting and it just completely opened my mind up and gave language to a lot of things I think I've been feeling.
And I heard a comedian say lately that he's got a lot of vibes – he just doesn't have data to back it up. And I was kind of like, in that space, I've got all these vibes, like, something doesn't feel right, but that definitely helped to articulate what the problem was there.
And your work called into question, I think, our growing over-reliance on technology and how it can really just compromise our ability to think for ourselves and then, you know, fast forward two years and we find ourselves living through, an era of rapid AI advancement.
It's moving at a breakneck speed.
You know, people are getting AI partners, AI pets, chat GPTs, got hundreds of millions of active users.
And I think, as our collective enchantment around the potential of these technologies grows, so does, obviously, the need to know where we stand as humanity in relation to them, which is why I think the central question at the heart of your second book, Human Rights, Robot Wrongs, is such an important and also a refreshing reality check.
In the introduction you write, 'The question I ask in this book is not, What is AI and how can we constrain it? The question is, what is humanity, and what do we need to do to protect it?'
So, let's start there, with us, people. What are human rights, and why do they need protecting?
Susie: Well, human rights are often sort of talked about as if they were some nebulous idea, but human rights are a set of Rights and freedoms that are now set down in law.
So since 1948, with the Universal Declaration on Human Rights, sort of in the aftermath of the Second World War, people and countries from around the world came together to discuss what they could do to make sure that these kind of horrors never ever happened again and to write, if you like, a list of all the rights and freedoms that we need to enjoy our humanity and to flourish as human beings, regardless of who we are or where we are on the planet.
And so, the Universal Declaration on Human Rights was really the first time that those rights had been codified clearly in international law. I mean, historically, we'd seen things like the Declaration on the Rights of Man, you know, through the Enlightenment, where people had started to realise that there must be these rights that we need to be human.
But since the Universal Declaration on Human Rights, we've seen laws both like, the International Covenant on Civil and Political Rights, the European Convention on Human Rights, and in the UK, now the Human Rights Act, coming into force and really putting legal guarantees for these rights.
And the kind of rights, we see in these, in these documents and in these laws are quite wide ranging. So they include things like the right to private and family life, the right to liberty, the right to freedom of thought, as you mentioned, which I explored a lot in my first book, and also rights related to things like freedom of association and how [00:04:00] we interact with each other, freedom of expression, access to justice.
Another crucial one, which sort of almost seems like it's a given, but is the right to life, but the right to life is enshrined in and protected by law and by many laws in different ways. And so those human rights are really what we need to allow us to live, to develop, and to flourish as humans, as individuals, and as societies.
Kel: At the start of the book you explain how AI is very much a gender based issue and it is quite scary to read and to see how that's moved, you know, like over a millennia from myth into something that's happening in reality. Would you be open to reading us the opening section of chapter one Being Human?
Would that be alright?
Susie: Yes, absolutely, I'd be delighted.
In his 1972 novel, The Stepford [00:05:00] Wives, Ira Levin created a dystopian world in which a town full of men, led by Diz, a former Disneyland roboticist, replaced their wives with robots. It was a tale situated within a brewing backlash against the women's liberation movement of the 1960s, but it built upon a cultural phenomenon dating back millennia, the fantasy of replacing women with automata.
In ancient Cypriot mythology, King Pygmalion was so repulsed by real women, he decided to create a perfect female sculpture, Galatea, to love instead.
The goddess Aphrodite helpfully breathes life into the marble so that the king and his sculpture could start a family and live happily ever after.
The Stepford Wives is a modern day retelling of the myth, and the 2004 film version places it firmly in the world in which we live today, with Mike, a former Microsoft executive, Taking the lead, and [00:06:00] smart houses and a robo puppy, completing the perfect suburban picture created by the robot wives.
It is a toxic cocktail of idealised womanhood, misogyny and automation. And it is a phenomenon that has crossed over from myth and fiction into the reality of tech innovation that we live with every day. Described by researchers from Radboud University Nijmegen as Pygmalion Displacement, A process of humanising AI that dehumanises women in particular.
Once you start to look at technology through the Pygmalion lens, you will see it is all around you. Just ask Alexa.
Kel: Thanks, Susie.
I was pretty much shaking my head the entire time that you were reading that.
Yeah, thank, thank you. It's very, very powerful opening there to the book and Thank you. Yeah, yeah, it's incredible.
And I've been reading out to all my friends. And it's not really a case of whether or [00:07:00] not they want to, so maybe that's me impeding on their human rights, I don't know, but you need to hear this, you're gonna hear it.
Susie: As they can walk away, it's fine.
Kel: Yeah, yeah
in the book, and just prior, you talked about the UDHR, or the Universal Declaration of Human Rights, as this global blueprint for societies and the world and how we should function with respect to human dignity and justice.
Could you please explain to our audience what the UDHR, is more broadly and how it serves to protect us, especially at this, at this time when we've got so many AI powered threats in the world.
Susie: Well, as I said earlier, the Universal Declaration on Human Rights, or UDHR, is the kind of foundational document of modern human rights that was agreed in 1948, in the aftermath of the Second World War Eleanor Roosevelt being the chair of the drafting committee, which included [00:08:00] members from all around the world, from Russia, from China, from Lebanon, from Canada, the UK and Latin America, as well as Eleanor Roosevelt, obviously from the United States.
It's a document that includes not only civil and political rights, the kind of rights you might think about when you're thinking of human rights, like the right to liberty, prohibition on torture, prohibition on slavery, but it also includes a lot of what are called economic, social and cultural rights.
So things that are very much about things like the right to work, the right to rest, the right to education, and the right to health, crucially.
So it's a really quite a broad document, including the full range of human rights, and recognising that human rights operate between each other, if you like.
So if you don't have equality, then you may well not have the right to health or you may not be enjoying the right to health if you don't have [00:09:00] equal access, for example, to health care facilities. And so it's really this idea that if we want to flourish as human beings, then we need to make sure that these rights are guaranteed going forward.
One of the things that is really important to bear in mind about human rights law and the way that it's developed since the UDHR in 1948 is what's called the Living Tree Doctrine on human rights law, which means that human rights develop as our societies develop.
So we're not stuck with the ideas of human rights that people living in the 1940s might have had, particularly things like gender equality or LGBTQ+ rights have advanced radically since the 1940s.
And human rights law has evolved with those changes in society.
So, for example the place where I'm from, the Isle of Man, in the 1970s, still had corporal punishment for young men as a judicial remedy.
And a young man who was sentenced to be birched in the Isle of Man took his case to the European Court of Human Rights in Strasbourg, claiming that birching was inhuman and degrading punishment.
The Strasbourg court, when it looked at the case, found, and this is what I think is really important to understand about human rights, that while birching or corporal punishment had been pretty common in the 1950s in Europe when the European Convention on Human Rights was drafted, by the 1970s the Isle of Man was a real outlier, and that in that context it had gone from being something which was quite a normal punishment to something that amounted to a degrading punishment and was then unlawful.
And so that really matters when we're thinking about human rights in the digital world. And that you might hear people saying, ‘well, you know, tech is moving so fast that the law can't keep up.’
Well, human rights law evolves to meet the changing society. You don't have to rewrite human rights. You have to interpret it in light of what's happening in society.
And today, very clearly, what's happening in our societies includes the digital and technological advancements and the AI advancements you were talking about at the start.
Kel: Yeah, and that's such an interesting distinction, I think to look at it that way, rather than think like, oh, you know, yeah, we're not keeping up to, to imagine it as this ever-evolving thing, like, like a plant or a tree, and it's kind of living organism, I guess.
Susie: No, absolutely, and I think, again, one of the narratives that you'll find about laws not keeping up, it's often not about the laws keeping up, it's about the fact that you know, justice moves slowly and access to justice is not, unfortunately, a given. And so you'll see that the interpretation of our laws may be moving more slowly.
That doesn't mean that they don't apply.
Kel: Yeah. Absolutely.
In the book, you talk about the environmental impact of digital technologies, and I think that this is often a blind spot, and I can completely understand why when, you just go to the Apple store, or you just order a new phone online, and you're not really considering, like, where is it coming from?
I think a lot of the time it says designed in California or designed in the USA.
Most people I think are aware that it's made in China, but we don't really think about the consequences deeper than that in terms of, of the impacts.
And I was really struck by, chapter eight of your book entitled Magical Pixie Dust, where you write, quote:
'our obsession with technology as an easy way of making our lives smoother is fed by the fallacy that virtual worlds are somehow greener.
The truth is uglier. If we want a sustainable future on earth, we cannot afford to look away.'
What exactly are we looking away from?
Susie: Well, we're looking away from an awful lot of things, so it's almost not looking away, it's burying our heads in the sand, essentially.
You know, one thing is the question of the supply chain.
You know, the tech that we carry around in our pockets is made up of minerals, chemicals, you know, substances that have ultimately been dragged out of the earth, not just magically appeared in the Apple shop or whatever other tech shop you're going to.
And what we can see is that, you know, the ICT industry involves really horrendous examples of illegal or unregulated mining, particularly in the developing world.
So an awful lot of the vital minerals and components of our technology are being dug out of the ground in unregulated mines in places like the Democratic Republic of Congo, using indentured labour, child labour, you know, people working in incredibly dangerous conditions in ways that are really significantly breaching their rights.
And we'll see as well in terms of the manufacturing of the components of our technology.
You know, when you look at even Silicon Valley, for example, Silicon Valley is one of the most polluted areas in the United States, precisely because it's where the silicon was dug up and processed in the United States.
And there have been reports of really serious health impacts of the manufacturing of tech products, including impacts on reproductive health, which can have consequences not only for the people working in the factories, but for their entire families and for generations.
And then we see also supply chain questions when you look at things like artificial intelligence and the training of artificial intelligence, which we're sort of told is a mechanical and again a slightly magical feat.
But actually things like content moderation, for example, to train generative AI to understand what is and is not acceptable if you like, in human language and in human society.
That work is done by using what are known as click workers, often again in the developing world, to look at horrendous tranches of content scraped off the internet in order to filter it out so that the product that we get to see when we look at a tool like ChatGPT is a kind of sanitised version of what's been taken off the internet.
But another and I think really important thing to bear in mind is that when we're done with our technology, you know, when your phone becomes obsolete, that in itself is an environmental disaster, that the scale that we are using and throwing away technology at - all of that stuff, all of that hardware is going back into the earth, potentially either to be buried in toxic landfill or to be recycled again in an unregulated way often with implications for child rights and for reproductive rights of people working in this environment or living around it.
And then one of the things that is just recently, actually, since the book came out, really showing itself is the power intensity and the water usage of AI itself.
And so while we're being sold this idea that AI, particularly generative AI in the last two years, which has really captured the public imagination, is going to turbocharge productivity, we're not really seeing really great examples of how it's improving productivity, but what it is doing is massively increasing the energy use of the companies that are producing it.
And so we've seen this year companies like Google and Microsoft reporting massive spikes in their energy use.
And similarly, the way that generative AI and AI works is through data processing and massive data processing centres, which is scattered around the world. And we're seeing those data processing centres being built often in areas which are already suffering from drought.
So we've seen problems in places like Spain and also in Latin America, where local communities have been protesting against new builds of data centres, which will effectively take all the water in places where people are struggling to have enough drinking water as it is.
So the environmental consequences are all around sort of from start to finish and everything in between. And so, as I say, it's almost like we're burying our heads in the sand if we don't look at this.
And I think that, you know, just looking at the, the energy usage of for example, AI search over a standard search - I think last year there was an estimate that it was about 10 times more energy intensive to just run a search on an AI search engine instead of using a sort of standard Google search engine.
I don't know this year what that is because it seems like the scale and the intensity of the power usage is escalating as the models themselves escalate.
But it is when you think about it, it's a bit like sort of taking Concorde to the corner shop using an AI search to just, you know, find out.
Kel: It's a great way of putting it, yeah.
Susie: You know, find out where your nearest supermarket is or something, you know, it's really, you know, I think we need to think really, really carefully about what it's for, why we need it and what the costs are.
Kel: Right? Yeah, definitely. I was thinking that with regards to, yeah, the, the energy use and I saw something in the Guardian with regards to Ireland this past couple of weeks and their energy. Yeah. So, so just thinking about the fact that you can just kind of like, I don't know, use ChatGPT, like the free version and anyone can do it just to mess about and not even considering the implications.
Susie: Yeah, writing pointless haikus, it's whatever.
Kel: Yeah, exactly. .Please write me a poem like I'm Rumi’ you know,
Susie: Exactly. About the environment.
Kel: Yeah. Yeah. Yeah. And once we like pull that veil back and we can, we can really see what's going on, we start to look at the reality of it - I'll be honest, I think a lot of people feel overwhelmed.
I think when you see like the cobalt mining, I mean, I recommend anyone just go and YouTube that and see for yourself some of the images and the videos of the conditions of these, these people are living and working and it's, it's. disgusting and disgraceful to see and also I think a lot of people, you know, in the West, especially, were just like, ‘Oh, well, oh, poor things. Oh, that's so sad.’
You know, we've got nothing but pity to offer.
And, you know, I don't think that's helpful for us to feel disempowered either, like we can't really do anything.
So I'm wondering if there's any, like, practical ways that you think that we can actually make a difference.
Like, do you think there's anything we can do to participate to make it more sustainable?
Or is it just a case of like, you know, pulling the veil back and, and discerning for yourself how you can do your best?
Susie: I think on an individual level it's always gonna be a challenge, but you know, I mean, one thing you can do is buy less tech, throw less stuff away, or recycle more.
Kel: Yeah.
Susie: Which, I mean, I know that sounds a bit you know, a bit facile, but you know, there are companies that are trying to make a difference.
So, for example, there's one called Fairphone, where they are trying to design a phone, which is, you know, fair in terms of the way it's made and the supply chains as much as possible, and is also repairable.
And designed to last at least 10 years and for, you know, the different components to be swapped out if they're not working.
So, you know, I think there are companies that are going in that direction.
And we're also starting to see legislation, for example, in the EU about environmental and human rights sustainability more generally in consumer products, but which can apply to tech products.
And I think what we can always do as individuals is, is make ourselves aware of what's happening and also, you know, ask our politicians and our lawmakers to protect people and, you know, tell them what we want, the societies we want to live in because ultimately, you know, regulation can help very much to push industries in different directions.
Kel: Yeah, hopefully. I heard Sam Altman talk in an interview about how he's really big on nuclear, whatever that means. But it's interesting to think about how they're thinking about energy, and just the scale of it - I think that was quite telling, really.
Susie: One of the things I do think is that while I was writing the book, the really big hype cycle on AI, and generative AI in particular, was in full swing.
I'm starting to sense that that might be changing. Instead of being terrified that they're going to be left behind the curve, people are now starting to ask questions about productivity, about environmental impact, about human impact.
And so it may well be that what we've seen in the last year or two we won't be seeing in the next couple of years.
You know, these things can come and go quite fast, and it wasn't so long ago that we were all going to be living our lives in the metaverse.
So things can change quite fast.
Kel: Moving on to women's health apps and the risks of AI in healthcare diagnostics - this is something that's close to my heart personally, as someone who's got stage four endometriosis and has been seeing the rise of these kinds of apps, literally over the past 12 months, I'd say.
There's just been a tsunami of them.
And in your book you talk of the need to look carefully before we rush to incorporate AI into every aspect of our lives. A lot of these apps are claiming to aid health and wellness, or to solve what we see as unsolvable problems.
So I'm wondering what you see as the risks and benefits of using AI-powered apps in health diagnostics and care, particularly for diseases like endometriosis, where there is no cure, no known cause, and a lot of uncertainty.
Susie: Yeah, I mean, I think there are a large number of risks.
And the health space is one where we're often told there are great opportunities for AI to, you know, solve cancer or whatever it is - there's this great push.
I think one of the big risks is that, firstly, AI is lots of different things - it's not just a chatbot or an app on your phone - but those are the things that are easiest to deploy and to sell, and to sell at scale.
One of the things I looked at in my first book in particular was period tracker apps.
And the monitoring of women's wellness is just massive, massive money through apps, and there are several big problems with that.
Some of those apps have run into problems with privacy, where it's been clear that they didn't hold data securely. They're gathering really sensitive data - when you look now at places like the US, data that might indicate that somebody has gone for an abortion or was looking for one, the kind of data which could land you in jail in some places - but also just intimate information about people's sex lives and their health, which might be used against them when it's sold on.
And we see the same in mental health apps.
But aside from the privacy and security questions, one of the big issues with a lot of these apps is whether or not they're actually real. You know, the term 'snake oil' came from the false advertising of snake oil for medicinal purposes, and AI and tech-based snake oil is all around us.
One example of an app that the Federal Trade Commission recently found to be unlawful - because it had absolutely no, or very, very little, evidence to show that it worked - was an app that claimed it could diagnose sexually transmitted diseases from an uploaded dick pic. So effectively, you just take a photo of your prospective partner's penis and it will tell you immediately whether or not they might have an STD.
Unsurprisingly, the Federal Trade Commission found there was very little evidence to demonstrate that that was in any way true, and the app is no longer available.
But those kinds of things are everywhere. It is a real challenge, but going back to that question about tech outstripping the law - the law on false advertising and fraud remains in place.
It might just take a bit longer to uncover or to enforce in these kinds of cases. But I think that's the real problem with tech optimism: saying, wow, we've got all this amazing tech, so we're going to be able to solve all of your health problems, mental and physical.
That's not necessarily true.
Kel: Yeah, and it's all in the wording as well. My partner is a nurse, and she came across something the other day and said, 'You need to check this out.' It's a new website and app - I think it's coming out of the States - where supposedly you can find a cure for endometriosis.
And it's like, it takes years to get a diagnosis, but 'we'll get you an answer in days'. That's amazing, that's great. If only, guys.
Susie: Just upload a selfie.
Kel: Yeah, right, that's it - that sounds great. If only. Why didn't we just think of that earlier? This woman's got it sorted. Anyway, closing out the second-to-last chapter you write, and I think this is just so poignant, 'At some point we need to ask the big questions. What is the point? Where will it take us? Is it worth the cost? Technology is not inherently bad, but it's not a panacea for all our problems.'
So what are the risks of viewing it in such idealistic terms? Because I'll be honest, I think a lot of people are in that place - that's my feeling in society here in Australia, more so than what I'm hearing from my friends in the UK and what they're experiencing.
And I'm thinking about the example you share about Babylon Health, the health startup claiming to be a 'doctor in your pocket' app.
Susie: Yeah, well, Babylon Health was precisely that. It was a sort of online GP service, but it was something, again, that was really taken up around the world, including here in the UK, with, you know, billions [00:29:00] being pumped into it as the future of healthcare - because it means that everyone can see a doctor immediately and get an immediate diagnosis.
But Babylon, despite its success and promise, effectively folded - I think it was last year or two years ago - because ultimately the tech wasn't there. It was all smoke and mirrors. The information it was drawing on had been inputted into an Excel sheet by some real doctors in the back office, but it wasn't capable of adjusting to the complexity and subtlety of real-life patients, who don't necessarily explain themselves or tell you things in the exact terms that have been inputted into the spreadsheet.
So I think there's a real danger of believing, really wanting to believe that we can solve these problems with technology.
And another very famous example [00:30:00] was Theranos, claiming they could do blood tests with a tiny drop of blood from a prick of the finger. Huge hype, the future of healthcare, absolutely revolutionary - and its founders then landed up in jail, because ultimately it wasn't true.
The science wasn't there. Just wanting it to be true is not enough to make it happen. And I think another danger is where you land up with money being pumped into the technology instead of into the people we need in the healthcare system, with this idea that it's going to be cheaper, that it's going to revolutionise everything, that it's going to be super efficient.
And I remember last year I unfortunately had to spend an evening in an emergency room in the UK, staring at a massive screen on the wall which read, 'Your licence has expired, please contact your administrator.' [00:31:00] This screen was on the wall in A&E, lit up with electricity, as a kind of emblem, while the nurses and doctors were running around desperately trying to cope with the scale of a London A&E.
And you're thinking, that really says it all. Someone has bought a licence for something to be on this screen instead of just sticking a piece of paper on the wall, and now the licence has expired and it doesn't work. Maybe the company's gone bust. Who knows?
And I think that's the real problem - thinking about where we put the money. For some things like health - and the law is another area - I think we need to be very careful not to allow ourselves to be conned into believing that we're not going to need people in the future. We need to recognise that these are vitally human professions, if you like, and that technology can help the humans, but it's not [00:32:00] going to replace them.
And we really need to think about whether it works. I think always ask, does it work?
Kel: Yeah, that's it. And that it doesn't fill the void where, you know, doctors and nurses and physios should be, for example. I'm seeing these apps become quite prominent here - in terms of endometriosis, they're being marketed around regional and rural women, women who can't get access to metropolitan clinicians and gynaes, or simply can't travel 500 km.
So it's kind of like, 'Oh well, that'll solve the problems for them.' And it's like, well, telehealth is one thing - that's a great service, and I think it actually revolutionised care in a lot of respects through COVID - but it's not enough, you know, to have that in a chatbot.
Susie: Yeah, absolutely. It's not all or nothing, is the thing - and healthcare costs money.
Kel: Exactly, especially when it's complex, for sure. [00:33:00]
So, looking ahead to the future, what do you see as the biggest battles between human rights and AI, particularly for women, and how do you think we can prepare for them?
Susie: I think it's complicated and, you know, to give a real lawyer's answer, it depends. It partly depends on where you are and who you are, I suppose. I think there are huge battles coming.
There are huge battles already underway. And certainly in Europe, where we have the General Data Protection Regulation and data protection has been very strongly regulated, we're still seeing what data protection regulation means for the protection of human rights more broadly, through cases coming forward.
And we are seeing shifts already. I think one of the big areas for women is the way that women are [00:34:00] portrayed and approached, if you like, in the online environment, because how women are seen in this online space is really affecting how women are seen and treated in the real world.
So, tackling, for example, the systems that underlie the algorithmic pushing of content - whether that's pushing misogynistic content on young men and boys in order to turn them against women, or pushing toxic, self-hating content on young women and girls, affecting their mental health.
I think we really need to change those systems and our information environment, so that it's not a kind of cesspool of radicalisation - which at the moment it is, not just because of individual bits of content, but because of the way people are [00:35:00] profiled and targeted and bombarded with messages that then affect how they think about themselves and how they think about each other.
And in this new wave of generative AI, perhaps unsurprisingly, one of the immediate outcomes has been the surge in deepfake pornography and image-based sexual abuse online against women and girls.
But when you look at that non-consensual deepfake sexual imagery in the UK: a couple of months ago, the previous government announced that they were going to make it illegal as part of the development of the Online Safety Act, and overnight one of the main websites providing those services became unavailable in the UK because of the pending regulatory and legal changes.
So for me, that was a real sign [00:36:00] that changing the law works. Addressing these things very clearly in the law changes the direction of things. It's not going to get rid of image-based sexual abuse and non-consensual deepfake images of women, but it will reduce it, if people realise that it's unacceptable, that it's illegal, that you could go to prison for doing this.
Most people don't want to do that. There are clearly people who will always commit criminal offences, but most people don't want to. And so making it very clear where the legal lines of criminality are will, I think, make a big difference.
And perhaps another area which I feel quite hopeful about is in terms of enforcement.
So one tool that the US Federal Trade Commission has been using, when it finds a product or platform to be acting unlawfully, is what's called algorithmic disgorgement: instead of just having to pay a [00:37:00] fine and carry on with business as usual, the company is ordered to destroy its algorithm, to destroy its model.
And I think that way we will see a shift. It really helps to focus the mind at the stage of developing and researching a product if you understand that, should you get it wrong and step the wrong side of the line, all of that money and your entire business will be destroyed. So that, for me, is another hopeful step in the right direction of making it clear that laws and regulations do apply to technology, and that they will be enforced seriously.
Kel: Yeah. And that definitely brings a beacon of hope and empowers us to realise that, hey, we've got choices and we can make them - it's within our power. And it's important to remember that, because every day these algorithms are shaping our minds, the way we think, [00:38:00] the way we move, in such subtle, nuanced ways.
It crawls up your brainstem. I mean, it's hard, isn't it? It's a battle. So to hear those wins is obviously hopeful. And the insights you've shared, Susie, from your new book, Human Rights, Robot Wrongs - I think they really challenge us to reflect not only on how we use the technology, but on how it shapes every aspect of our lives in return: our relationships, everything we do.
And for everyone looking to equip themselves with the knowledge of how to navigate the complexities of technology - which I recommend you do, especially women; I think a lot of women think it's not their business, to be honest. I came from a tech background, working in a team of about 75 people.
There were just three women, and the guys used to take bets on how long it'd be before you cried in the toilet. So I know what it's like to be in that environment - a very bro-ish environment - and you take such a human-first approach with all of this, [00:39:00] and I'm just so grateful for your work and for you joining me today.
Susie: It's been a pleasure. Thank you so much for having me.
Kel: And that, dear listeners, was my conversation with Susie Alegre.
I really hope you enjoyed listening to this conversation as much as I enjoyed being a part of it. It was truly enlightening to hear Susie's insights, and I really recommend reading both of her books - the generosity of spirit she has just shines through on every page.
To find out more about Susie's work, including where to find both books, please visit the show notes. Until next time, I'm Kel Myers and this is Phoenix Sound.