Response-ability.Tech
By Dawn Walter
The podcast currently has 43 episodes available.
In this episode, we're in conversation with feminist scholar and activist, Radhika Radhakrishnan. Radhika is a PhD student at the Massachusetts Institute of Technology (MIT) in the HASTS (History, Anthropology, Science, Technology & Society) programme. This programme uses methods from history and anthropology to study how science and technology shape – and are shaped by – the world we live in.
Trained in Gender Studies and Computer Science engineering in India, Radhika has worked for over five years with civil society organisations to study the intersections of gender justice and digital technologies using feminist, qualitative research methodologies.
Her research focuses on understanding the challenges faced by gender-minoritized communities with emerging digital technologies in India and finding entry points to intervene meaningfully. Her scholarship has spanned the domains of Artificial Intelligence, data governance pertaining to surveillance technologies and health data, and feminist Internets, among others.
Radhika shares with us what she'll be researching for her PhD and why she moved away from computer science to social science.
In 2021, Radhika’s paper, “Experiments with Social Good: Feminist Critiques of Artificial Intelligence in Healthcare in India”, was published in the journal Catalyst. We explore her findings, as well as why she was drawn to artificial intelligence in healthcare.
We also discuss her experiences of studying up (see Nader 1972) as a female researcher, the challenges this poses, and some of the strategies she used to overcome them.
Lastly, Radhika recommends Annihilation of Caste by B. R. Ambedkar, and explains why it's important that we openly discuss caste. (Check out this article in WIRED about caste in Silicon Valley.)
Follow Radhika on Twitter @so_radhikal, and connect with her on LinkedIn. Check out her website, and read her blog on Medium.
Our guest today is Susie Alegre. Susie is an international human rights lawyer and author. We're in conversation about her book, Freedom To Think: The Long Struggle to Liberate Our Minds (Atlantic Books, 2022). Susie talks about freedom of thought in the context of our digital age, human rights, surveillance capitalism, emotional AI, and AI ethics.
Susie explains why she wrote the book and why she thinks our freedom of thought is important in terms of our human rights in the digital age. We explore what freedom of thought is ("some people talk about it as mental privacy") and the difference between an absolute right and a qualified right, and why absolute rights are protected differently.
Susie shares some historical examples including witch trials as well as the work of Ewen Cameron, a Scottish psychiatrist in Canada, who experimented on ordinary people without their consent to explore ways to control the human mind.
Susie describes facial recognition technology as a modern attempt to get inside our heads and predict such things as our sexual orientation, and explains why researchers shouldn’t be experimenting with facial recognition or emotional AI: you’re “effectively opening Pandora’s box”.
Susie also explains the difference between targeted advertising and surveillance advertising, which uses data captured about our inner lives, sold and auctioned on an open market, to manipulate us as individuals.
Over the past few years there’s been a great deal of focus on ethics, and Susie suggests we need to move away from the discussion of ethics “back to the law, specifically human rights law”. She explains that human rights law is being constantly eroded, and says “one way of reducing the currency of human rights law is refocusing on ethics”. Ethics, she argues, are simply a “good marketing tool” used by companies.
The inferences being made about us, the data profiling, and the manipulation mean it’s practically impossible to avoid leaving traces of ourselves; it’s beyond our personal control, and privacy settings don’t help. In her book, Susie suggests that by looking at digital rights (data and privacy protection) in terms of freedom of thought, "the solutions become simpler and more radical".
It’s a point that Mary Fitzgerald, in her review of Susie’s book in the Financial Times, suggested was a "unique contribution" to the debates about freedoms in the digital age, and that "reframing data privacy as our right to inner freedom of thought" might capture "the popular imagination" in a way that other initiatives like GDPR have failed to do. Susie explains for us how this approach would work.
Follow Susie on Twitter @susie_alegre, and check out her website susiealegre.com.
Read the full transcript.
Read the conversation as a web article.
Watch the interview on our YouTube channel.
Our guest today is Professor Veronica Barassi. Veronica is an anthropologist and author of Child Data Citizen (MIT Press, 2020).
Veronica campaigns and writes about the impact of data technologies and artificial intelligence on human rights and democracy. As a mother, Veronica was becoming increasingly concerned about the data being collected on her two children by digital platforms. Her research resulted in the book as well as a TED talk, What tech companies know about your kids, that’s had over 2 million views.
Since the publication of her book, she says there's been a huge acceleration in the datafication of children, partly due to the pandemic, and an increase in the ways in which AI technologies are being used to profile people.
Veronica explores what she believes anthropology uniquely brings to the study of data technologies and AI. She asks (and answers), “why would an anthropological approach be different from say, for instance, Virginia Eubanks, who uses ethnographic methodologies and has a real context-specific understanding of what's happening on the ground?”
Turning to anthropology’s (late) engagement with AI, data, and algorithms, she says it used to be a niche area of research. But “we’ve actually seen a reality check for anthropologists because these technologies are…involved in deeply problematic and hierarchical processes of meaning-construction and power-making that there's no way that anthropologists could shy away from this”.
One of the best books “that really makes us see things for what they are” in “this current time we’re living in”, she says, is David Graeber’s The Utopia of Rules: On Technology, Stupidity, and the Secret Joys of Bureaucracy. Graeber “talks about how bureaucracy is actually there to construct social truth, but this type of bureaucratic work has been now replaced by algorithms and artificial intelligence”, a connection she tries to make in her article, David Graeber, Bureaucratic Violence and the Critique of Surveillance Capitalism.
We discuss how anthropologists can make their work both academically rigorous and accessible to the public, and she talks about her own personal experience of doing the TED talk and how she felt a responsibility to bring the topic of child datafication to a wider audience, campaigning, and raising awareness.
Veronica provokes anthropology scholars with a call to action, given that one of her “major critiques of anthropology” is that anthropologists often shy away from engaging theoretically with disciplines that do not share their approach. And what does it mean when we say research is “not anthropological enough”?
Lastly, Veronica suggests that, given machines must be taught basic concepts, like what is a child (“as anthropologists, we know that these concepts are so complex, so culturally specific, so biased”), what anthropology can do is “highlight the way in which these technologies are always inevitably going to get and be biased”. She ends on a note of excitement: “We're going to see such great research emerging in the next few years. I'm actually looking forward to that”.
Follow Veronica on Twitter @veronicabarassi.
Read an edited version of our conversation together with reading list.
Our guest today is Laura Musgrave. Laura was named one of 100 Brilliant Women in AI Ethics™ for 2022. Laura is a digital anthropology and user experience (UX) researcher. Her research specialism is artificial intelligence, particularly data and privacy.
Laura gave a short talk at our inaugural conference in 2019 on privacy and convenience in the use of AI smart speakers, and at the 2021 event she chaired the panel, Data: Privacy and Responsibility.
We start our conversation by exploring Laura’s interest in data and privacy, and smart assistants in particular. During her research on smart speaker use in homes, she's noticed a shift in people’s attitudes and a growing public awareness around privacy and technology, and the use of AI. This shift, she feels, has been aided by documentaries like The Social Dilemma (despite well-founded criticisms such as this article by Ivana Bartoletti in the Huffington Post) and Coded Bias.
Laura talks about where the responsibility of privacy lies — with the technology companies, with the users, with the regulators — and that as a user researcher, she has a part to play in helping people understand what’s happening with their data.
I ask Laura what drew her to anthropology and how she thinks the research methods and lens of anthropology can be used to design responsible AI. She says, "The user researchers that really stood out to me very early on in my career were the anthropologists and ethnographers" because "the way that they looked at things…really showed a deep understanding of human behaviour". It "set the bar" for her, she explains, and she wanted to know: “How do I do what they do?”
Laura shares the book she’d recommend to user researchers, like her, who are starting out on their ethnographic journey, a book which helped her “make sense of how ethnography fitted into my everyday work”.
Because Laura’s been named one of the 100 Brilliant Women in AI Ethics™ for 2022, I ask her to share what the AI ethics landscape, with respect to data and privacy, looks like for 2022. As she explains, “in some senses it is much the same as last year but it's also a constantly developing space and there are constantly new initiatives” before sharing some of the key themes she thinks we are likely to see in 2022.
Lastly, Laura recommends two books, both published by Meatspace Press: Fake AI, and Data Justice and Covid-19: Global Perspectives. (The former we picked for our 2021 Recommended Reads and the latter for our 2020 Recommended Reads.)
You can connect with Laura on LinkedIn and on Twitter @lmusgrave.
Read an edited version of our conversation which you can read online and also download as a PDF.
Our guest today is Dr Rosie Webster. Rosie has a PhD and an MSc in health psychology. She’s currently Science Lead for Zinc’s venture builder programme. Prior to Zinc, Rosie worked as a UX researcher at digital health company, Zava, and was Lead User Researcher at Babylon Health.
While at Babylon, Rosie established the foundations of an effective Behavioural Science practice, which is partly what we’re here to talk about today.
Rosie explains that if businesses are interested in delivering impact and making a difference, then social science can be really key. She says that research, in similar ways to design, is often underestimated and under-utilised in tech. Our power, she says, lies in understanding the problem and what the right thing to build is. This is a truly user-centred approach that requires trusting in the process and being willing to scrap an idea when the research points in a different direction.
Often people don’t know what social science is, says Rosie, and equate it with academic research, along with the erroneous perception that it’s slow, when in fact it provides answers much more quickly.
Rosie explains how she established the beginnings of a behavioural science practice at Babylon Health, with the support of two managers who understood its value and importance. She shares why she wanted to ‘democratise’ behavioural research, the benefits of that approach, and how she ‘marketed and sold’ behavioural science within the company.
User research should utilise the existing academic literature more, “building on the shoulders of giants”, as Rosie calls it, “supercharging” primary research, and using evidence to understand what the solution might be. It’s an approach she says results in understanding people deeply, while increasing impact and reducing risk, and without slowing down the fast-paced product development environment.
As our conversation draws to an end, Rosie has a final piece of advice for businesses that are genuinely open to achieving impactful outcomes, and recommends two books for people who are looking to bring behavioural science into their work: Engaged by Amy Bucher, and Designing for Behaviour Change by Stephen Wendel.
Follow Rosie on Twitter @DrRosieW, and connect with her on LinkedIn.
Read an edited version of our conversation which you can read online and also download as a PDF.
My guest today is Dr Corinne Cath-Speth. Corinne is a cultural anthropologist whose research focuses on Internet infrastructure politics, engineering cultures, and technology policy and governance.
Corinne recently completed their PhD, titled Changing Minds & Machines, at the Oxford Internet Institute (OII). It was an ethnographic study of internet governance, and of the culture(s) and politics of internet infrastructure, standardization, and civil society.
Drawing on their research, Corinne gave a talk as part of an event series hosted by the Oxford Internet Institute which explored the opaque companies and technologists who exercise significant but rarely questioned power over the Internet. As Corinne said during their talk, this mostly unknown aspect of the Internet is “as important as platform accountability".
I invited Corinne onto the show to tell us more.
Using the Fastly incident in June, Corinne explains who and what these largely invisible, powerful Internet infrastructure companies are and how an outage can have a “large impact on the entirety of our online ecosystem”. The incident shows “how power is enacted through the functioning and maintenance of Internet infrastructure design.” Corinne goes on to say that “just because the Internet infrastructure is largely invisible to users doesn't mean that it's apolitical [in the case of Cloudflare and 8chan in particular] and it doesn't mean that these companies can claim neutrality”.
Corinne talks about their PhD dissertation and says, “I was really interested in understanding how the engineering cultures of infrastructure organizations influence what but also whose values end up steering technical discussions”. Their fieldwork was conducted in an organization called the Internet Engineering Taskforce (IETF). (Corinne brilliantly summarised their PhD in a series of tweets.)
Corinne explains what drew them to research this particular topic and notes that “it is so important to get at the personal drivers of our research and being really upfront and explicit about how those are [a] key part of our research practice and the kind of decisions that we end up making.”
Corinne shares why they believe cultural anthropology is relevant “to questions of Internet infrastructure of politics and power”, saying “I believe that anthropology really can provide new, novel perspectives on current Internet infrastructure dilemmas, including those related to the connections between cultures and code.”
While there’s rightly concern about platform accountability or the power of tech companies, what many people don’t realise is that companies like Meta and Amazon are also infrastructure companies. We need to ask ourselves, says Corinne, “how comfortable we are with the fact that a handful of companies are starting to influence huge parts of the entire Internet”.
Corinne “really wants to encourage people” to study aspects of the Internet “because the last thing we want” is for a small number of companies to have “a say over many parts of our lives….And us not understanding how it happened”.
Lastly, Corinne says, “what we need is a balanced and well-resourced counter-power to the influence of corporate actors that are steering the future of the Internet”.
Further reading
Corinne has kindly supplied a list of resources and reading that they mentioned in the podcast.
Our guest today is Matt Artz. Matt is a business and design anthropologist, consultant, author, speaker, and creator. As a creator he creates podcasts, music, and visual art. Many people will know Matt through his Anthropology in Business and Anthro to UX podcasts.
We talk about his interdisciplinary educational background — he has degrees in Computer Information Systems, Biotechnology, Finance and Management Information Systems, and Applied Anthropology — and Matt explains what drew him along this path.
He shares his recent realisation that he identifies primarily as a technologist ("I am still at heart a technologist. I love technology. I love playing with technology") and his conflict around the "harm that comes out of some AI, but I'm also really interested in it and to some degree kind of helping to fuel the rise of it."
This leads to us discussing — in the context of recommender systems and Google more broadly — how we are forced to identify on the internet as one thing or another, either an anthropologist, a technologist, or a creator but not all three. As Matt explains, "finding an ideal way to brand yourself on the Internet is actually very critical...it's a real challenge".
We turn next to recommender systems and his interest in how capital and algorithmic bias contribute to inequality in the creator economy, an interest grounded in his art market research as Head of Product & Experience for Artmatcher, a mobile app that aims to address access and inclusion issues in the art market.
The work being done on Artmatcher may lead to innovations in the way the approximately 50 million people worldwide in the creator economy get noticed in our "technologically-mediated world", as well as in other multi-sided markets (e.g. Uber, Airbnb) where there are multiple players. It's a model he hopes will ensure that people's "hard work really contributes to their own success".
Design anthropology is one approach to solving this challenge, Matt suggests, because it is "very interventionist, very much focused on what are we going to do to enact some kind of positive change".
As Matt says, "even if this [model] doesn't work, I do feel there's some value in just having the conversation about how can we value human behaviour and reward people for productive effort and how can we factor that back into the broader conversation of responsible tech or responsible AI?".
He recommends two books, Design Anthropology: Theory and Practice, edited by Wendy Gunn, Ton Otto, Rachel Charlotte Smith, and Media, Anthropology and Public Engagement, edited by Sarah Pink and Simone Abram.
Lastly, Matt leaves us with a hopeful note about what we can do in the face of "really hard challenges" such as climate change.
You can find Matt on his website, follow him on Twitter @MattArtzAnthro, and connect with him on LinkedIn.
Our guest today is Dr Nat Kendall-Taylor. Nat received his PhD in Anthropology at UCLA and in 2008 he joined the FrameWorks Institute, a non-profit research organisation in Washington, D.C., where he is now the CEO.
FrameWorks uses rigorous social science methods to study how people understand complex social issues such as climate change, justice reform, and the impact of poverty on early childhood development. It develops evidence-based techniques that help researchers, advocates, and practitioners explain them more effectively.
Nat explains what drew him from pre-med to anthropology. He did his PhD at UCLA because of the Anthropology department's "unapologetic focus on applied anthropology". His fieldwork in Kenya on children with seizure disorders explored the question of why so few sought biomedical treatment. His experience there, working with public health officials and others, demonstrated the value of understanding culture, the importance of multi-modal transdisciplinary perspectives, and the often "counterintuitive and frequently frustrating nature of communications when you're trying to do this kind of cross-cultural work".
For the past 18 months, FrameWorks has worked on how to frame and communicate the social impacts of artificial intelligence. The project came to FrameWorks through their long-term collaboration with the MacArthur Foundation when it became clear that some of their grantees "had been having a lot of difficulty advancing their ideas" about algorithmic justice to the general public. The project has explored "the cultural models, the deep patterns of reasoning that either make it hard for people to appreciate the social implications" of AI as well as how to allow people to "engage with the issue in helpful and meaningful ways". The report will be publicly available on the FrameWorks website.
As Nat explains, if the public "doesn't understand what the thing is [artificial intelligence] that you are claiming has pernicious negative impacts on certain groups of people, then it becomes very hard to have a meaningful conversation about what those are, who is affected". This is compounded when "people don't really have a sense what structural or systemic racism means outside of a few issues, how that might work and what the outcomes of that might be."
Nat says their work "suggests that it is a responsibility, it's an obligation, for those who understand how these things work to bring the public along, and to deepen people's understanding of how [for example] using algorithms to make resourcing decisions...can be seriously problematic".
Nat recommends three books (Metaphors We Live By, Finding Culture in Talk, and Cultural Models in Language and Thought) and ends with a call for more anthropologists to work outside the academy where they can also do impactful work.
Read an edited excerpt [PDF] of this interview.
You can follow Nat on Twitter at @natkendallt and connect with him on LinkedIn. FrameWorks are on Twitter @FrameWorksInst.
Update: FrameWorks published “Communicating About the Social Implications of AI: A FrameWorks Strategic Brief”.
Our guest today is Dr Johannes Lenhard. Johannes received his PhD in Anthropology at Cambridge University and in 2017 started a post-doctoral research project on the ethics of venture capital investors at the Max Planck Cambridge Centre for Ethics, Economy and Social Change.
Johannes spoke at the 2021 Response-ability Summit.
He shares what drew him to studying venture capitalists and how he does ethnography in this very closed, elite world across various field sites including Silicon Valley and London. Johannes explains that "not a single book" has been written about venture capitalists by someone who isn't one. As he says, "only an engaged anthropology" can enable someone to be both insider and outsider in this rarefied world.
Johannes explains the impact of the lack of diversity in venture capital: not only are VCs hiring people who look like them (white, male), but they "also reproduce themselves into who runs these tech companies". The issue of venture funding is explored by Johannes and Erika Brodnock in their book, Better Venture, which will be published later in 2021.
Johannes also briefly discusses Environmental, Social and Corporate Governance (ESG) metrics, which are starting to affect VCs, and the “aggregate confusion” identified by an MIT paper.
Johannes believes more scrutiny into venture capital investors is needed, saying "they are the ones deciding the big tech companies in the next 10-15 years....scrutinizing them now has an impact on everything in the future. They are the kingmakers, and we've been solely focussing on the kings, the Mark Zuckerbergs and the Jeff Bezos of this world".
Scrutiny, he explains, will benefit both society and the VCs themselves.
Drawing on his Medium post, "The Ultimate Primer on Venture Capital and Silicon Valley", Johannes shares his top reading picks for anyone eager to learn more: Doing Capitalism in the Innovation Economy by William Janeway; The Code by Margaret O’Mara; VC: An American History by Tom Nicholas; and a paper, "How Do Venture Capitalists Make Decisions?".
And lastly, Johannes explains why more academics "of any kind" are needed to study the world of venture capital investors.
You can follow Johannes on Twitter at @JFLenhard and connect with him on LinkedIn.
Academics and articles also mentioned in our conversation:
Our guest today is Lianne Potter. Lianne is an anthropologist, self-taught software developer, cyber security evangelist, and entrepreneur. Lianne works at Covea Insurance as their Information Security Transformation Manager where she advocates for innovation in the cyber security field.
Lianne's talk at the 2021 Response-ability Summit was titled, "Reciprocity: Why The Cyber Security Industry Needs to Hire More Anthropologists".
In this episode Lianne is in conversation with Isabelle Cotton, a digital anthropologist and social researcher, who was curious to interview Lianne for us. As Isabelle explains, "I was interested to talk to Lianne, who uses anthropology to humanise cybercrime. I find her acute awareness of the digital divide in all of the work she does particularly powerful. She has managed to carve out a space for anthropology in an industry that favours faceless data and numbers".
During their conversation Lianne explains why she's so passionate about the digital divide and why she believes a people-based, behavioural approach to cybersecurity is so important. Lianne also explains why the technical terms used in the industry can be off-putting to many general users and why she believes storytelling is a way to raise awareness and increase engagement.
Isabelle and Lianne also explore biometric security, two-factor authentication, and the 'culture' of hacking. Lastly, Lianne shares some advice for anthropologists looking to get into cybersecurity and tech more generally.
Follow Lianne on Twitter at @Tech_Soapbox and connect with her on LinkedIn.
Connect with Isabelle on LinkedIn and check out her website.