How do biased large language models (LLMs) impact women users in India, and what can be done to make these systems fairer and more inclusive?
In Episode 01 of Season 3 of the CCG Tech Podcast, Shashank and Tejaswita speak to Aarushi Gupta about how LLMs are increasingly being used across sectors such as education, healthcare, and agriculture. As these tools become default sources of knowledge and assistance, the conversation turns to the gendered consequences of relying on AI systems that may replicate and reinforce existing inequalities.
The discussion unpacks the manifestation of gender bias across various stages of the LLM lifecycle, from training data and model development to real-world deployment. It highlights how such biases can restrict access to accurate information, reinforce discriminatory norms, and compromise user safety. The episode also considers possible mitigation strategies and identifies concrete steps that developers, policymakers, and other stakeholders can take to promote fairness, accountability, and inclusivity in the design and deployment of AI systems.
Aarushi Gupta is a Senior Research Manager at Digital Futures Lab. With expertise in AI ethics, gender relations, and digital governance, she spearheads key projects at DFL, bridging theoretical and applied research. Her recent work delves into gender biases in large language models designed for Indian languages, with a focus on critical social sectors such as healthcare and agriculture.
Resources:
A Primer on Mitigating Gender Biases in LLMs: Insights from the Indian Context
Hosts: Shashank Mohan, Tejaswita Kharel
Editor: Gopika P
Fact Checkers: Sukriti, Rahul Jayaraman
This podcast is created by the Centre for Communication Governance at NLUD. Reach out with any queries or suggestions at [email protected]
(The opinions expressed in the episode are personal to the speaker. The University does not subscribe to the views expressed in the episode and does not take any responsibility for the same.)