
For this episode of Embedded, Zain is joined by Rashida Richardson, a law and technology policy expert and Senior Counsel at Mastercard, to discuss some of the ways AI systems reflect the racial and cultural biases we see in society. Rashida shares the policy interventions, governance, and training needed to tackle challenges like "dirty data" in AI development, and offers her insights on how to ensure responsible use. The conversation also explores the current legal landscape around AI, and specifically its implications for civil rights. There are some great insights here you won't want to miss, so give it a listen!
About Our Guest
Professor Rashida Richardson is a law and technology policy expert who researches the social and civil rights implications of artificial intelligence and other data-driven technologies. She currently serves as senior counsel, artificial intelligence, at Mastercard. She has also previously served as attorney advisor to the chair of the Federal Trade Commission and senior policy advisor for data and democracy at the White House Office of Science and Technology Policy. Rashida has worked on AI research, policy, governance, and legal issues in academia, civil rights organisations, government, and industry. She also has extensive experience leading interdisciplinary teams and cross-sector collaborations.
Connect With Rashida
Website: https://www.rashidarichardson.com/
LinkedIn: https://www.linkedin.com/in/rashidarichardson/
Key Highlights
🔄 How societal inequities become amplified through AI systems
⚖️ The challenges of applying existing laws to AI-related issues
🛡️ The importance of proper governance in AI development
💡 Why anti-discrimination laws may not be enough
🌐 The path toward responsible AI development
Episode Timestamps
00:00 - Introduction and welcome
00:44 - Defining algorithmic bias and its sources
02:25 - Civil rights implications of AI
04:23 - Policy interventions and frameworks
06:20 - Corporate governance and internal processes
09:20 - Public awareness and generative AI
12:50 - Legal challenges in AI governance
17:47 - Anti-discrimination laws in the AI era
21:49 - Practical implementation of AI tools
24:51 - Key principles for responsible AI
28:51 - Building effective oversight systems
30:25 - Future outlook and next steps