We're joined by the US Science Envoy for AI, Dr. Rumman Chowdhury, a leading expert in responsible AI. We uncover the ethical, technical, and societal implications of artificial intelligence.
As AI rapidly reshapes the world, the question is: what happens when it doesn’t align with human values? How do we navigate the risks of bias, misinformation, and hallucination in AI systems?
Dr. Chowdhury has been at the forefront of AI governance, red teaming, and AI risk mitigation. She has worked with global institutions, governments, and tech companies to make AI more accountable, safe, and equitable.
From her time leading Twitter’s (now X) Machine Learning Ethics, Transparency, and Accountability (META) team to founding Humane Intelligence, she has actively shaped policies that determine how AI interacts with human society.
We dive deep into:
- AI bias, disinformation, and manipulation: How AI models inherit human biases and what we can do about it.
- Hallucinations in AI: Why generative AI models fabricate information and why it’s not a bug but a feature.
- AI governance and regulation: Why unchecked AI development is dangerous, and the urgent need for independent audits.
- The risks of OpenAI, Meta, and big tech dominance: Who is really in control of AI, and how can we ensure fair oversight?
- How companies should approach AI ethics: Practical strategies businesses can use to prevent harm while innovating responsibly.
Key Takeaways from the Episode:
1. AI as a Tool, Not a Mind
2. Why AI Hallucinations Are Unavoidable
3. The Hidden Biases in AI Models
4. The Illusion of AI Objectivity
5. The Need for AI Red Teaming & Auditing
6. OpenAI and the Power Problem
7. Why AI Needs More Public Oversight
8. The Role of Governments vs. Private AI Firms
Timestamps:
(00:00) - Introduction to Dr. Rumman Chowdhury and AI ethics
(03:03) - Why AI is just a tool (and how it’s being misused)
(04:58) - The difference between machine learning, deep learning, and generative AI
(07:43) - Why AI hallucinations will never fully go away
(11:46) - AI misinformation and the challenge of verifying truth
(13:26) - The ethical risks of OpenAI and Meta’s control over AI
(18:20) - The role of red teaming in stress-testing AI models
(30:26) - Should AI be treated as a public utility?
(35:43) - Government vs. private AI oversight—who should regulate AI?
(37:22) - The case for third-party AI audits
(53:51) - The future of AI governance and accountability
(1:01:03) - Closing thoughts and how AI can be a force for good
Join us in this deep dive into the world of AI ethics, accountability, and governance with one of the field’s top leaders.
Follow our host (@iwaheedo) for more insights on technology, civilization, and the future of AI.