Are we ready to let AI drive humanitarian solutions, or are we rushing toward an ethical disaster? In this episode of Humanitarian Frontiers in AI, host Chris Hoffman is joined by AI experts Emily Springer, Mala Kumar, and Suzy Madigan to tackle the pressing question of accountability when AI systems cause harm, and how to ensure that AI truly serves those who need it most. Together, they discuss the difference between AI ethics and responsible AI, the dangers of rushing AI pilots, the importance of AI literacy, and the need for inclusive, participatory AI systems that prioritize community wellbeing over box-ticking for compliance. Emily, Mala, and Suzy also emphasize the importance of collaboration with the Global South and address the funding gaps that typically hinder progress. The panel argues that slowing down is crucial for building the infrastructure, governance, and ethical frameworks needed to ensure AI delivers a sustainable and equitable impact. Tune in for a thought-provoking conversation on balancing innovation with responsibility and shaping AI into a force for good in humanitarian action!
Key Points From This Episode:
Links Mentioned in Today’s Episode:
Emily Springer on LinkedIn
Emily Springer Advisory
The Inclusive AI Lab by Emily Springer
Mala Kumar
Mala Kumar on LinkedIn
ML Commons
Suzy Madigan on LinkedIn
Suzy Madigan on X
The Machine Race by Suzy Madigan
FCDO Call for Humanitarian Action and Responsible AI Research
ML Commons AI Safety Benchmark
‘Collective Constitutional AI: Aligning a Language Model with Public Input’
Nasim Motalebi
Nasim Motalebi on LinkedIn
Chris Hoffman on LinkedIn