
Every time a new technology that collects, stores, and analyzes our data is released to the world or permitted a new role, we are promised that it will work as intended and won't cause undue harm. But writer, professor, and speaker Dr. Chris Gilliard has found that this is rarely how these stories actually end.
In this discussion with Senior Fellow Arthur Holland Michel, Dr. Gilliard explains why the arc of surveillance technology and novel "artificial intelligence" bends toward failures that disproportionately hurt society's most vulnerable groups, what this means for our notions of "responsible tech" and "AI ethics," and what we can do about it moving forward.
For more, please go to carnegiecouncil.org.
By Carnegie Council for Ethics in International Affairs · 4.4 (5959 ratings)