
Every time a new technology that collects, stores, and analyzes our data is released to the world or permitted a new role, we are promised that it will work as intended and won't cause undue harm. But writer, professor, and speaker Dr. Chris Gilliard has found that this is rarely how these stories actually end.
In this discussion with Senior Fellow Arthur Holland Michel, Dr. Gilliard explains why the arc of surveillance technology and novel "artificial intelligence" bends toward failures that disproportionately hurt society's most vulnerable groups, what this means for our notions of "responsible tech" and "AI ethics," and what we can do about it moving forward.
For more, please go to carnegiecouncil.org.
By Carnegie Council for Ethics in International Affairs · 4.4 (59 ratings)
