

Giving AI systems autonomy in a military context seems like a bad idea. Of course AI shouldn't "decide" which targets should be killed or blown up. Except…maybe it's not so obvious after all. That's what my guest, Michael Horowitz, formerly of the DOD and now a professor at the University of Pennsylvania, argues. Agree with him or not, he makes a compelling case we need to take seriously. In fact, you may even conclude with him that using autonomous AI in a military context can be morally superior to having a human pull the trigger.
By Reid Blackman · 4.9 (5454 ratings)