Giving AI systems autonomy in a military context seems like a bad idea. Of course AI shouldn't "decide" which targets should be killed or blown up. Except…maybe it's not so obvious after all. That's what my guest, Michael Horowitz, formerly of the DOD and now a professor at the University of Pennsylvania, argues. Agree with him or not, he makes a compelling case we need to take seriously. In fact, you may even conclude with him that using autonomous AI in a military context can be morally superior to having a human pull the trigger.
By Reid Blackman · 4.9 (54 ratings)