Giving AI systems autonomy in a military context seems like a bad idea. Of course AI shouldn’t “decide” which targets should be killed or blown up. Except…maybe it’s not so obvious after all. That’s what my guest, Michael Horowitz, formerly of the DoD and now a professor at the University of Pennsylvania, argues. Agree with him or not, he makes a compelling case that we need to take seriously. In fact, you may even conclude with him that using autonomous AI in a military context can be morally superior to having a human pull the trigger.
By Reid Blackman
4.9 · 5454 ratings