Giving AI systems autonomy in a military context seems like a bad idea. Of course AI shouldn't "decide" which targets should be killed or blown up. Except…maybe it's not so obvious after all. That's what my guest, Michael Horowitz, formerly of the DOD and now a professor at the University of Pennsylvania, argues. Agree with him or not, he makes a compelling case that we need to take seriously. In fact, you may even conclude, as he does, that using autonomous AI in a military context can be morally superior to having a human pull the trigger.
By Reid Blackman · 4.9 · 5353 ratings
