Intelligent machines can perform tasks usually done by humans, but should they make life-and-death decisions? Lethal autonomous weapons will be deployed on the battlefields of the future unless a campaign to ban them succeeds. Should robots be asked to make moral decisions?