AI-based technologies are assuming ever greater importance for militaries around the globe, and governments are investing heavily in them, with the US and China leading the way. After explaining why military planners see such promise in AI, I briefly review the main types of technologies involved. Ethical reflection in this domain has focused heavily on systems designed to replace human decision-makers in battlefield settings. Debates about Lethal Autonomous Weapon Systems (LAWS) – colloquially called “killer robots” – have grown heated, and numerous voices have called for an international ban on their development and use. After reviewing the ethical arguments for and against the deployment of such weapon systems, I consider other military applications of AI that have received far less attention, namely systems intended to augment, rather than replace, human decision-making in battlefield settings. These systems raise ethical challenges of their own, which I discuss in the final part of my presentation. Relying on AI to make life-and-death decisions raises significant issues that should not be ignored, and the attendant risks need to be better understood.