Killer robots: weapons out of human control?

Richard Moyes, Article 36

These issues will be discussed at our first cafe discussion event on July 8th; see also http://www.luddites200.org.uk/Drones.html

The Luddite uprisings two hundred years ago highlighted the social and economic impacts of changing technologies of production, and the capacity for technologies to serve as a focal point for social protest.  Recent decades have seen successful international civil society mobilisations focused on the harmful effects of certain weapons – leading to treaties banning anti-personnel landmines and cluster munitions.  Now there is a Campaign to Stop Killer Robots – calling for a ban on weapons that can kill without direct human control.

The growing impetus behind, and technological capacity for, weapon systems with greater autonomy is one of the emerging challenges in the international control of weapons.  We already see greater autonomy in the use of tele-operated drones – but while current systems are still directly controlled by a pilot (albeit one sitting many miles away), there are plans afoot that could take the human ‘out of the loop’ altogether.

Once deployed on a mission, these so-called ‘fully autonomous’ weapons would have the capacity to select and engage targets without further human intervention.  In the military planning documents of certain governments, in the greater degrees of non-lethal autonomy being given to existing systems, and in the automatic targeting already used in certain deployed weapons (such as the Harpy anti-radar weapon, the Phalanx ship-defence system and sensor-fuzed weapons), non-governmental organisations like Article 36 see the warning signs of a situation where machines are given the power to decide who to kill.

In April 2013, the Campaign to Stop Killer Robots was launched in London to stop this from happening.  Calling for a ban on fully autonomous weapons, the campaign is made up of non-governmental organisations from different countries and different backgrounds working together towards a common goal.  NGOs are not the only actors concerned about this issue.  Also in April, the UN’s Special Rapporteur on extrajudicial, summary or arbitrary executions, Christof Heyns, released a report to governments at the Human Rights Council in which he called on states to enact national-level ‘moratoria’ – to freeze the development of such weapons and allow time for international discussion of the moral challenges they present.

The UK was the only state in the Human Rights Council debate on this report to speak strongly against its findings – rejecting the call for a moratorium and arguing that existing international law is sufficient to control these weapons.  However, in the UK’s House of Commons on 17 June, the government was rather more progressive.  Foreign Office Minister Alistair Burt noted that, on the UK Government’s interpretation, existing international law applicable to weapons would prevent the development of fully autonomous weapons:

“As I had the chance to read the hon. Lady’s speech before the debate, I noticed that she used the phrase ‘Furthermore, robots may never be able to meet the requirements of international humanitarian law’. She is absolutely correct; they will not. We cannot develop systems that would breach international humanitarian law, which is why we are not engaged in the development of such systems and why we believe that the existing systems of international law should prevent their development.”

In recognising that fully autonomous weapons “will not” be able to meet the requirements of international humanitarian law (certain analysts argue the opposite), the government has erected a significant barrier to the development of such systems.  Furthermore, in a previous statement to parliament, on 26 March 2013, the government asserted that “the operation of weapons systems will always be under human control.”

This position from the UK offers some grounds for optimism, but the key challenge now is to press the government to delineate what level of “human control” is considered adequate to ensure weapons meet our moral, legal and policy standards.  The UK already has weapons where the final targeting decision is made by a computer (albeit in very narrow circumstances), so this explanation of what constitutes sufficient human control is a pressing issue.

Article 36 (the NGO) takes its name from article 36 of Additional Protocol I of 1977 to the Geneva Conventions, a legal article that places an obligation on states to review new weapons, means and methods of warfare to ensure that they meet legal obligations.  Bound by this law, the UK undertakes such reviews at a national level, albeit in secret.  If these reviews are to be effective, those conducting them need a detailed account of the level of human control necessary for a weapon to accord with UK policy.  It is this delineation of what constitutes ‘meaningful human control’ that we are now calling on the UK to make public, in order to ensure that its rejection of fully autonomous weapons is watertight.

The way in which technologies reshape relationships between people and institutions is profound.  In the case of weapons, changes in technology and in the distribution of technologies continue to recalibrate and restructure how we, as a broad human society, think about and organise the practice of killing each other.  For organisations in the Campaign to Stop Killer Robots, allowing machines to select who lives and dies in a conflict environment crosses a fundamental moral line.  How we delineate the level of human control necessary in relation to individual weapon systems, and individual attacks, will be revealing of how we ensure control over weapons in our society more broadly.


Posts on this blog represent the views of their authors, not of Breaking the Frame, unless otherwise noted.


4 Responses to Killer robots: weapons out of human control?

  1. David King says:

    Hi Richard, I’m looking forward to discussing these issues with you next Monday. Doctor Strangelove doesn’t seem so funny now, does it? It is absolutely extraordinary that one of the first applications of autonomous robotics is not hoover robots but killer robots (hoover robots exist, but they don’t decide when to do the hoovering). One would have thought that the advocates of autonomous robots would have started with something more harmless, in order to reassure us, but there is definitely a logic to robot soldiers. As Gordon Johnson of the Joint Forces Command at the Pentagon (who has clearly modelled his quote on those famous lines from the first Terminator film) said: “They don’t get hungry. They’re not afraid. They don’t forget their orders. They don’t care if the guy next to them has just been shot. Will they do a better job than humans? Yes.” So much for Asimov’s 3 laws of robotics.

    I think this shows that technocracy, especially in the military sphere, is about a complete absence of human values and a drive for absolute domination. In fact, you can even see the history of the last 400 years not as a story of humans designing technology for our own (often extremely harmful) purposes, but as a story of an ongoing process of competition for existence between humans and machines, one that the machines are gradually winning. The only criterion for success in this competition is which has the better performance, which is the most efficient. Of course, that is what films like the Terminator series are about, and it’s easy to argue that talk of such abstract processes is paranoid and ahistorical: whatever happens is due to the thoughts and decisions of human beings. That’s true at the moment, but examples like this show that, in the minds of certain human beings (i.e. the immensely powerful technocrats who dream up ideas like this and then try to implement them), those criteria are the only things that matter. And it’s not just a few Edward Tellers; we can see this becoming a popular movement under the banner of trans- or post-humanism, which hopes to upload our brains into computers and lose the body altogether. For those people, the idea that human beings would want to have human bodies is just sentimentality, an attachment to something that is now obsolete. Softer versions of this technophile ideology – the trendy fetish of human enhancement through surgery, drugs and electronic implants, and wishing to be a cyborg – are even more widespread.

    A final point: I don’t want to exaggerate this idea of a struggle for existence between humans and machines. But when we consider that a crucial cause of the financial crisis was the handing over of stock market trading decisions to computer algorithms that can process trades in microseconds, and a reliance on putting far greater amounts of money than are reflected in the real economy into computer-generated financial derivatives that no human being can understand, it does make you wonder who or what is really in charge.

  2. HuddsLudds says:

    Re Dave’s comment. The problem of technology is certainly an evolutionary one. But it’s not just competition between Homo sapiens and machines – it’s about the evolutionary trajectory of humans. Do we develop our human characteristics (hopefully for the better), or do we evolve increasingly as appendages of machines, on which we are reliant and which dominate our behaviour and relations with each other?

  3. HuddsLudds says:

    One thing I would like guidance on from people who know more about computers is the argument for and against AI. Does the Turing test miss the point? Surely the difference between AI and living things is that the latter can experience emotions – love, empathy, joy, grief etc. – and in the case of humans this translates into ethical and aesthetic awareness. As I understand it, the intelligence of computers relies on the input of information. Can these emotional qualities of life be reduced to information? If they can’t, then there’s no contest: AI and humans are totally different beings. AI can be no more than a tool of instrumental reason, incapable of the intuition and imagination which are essential for holistic human thought and creativity.

  4. Dave King says:

    I agree this is a critical question. I’m not sure that the Turing test would miss the point, because a crucial element of the way we suss out our interlocutor is through overtones and shades of meaning that imply emotional responses. I suspect that the AI people would say that there’s no inherent difference between cognition and emotion: both are information produced by networks of neurons in the brain, using the same electro-chemical elements, i.e. synapses, neurotransmitters and spikes. In Marge Piercy’s book Body of Glass she has a robot that has been designed to be empathetic; although she comes down against this approach at the end, it’s still a Golem. Perhaps the point is that the practitioners of AI and cybernetics, following the ethos of their discipline, treat instrumental reasoning and calculation as the highest goal and are unlikely to be interested in ‘weakening’ their creations with unpredictable emotions.
