by Nicholas West
Ethics is very often the final concern of science, especially where military endeavors are concerned.
Drones and robots are finally becoming front-page news after a series of warnings from prominent scientists and researchers who are beginning to see the darker side of what is being unleashed upon humanity.
Perhaps the greatest sign yet that we are on the cusp of a frightening tipping point in robotic warfare is that the military itself is engaging scientists and ethicists to build parameters for how far robot evolution will be permitted to flourish.
Weaponized drones are proliferating across the planet at a rapid pace, which has led military researchers to conclude that all countries will have armed drones within 10 years. Coupled with this are advancements in robotics and artificial intelligence that aim, quite literally, to give life and autonomy to our robotic creations. There is a movement afoot in artificial intelligence that is even introducing survival of the fittest to robots in an effort to create a rival to nature.
Human rights organizations, non-profit groups, and even some universities like Cambridge have been vocal for some time about the threat of “terminator robots.” They have largely been shouted down by the corporate-military complex as Luddites who just can’t comprehend the wonders of science and the vast potential of cooperating with and/or merging with machines. Futurists such as Ray Kurzweil, a director of engineering at Google, only see an inevitable transcendental age of Spiritual Machines where the next stage of human evolution increasingly incorporates a mechanized component to strengthen resilience and perhaps even provide immortality.
This wave of new technology has already arrived in the medical field with DNA nanobots, the creation of synthetic organisms and other genies lying in wait to break the bottle. These developments are a fundamental transformation in our relationship to the natural world and must be addressed with the utmost application of the precautionary principle.
So far that has not happened, but prominent scientists such as Stephen Hawking and those who work in the field of artificial intelligence are beginning to speak out about another side to these advancements that could usher in “unintended consequences.”
It seems that the military is beginning to respond.
The US Department of Defense, working with top computer scientists, philosophers, and roboticists from a number of US universities, has finally begun a project that will tackle the tricky topic of moral and ethical robots. This multidisciplinary project will first try to pin down exactly what human morality is, and then try to devise computer algorithms that will imbue autonomous robots with moral competence — the ability to choose right from wrong. As we move steadily towards a military force that is populated by autonomous robots — mules, foot soldiers, drones — it is becoming increasingly important that we give these machines — these artificial intelligences — the ability to make the right decision. Yes, the US DoD is trying to get out in front of Skynet before it takes over the world. How very sensible.
This project is being carried out by researchers from Tufts, Brown, and the Rensselaer Polytechnic Institute (RPI), with funding from the Office of Naval Research (ONR). ONR, like DARPA, is a wing of the Department of Defense that mainly deals with military R&D.
Eventually, of course, this moralistic AI framework will also have to deal with tricky topics like murder. Is it OK for a robot soldier to shoot at the enemy? What if the enemy is a child? Should an autonomous UAV blow up a bunch of terrorists? What if it’s only 90% sure that they’re terrorists, with a 10% chance that they’re just innocent villagers? What would a human UAV pilot do in such a case — and will robots only have to match the moral and ethical competence of humans, or will they be held to a higher standard?
The commencement of this ONR project means that we will very soon have to decide whether it’s okay for a robot to take the life of a human…
One could argue that assigning the military to be the arbiter of what morality is might be the ultimate oxymoron. Moreover, this has all of the trappings of the drone “problem” where unchecked proliferation is now being “solved” by the very same entities who see the only solution as increased proliferation, but with a bit more discretion.
So far, the military-industrial complex has spent countless millions to create an ever-increasing catalog of humanoid robots and the artificial intelligence to equip them with decision-making capability, not to mention a fleet of drones that could begin to swarm on its own. It’s highly unlikely that this trend will be reversed.
Furthermore, we need transparency about who the ethicists are that will be giving guidance on morality. The mere fact that someone calls himself an ethicist, or holds a title at a major university, does not rule out psychopathy. For just one example, please read this article about a university ethicist who believes in life extension only as a means to offer eternal torment to those deemed by the justice system to be the very worst criminals. Imagine handing robots full power to make that decision.
Nevertheless, the discussion is at least finally out in the open. So much so that the subject of killer robots is now up for debate at the United Nations in Geneva:
Two robotics experts, Prof Ronald Arkin and Prof Noel Sharkey, will debate the efficacy and necessity of killer robots.
The meeting will be held during the UN Convention on Certain Conventional Weapons (CCW).
A report on the discussion will be presented to the CCW meeting in November.
This will be the first time that the issue of killer robots, or lethal autonomous weapons systems, will be addressed within the CCW.
The meeting of experts will be chaired by French ambassador Jean-Hugues Simon-Michel from 13 to 16 May 2014.
Despite the concern that U.N. involvement could be a convenient way to internationalize robotics efforts in the same way that drone treaties have been proposed, which only serve to put the U.S. in the lead to dictate terms to all other countries, this is a positive step toward mass awareness of the issue. Professor Noel Sharkey in particular has been a leading voice calling for more debate … and quickly.
Let’s hope it is not already too late. Now is probably the last chance to learn as much as possible about what is being established, to share it with family and friends, and to become engaged. It is not hyperbole to suggest that this is humanity’s final opportunity to remain fully human.