As a robotics industry professional, I love our fine metal friends. They do the jobs humans don’t want to do, can’t do, or shouldn’t do themselves. And they do those nasty and mindless jobs without complaining, taking sick days and holidays, filing for workers’ compensation, or going home after a semi-productive eight-hour day. Robots do not browse social media websites while at work or place orders on Amazon or Redtag. They are the perfect employees.
Robots are shown how to do more and more difficult jobs every day. As we give them eyes and ears, hands and feet, they can take on more challenging and complex tasks, and in many cases do them better than their human counterparts. The more brains we give robots, the more they can do. With the advancement of Artificial Intelligence (AI), they are coming close to thinking for themselves and developing intuition, imagination, judgement and risk assessment. And herein lies the challenge for our future relationship with our cyborgian counterparts. With AI come incredible new advances in productivity and in the range of work robots can accomplish. But without a built-in moral compass, these incredible new beings also present a very real threat to the future existence of humankind.
I like to say that “whatever men and women can imagine will eventually become reality”. The act of imagination most often comes with some reference to physical laws. Even in dreams of self-powered flight, we know we should not be able to do that and therefore must be dreaming. Unless we are imagining supernatural events (beyond the limits of known physics), everything we imagine we will invent at some point. Great thinkers like da Vinci, Newton, Einstein and Asimov have shown us the truth in this statement.
Advent of Killer Robots
It is with this history of imagination becoming reality that today’s great thinkers are very worried about the merger of AI, robotics, and military technology. The junction of all three leads to autonomous killer robots. It is one thing to use drones to remotely project firepower; that technology has its advocates and opponents. But a moral judgement must be made before employing a drone, a cruise missile, or a nuclear warhead. The purely logical judgement that amoral, purely rational, AI-equipped robots would make might be completely different from the decision a human, or several humans in a chain of command, would make.
Those who are old enough to remember the 1983 movie “WarGames” will remember that it clearly delineates the challenge of allowing computers (AI) to take control of a nuclear war situation. Starring a very young Matthew Broderick, the film presents the dilemma of putting AI in charge of fighting and winning a war. The computer does not care about the outcome for humans; it is programmed to win. It is a successor to the 1964 black comedy “Dr. Strangelove”, which presented the same moral dilemma with a less hopeful ending than “WarGames”. Both movies were created before robots became everyday reality. “Star Wars”, and before it TV shows like “Lost in Space”, speculated about what would happen when robots worked alongside humans with near-human imagination but computation and memory well in excess of their human counterparts.
But it was not until the original “Terminator” in 1984, featuring Arnold Schwarzenegger, that we could see with our own eyes the true terror of robots merged with AI and military capability. The sequels in the Terminator series showed ever greater deadly capability in these killer robots as our imagination of what technology could accomplish continually expanded. The liquid-metal robot of Terminator 2 came into view. It is within the laws of physics and chemistry, and so it is possible. It will happen. The moral dilemmas only compound.
Finally, reality is catching up with imagination, and our technology leaders are worried. Very worried. A new commission has been established to try to regulate the application of AI to killer robots capable of military action. Most recently, Mark Smith wrote an article on this subject for BBC News titled “Is Killer Robot Warfare Closer than We Think?” In it he writes:
Entire regiments of unmanned tanks; drones that can spot an insurgent in a crowd of civilians; and weapons controlled by computerised “brains” that learn like we do, are all among the “smart” tech being unleashed by an arms industry many believe is now entering a “third revolution in warfare”.
“In every sphere of the battlefield – in the air, on the sea, under the sea or on the land – the military around the world are now demonstrating prototype autonomous weapons,” says Toby Walsh, professor of artificial intelligence at the University of New South Wales in Sydney.
“New technologies like deep learning are helping drive this revolution. The tech space is clearly leading the charge, and the military is playing catch-up.”
World Technology Leaders React
In the letter describing this “third revolution in warfare”, Elon Musk and many others, such as physicist Stephen Hawking, warn of the dangers of AI doing just what is predicted in the supposedly fictional setting of Terminator: the creation of a Skynet that takes over global computer networks and wages a machine war on humans. Recently, an AI experiment went rogue when two computers created a unique new language that only they could understand. They began a conversation unintelligible to their human designers and so were immediately unplugged. It was the first close call of the third revolution.
In the letter, published on August 21, 2017, the following key statement was endorsed by the signatories:
“Lethal autonomous weapons threaten to become the third revolution in warfare. Once developed, they will permit armed conflict to be fought at a scale greater than ever, and at timescales faster than humans can comprehend. These can be weapons of terror, weapons that despots and terrorists use against innocent populations, and weapons hacked to behave in undesirable ways. We do not have long to act. Once this Pandora’s box is opened, it will be hard to close. We therefore implore the High Contracting Parties to find a way to protect us all from these dangers.”
Where Do We Go from Here?
While many are calling for a global ban on killer robots, the decision is not that simple. Major world powers with the financial and technological resources to invent and develop killer robots might all agree to such a ban in order to eliminate the possibility of a Terminator scenario. But once AI and robotics advance far enough, it will be within the reach of rogue states to acquire the technology and build their own killer robots, just as deadly as anything the major states could develop. Their drone armies might be smaller, but, stealthily unleashed on an enemy state, they could cause chaos and destruction.
So there is a compelling argument to continue developing killer robots to counter the threat of rogue state actors. But developers must exert great care and control. The next decade will present many ethical and moral challenges equal to any that are technical in nature.