Autonomous weapons could sweep the world in just a few years, and thousands of experts are calling on the public to help keep deadly robots from getting out of control.
Artificial intelligence (AI) technology could be deployed in autonomous weapons within just a few years, and Elon Musk, Stephen Hawking, and thousands of AI researchers are alarmed by the prospect. Together, they have signed an open letter urging the public to support a ban on AI weapons.
The open letter, announced at the International Joint Conference on Artificial Intelligence (IJCAI) on July 28, 2015, warns: “Autonomous weapons select and engage targets without human intervention. They might include, for example, armed quadcopters that can search for and eliminate people meeting certain pre-defined criteria, but do not include cruise missiles or remotely piloted drones for which humans make all targeting decisions.”
|Stephen Hawking (right) has co-signed the open letter along with Elon Musk and thousands of AI researchers. Photo: Pete Souza - White House Photostream|
The primary concern raised in the letter is that autonomous weapons would make their own targeting decisions rather than being controlled by humans. If AI weapons are set loose to eliminate people meeting certain “pre-defined criteria,” the probability of mistakenly killing someone who merely appears to meet those criteria rises sharply.
Arguments in favor of AI weapons often maintain that they could replace human soldiers on the battlefield and reduce casualties. But the open letter describes autonomous weapons as the third revolution in warfare, succeeding gunpowder and nuclear arms. The stakes are strikingly high, especially if AI weapons fall into the hands of the wrong people.
Unlike nuclear weapons, autonomous weapons require no costly or hard-to-obtain raw materials, making it easy for essentially any military power to mass-produce them. It wouldn’t be long before they flooded the black markets and found their way into the hands of terrorists or warlords. If the progression toward autonomous weapons goes unchecked, it could ignite an arms race similar to the one for the atomic bomb.
According to the AI researchers, we have the power to avoid a future AI arms race, but we have to spread enough awareness of the problem to urge lawmakers to implement preventative policies now.
There are countless innovative and beneficial ways that AI can serve society in the future, and researchers in the field don’t want their work tainted by the mass production of weapons of war. Just as many chemists and biologists supported the treaties banning chemical and biological weapons, AI researchers hope their work will help move society toward a better world.
If autonomous weapons spiral out of control and begin taking innocent lives, they will provoke a strong public backlash against the development of the AI field, significantly narrowing AI’s potential to benefit humanity.
To be clear, there is no imminent crisis of autonomous robots flooding the world and wiping out humanity, at least not yet. But the scientists who understand the potential of AI weapons have no authority over lawmakers, so all they can do is inform the public and hope to spread widespread knowledge, outrage, and even fear of what could come. Most of us are so occupied with getting through the week that we don’t give enough thought to the issues that could gravely affect humanity in the near future.
To get involved, sign the petition here.