"Starting a military AI arms race is a bad idea," says a group of researchers and concerned citizens who are urging a ban on offensive military weapons that don't rely on human control. The group signed an open letter that's being delivered at a conference on artificial intelligence this week.

Organized by the Future of Life Institute, the open letter has an impressive list of signatories, from entrepreneur Elon Musk and theoretical physicist Stephen Hawking to AI researchers at Google, Facebook and numerous colleges.

The letter was the idea of Stuart Russell, the director of the Center for Intelligent Systems at the University of California, Berkeley. He tells NPR's All Things Considered that some of the technology mentioned in the letter is already on its way to becoming part of our world.


"There are sentry robots in Korea, in the Demilitarized Zone," Russell says. "And those sentry robots can spot and track a human being for a distance of 2 miles — and can very accurately kill that person with a high-powered rifle."

Right now those mechanized sentries have two modes, Russell says. One mode requires a human's approval before it kills, he says, "but if you flip the switch, it's in automatic mode, and will do it by itself."

The idea of robots tracking humans and killing them is "repulsive," Russell says, and it could lead to a backlash against AI research and robots more generally. Instead, he says, artificial intelligence resources should be used to make people's lives better, from driverless cars to helpful personal assistants.

"There are many things we could do other than making better ways to kill people," he says.

Russell and the more than 1,000 other scientists and researchers who signed the FLI letter are urging an international treaty to ban autonomous weapons; he says that one is currently in the works at the United Nations.

The open letter uses more forceful language to restate concerns aired in a similar letter in January, which Musk and Hawking also supported. The earlier letter spoke of "the great potential of AI," stating that "the eradication of disease and poverty are not unfathomable."

The new letter was officially announced at the International Joint Conference on Artificial Intelligence, currently being held in Buenos Aires, Argentina. It states that autonomous weapons raise a new question about warfare: "that replacing human soldiers by machines is good by reducing casualties for the owner but bad by thereby lowering the threshold for going to battle."

The letter also warns that autonomous weapons aren't like nuclear weapons, since they "require no costly or hard-to-obtain raw materials ... they will become ubiquitous and cheap for all significant military powers to mass-produce."

Russell acknowledges that in today's post-Terminator culture, the ideas of robots and warfare have often been intertwined.

"I think people understand the difference between science fiction and reality," he says. "What we want to avoid is that the reality catches up with the science fiction."

Copyright 2015 NPR. To see more, visit http://www.npr.org/.
