And while smart machines are already very much a part of modern warfare, the army and its contractors are eager to add more. New robots — none of them particularly human-looking — are being designed to handle a broader range of tasks, from picking off snipers to serving as indefatigable night sentries.
In a mock city used by army Rangers for urban combat training, a 15-inch robot with a video camera scuttles around a bomb factory on a spying mission. Overhead an almost silent drone aircraft with a four-foot wingspan transmits images of the buildings below. Onto the scene rolls a sinister-looking vehicle on tank treads, about the size of a riding lawn mower, equipped with a machine gun and a grenade launcher.
Three backpack-clad technicians, standing out of the line of fire, operate the three robots with wireless video-game-style controllers. One swivels the video camera on the armed robot until it spots a sniper on a rooftop. The machine gun points and fires in two rapid bursts. Had the bullets been real, the target would have been destroyed.
The machines, on display at a “Robotics Rodeo” last month at the army’s training school here, not only protect soldiers but are also never distracted: an unblinking digital eye, or “persistent stare,” automatically detects even the smallest motion. Nor do they ever panic under fire.
“One of the great arguments for armed robots is they can fire second,” said Joseph W. Dyer, a former vice admiral and the chief operating officer of iRobot, which makes robots that clear explosives as well as the Roomba robot vacuum cleaner. When a robot looks around a battlefield, he said, the remote technician who is seeing through its eyes can take time to assess a scene without firing in haste at an innocent person.
Source of controversy
Yet the idea that robots on wheels or legs, with sensors and guns, might someday replace or supplement human soldiers is still a source of extreme controversy. Because robots can stage attacks with little risk to the people who operate them, opponents say that robot warriors lower the barriers to warfare, potentially making nations more trigger-happy.
“Wars will be started very easily and with minimal costs” as automation increases, predicted Wendell Wallach, a scholar at the Yale Interdisciplinary Centre for Bioethics and chairman of its technology and ethics study group.
Civilians will be at greater risk, people in Wallach’s camp argue, because of the difficulty of distinguishing between fighters and innocent bystanders. That job is maddeningly hard for human beings on the ground; it only becomes harder when a device is remotely operated.
This problem has already arisen with Predator aircraft, which find their targets with the aid of soldiers on the ground but are operated from the United States. Because civilians in Iraq and Afghanistan have died as a result of collateral damage or mistaken identities, Predators have generated international opposition and prompted accusations of war crimes.
But robot combatants are supported by a range of military strategists, officers and weapons designers — and even some human rights advocates.
“A lot of people fear artificial intelligence,” said John Arquilla, executive director of the Information Operations Centre at the Naval Postgraduate School. “I will stand my artificial intelligence against your human any day of the week and tell you that my AI will pay more attention to the rules of engagement and create fewer ethical lapses than a human force.”