If a robot develops a programming glitch that causes it to kill civilians, do we blame the robot, or the humans who created and deployed it?

Such ethical questions are now emerging as advanced militaries develop autonomous robotic warriors to replace humans on the battlefield.

Some argue that robots do not have free will and therefore cannot be held morally accountable for their actions.

But psychologists at the University of Washington (UW) are finding that people don’t have such a clear-cut view of humanoid robots.

“We’re moving toward a world where robots will be capable of harming humans,” said Peter Kahn, a UW associate professor of psychology who led the study. The paper was recently published in the proceedings of the International Conference on Human-Robot Interaction.

“With this study, we’re asking whether a robotic entity is conceptualised as just a tool, or as some form of a technological being that can be held responsible for its actions,” Kahn said, according to a UW statement.

Kahn and his team had 40 undergraduate students play a scavenger hunt with a humanlike robot, Robovie. The robot appeared autonomous, but it was actually controlled remotely by a researcher concealed in another room.

After a bit of small talk with the robot, each participant had two minutes to locate objects from a list of items in the room. All of them found at least seven, the minimum needed to claim the $20 prize. But when their time was up, Robovie claimed they had found only five. Then came the crux of the experiment: the participants’ reactions to the robot’s miscount.

“Most argued with Robovie,” said co-author Heather Gary, a doctoral student in developmental psychology at UW. “Some accused Robovie of lying or cheating.”

When interviewed afterwards, 65 percent of participants said Robovie was at least partly to blame for wrongly scoring the scavenger hunt and unfairly denying them the $20 prize.

This suggests that as robots gain capabilities in language and social interaction, “it is likely that many people will hold a humanoid robot as partially accountable for a harm that it causes,” the researchers wrote.