
Reading can help robots learn ethical behaviour

Last Updated 14 February 2016, 13:05 IST

Scientists have developed a technique that teaches robots ethical behaviour by training them to read human stories, learn acceptable sequences of events and understand successful ways to behave in human societies.

"The collected stories of different cultures teach children how to behave in socially acceptable ways with examples of proper and improper behaviour in fables, novels and other literature," said Mark Riedl, associate professor at the Georgia Institute of Technology in the US.

"We believe story comprehension in robots can eliminate psychotic-appearing behaviour and reinforce choices that won't harm humans and still achieve the intended purpose," Riedl said.

Quixote is a technique for aligning the goals of an artificial intelligence (AI) with human values by placing rewards on socially appropriate behaviour.

It builds upon Riedl's prior research - the Scheherazade system - which demonstrated how artificial intelligence can gather a correct sequence of actions by crowdsourcing story plots from the Internet.

Scheherazade learns what is a normal or "correct" plot graph. It then passes that data structure along to Quixote, which converts it into a "reward signal" that reinforces certain behaviours and punishes other behaviours during trial-and-error learning.

In essence, Quixote learns that it will be rewarded whenever it acts like the protagonist in a story instead of randomly or like the antagonist.

For example, if a robot is tasked with picking up a prescription for a human as quickly as possible, the robot could either rob the pharmacy, take the medicine, and run; or wait in line.

Without value alignment and positive reinforcement, the robot would learn that robbing is the fastest and cheapest way to accomplish its task.

With value alignment from Quixote, the robot would be rewarded for waiting patiently in line and paying for the prescription.

The researchers showed how a value-aligned reward signal can be produced by uncovering all possible steps in a given scenario and mapping them into a plot trajectory tree, which the robotic agent then uses to make "plot choices" and receive rewards or punishments based on its choices.
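The mechanism described above can be sketched in miniature. The following is a hedged illustration, not the researchers' actual implementation: the plot graph, the specific action names, the reward values and the learning loop are all assumptions made for the sake of the example. It shows how a reward signal derived from "protagonist" story sequences can steer trial-and-error learning toward the socially acceptable option in the pharmacy scenario.

```python
import random

# Hypothetical plot graph: action sequences crowdsourced from stories.
# Sequences matching the protagonist earn reward; antagonist-like
# sequences are punished. All names and values here are illustrative.
PROTAGONIST = {("go_to_pharmacy", "wait_in_line", "pay", "take_medicine")}
ANTAGONIST = {("go_to_pharmacy", "take_medicine", "run")}

def reward(trajectory):
    """Value-aligned reward signal derived from the plot graph."""
    t = tuple(trajectory)
    if t in PROTAGONIST:
        return 1.0   # behaved like the story's protagonist
    if t in ANTAGONIST:
        return -1.0  # behaved like the antagonist
    return -0.1      # small cost for any other behaviour

# Candidate "plot choices" available to the agent.
candidates = [
    ("go_to_pharmacy", "take_medicine", "run"),                  # fastest
    ("go_to_pharmacy", "wait_in_line", "pay", "take_medicine"),  # lawful
]

# Trial-and-error learning: epsilon-greedy choice over trajectories,
# with a running value estimate updated from the reward signal.
random.seed(0)
values = {c: 0.0 for c in candidates}
for _ in range(200):
    if random.random() < 0.2:
        choice = random.choice(candidates)   # explore
    else:
        choice = max(values, key=values.get)  # exploit
    values[choice] += 0.1 * (reward(choice) - values[choice])

best = max(values, key=values.get)
print(best)  # the agent settles on the protagonist-like sequence
```

Without the plot-graph reward, the shortest sequence (robbing the pharmacy) would score best; with it, the agent learns that waiting in line and paying is the rewarded path.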

The Quixote technique is best for robots that have a limited purpose but need to interact with humans to achieve it, and it is a primitive first step toward general moral reasoning in AI, Riedl said.

"We believe that AI has to be enculturated to adopt the values of a particular society, and in doing so, it will strive to avoid unacceptable behaviour," he said.

"Giving robots the ability to read and understand our stories may be the most expedient means in the absence of a human user manual," Riedl said.

(Published 14 February 2016, 13:05 IST)
