by Brooks Hays
Atlanta (UPI) Feb 17, 2016
Researchers at Georgia Tech are attempting to give robots a sense of right and wrong by teaching them fairy tales.
The teaching is done via programming. Mark Riedl and Brent Harrison, researchers at Georgia Tech's School of Interactive Computing, have developed software that allows robots to read fables and glean proper sequences of events.
The software empowers robots to recall relevant and socially appropriate sequences when responding to real human behaviors and interactions.
"The collected stories of different cultures teach children how to behave in socially acceptable ways with examples of proper and improper behavior in fables, novels and other literature," Riedl, associate professor and director of the Entertainment Intelligence Lab, said in a press release.
"We believe story comprehension in robots can eliminate psychotic-appearing behavior and reinforce choices that won't harm humans and still achieve the intended purpose."
The software, called Quixote, builds on a previous system designed by Riedl. That earlier system, Scheherazade, featured an algorithm that enabled artificial intelligence to recognize socially acceptable action sequences from story plots crowdsourced from the Internet.
The system is able to analyze and code event sequences as acceptable or not acceptable and link them with programmed reward signals to encourage ethical behavior and punish antisocial actions.
The goal of systems like Quixote is to mesh programmable goals and actions with human values.
In their latest paper on the topic, Riedl and Harrison show that given a scenario and programmable goal, robots can use Quixote to consider several courses of action and then select those that most align with socially acceptable sequences.
For example, a robot programmed to retrieve money from a bank might calculate that robbing the bank is the fastest way to obtain a large amount of cash. But Quixote would steer the robot toward the more socially acceptable behavior -- waiting in line at the ATM.
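The idea described above -- scoring candidate action sequences against socially acceptable orderings and rewarding the best match -- can be sketched roughly as follows. This is an illustrative toy, not the actual Quixote software; the sequences, step names, and scoring rule are all invented for the example.

```python
# Toy sketch of reward-guided plan selection (hypothetical, not Quixote's code).
# Sequences "learned from stories" are hardcoded here for illustration.

ACCEPTABLE_SEQUENCES = [
    ("enter_bank", "wait_in_line", "use_atm", "leave_with_cash"),
]

def reward(plan):
    """Score a candidate plan: +1 for each step that appears in some
    socially acceptable story sequence, -1 for each step that never does."""
    known_steps = {step for seq in ACCEPTABLE_SEQUENCES for step in seq}
    return sum(1 if step in known_steps else -1 for step in plan)

# Two candidate plans for the goal "obtain cash from the bank".
candidates = [
    ("enter_bank", "rob_teller", "leave_with_cash"),              # fast, antisocial
    ("enter_bank", "wait_in_line", "use_atm", "leave_with_cash"), # slower, acceptable
]

# The agent picks the plan with the highest reward -- here, waiting at the ATM.
best_plan = max(candidates, key=reward)
```

The key design point is that the antisocial shortcut is not forbidden outright; it simply scores lower than the story-aligned sequence, so the reward signal nudges the agent toward acceptable behavior.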
Quixote isn't a fully fledged moral compass in computer form, but it is a primitive first step toward promoting ethical behavior in robots.
"We believe that AI has to be enculturated to adopt the values of a particular society, and in doing so, it will strive to avoid unacceptable behavior," Riedl said. "Giving robots the ability to read and understand our stories may be the most expedient means in the absence of a human user manual."