Telling Stories to Data: How to Teach Computers to Be More Ethical
The world's economic leaders at Davos were worried about it. Stephen Hawking is still worried about it, and everyone seems to agree that he's pretty smart. Elon Musk has already invested a billion dollars in restraining runaway AI, which he called "more dangerous than nukes." What they are all worried about is when AI will become self-aware and whether it will see humanity as an outgrown skin to be discarded. Siri may be mostly laughable now, but how likely is it that GLaDOS will be one of her children? Bill Gates has also planted his stake on these grounds, stating, "I am in the camp that is concerned about super intelligence.... I agree with Elon Musk and some others on this and don't understand why some people are not concerned."

Shouldn't you be concerned? The simple answer is that it's either way too early or way too late. This month, researchers in the robotics lab at Georgia Tech announced that they are in the "too early" camp. They are betting the future of humanity on the hope that there's still time to teach ethical decision-making to AI, using a technology that's roughly 80,000 years old.

You're never going to want paperclips again

Although some at the World Economic Forum explicitly called for a ban on autonomous weaponry, the wider issue is that catastrophically destructive AI could easily emerge from the most innocent of subroutines. This is best known as the "Paperclip Problem." Oxford philosophy professor Nick Bostrom introduced this thought experiment as a way of starting a dialog about the unfathomable ethics of potential AI. What if, Bostrom wondered, a superintelligent program were tasked with making paperclips? It might devote all of its resources to maximizing efficiency and productivity "with the consequence that it starts transforming first all of earth and then increasing portions of space into paperclip manufacturing facilities." We would be raw materials, and not very useful ones. Of course that's absurd, but the point is that we have no idea what a superintelligence will prioritize, and it probably won't be the sorts of things we would.

That's where Georgia Tech comes in. With funding from DARPA, associate professor Mark Riedl and his team at the School of Interactive Computing are teaching morality to robots using stories. Riedl explained, "The collected stories of different cultures teach children how to behave in socially acceptable ways with examples of proper and improper behavior in fables, novels and other literature. We believe story comprehension in robots can eliminate psychotic-appearing behavior and reinforce choices that won't harm humans and still achieve the intended purpose." Fittingly, his program is called "Quixote," after the famous dreamer of impossible dreams. It is an updated version of Riedl's earlier program named "Scheherazade," after the woman who put off her own beheading one night at a time by telling cliff-hangers. To characters like Captain Kirk, who made a career out of destroying dystopian AI programs with disturbing regularity, this might seem entirely too primitive. However, there is some precedent.

The 80,000-year-old solution

Storytelling most likely arose alongside spoken language, because telling stories about the world is how humans attempt to understand it. Jonathan Gottschall's book The Storytelling Animal suggested that stories can make humans more ethical, because "the constant firing of our neurons in response to fictional stimuli strengthens and refines the neural pathways that lead to skillful navigation of life's problems."
One of the skills stories are especially good at developing, Gottschall reported, is empathy. Can it work as well on transistors? Perhaps we'll all find out together when Flagship2020 comes online and into awareness just four years from now.