How Facebook's AI Bots Learned Their Own Language and How to Lie

08_18_Bots_01
AI researchers say learning to overcome the CAPTCHA system is a "hallmark of human intelligence." Isaac Lawrence/AFP/Getty

Facebook has been working on artificial intelligence that claims to be great at negotiating, makes up its own language and learns to lie.

OMG! Facebook must be building an AI Trump! "Art of the deal. Biggest crowd ever. Cofveve. Beep-beep!"

This AI experiment comes out of a lab called Facebook Artificial Intelligence Research. It recently announced breakthrough chatbot software that can ruthlessly negotiate with other software or directly with humans. Research like that usually gets about as much media attention as a high school math bee, but the FAIR project points toward a bunch of intriguing near-term possibilities for AI while raising some creepy concerns—like whether it will be kosher for a bot to pretend it is human once bots get so good you can't tell whether they're code or carbon.

AI researchers around the world have been working on many of the complex aspects of negotiation because it is so important to technology's future. One of the long-held dreams for AI, for example, is that we'll all have personal bot-agents we can send out into the internet to do stuff for us, like make travel reservations or find a good plumber. Nobody wants a passive agent that pays retail. You want a deal. Which means you want a badass bot.

There are so many people working on negotiating AI bots that they even have their own Olympics—the Eighth International Automated Negotiating Agents Competition gets underway in mid-August in Melbourne, Australia. One of the goals is "to encourage design of practical negotiation agents that can proficiently negotiate against unknown opponents in a variety of circumstances." One of the "leagues" in the competition is a Diplomacy Strategy Game. AI programmers are anticipating the day when our bot wrangles with Kim Jong Un's bot over the fate of the planet while Secretary of State Rex Tillerson is out cruising D.C. on his Harley.

As the Facebook researchers point out, today's bots can manage short exchanges with humans and simple tasks like booking a restaurant, but they aren't able to have a nuanced give-and-take that arrives at an agreed-upon outcome. To do that, AI bots have to do what we do: make a mental model of the opponent, anticipate reactions, read between the lines, communicate in fluent human language and even throw in a few bluffs. Facebook's AI had to figure out how to do those things on its own: The researchers wrote machine-learning software, then let it practice on both humans and other bots, constantly improving its methods.

This is where things got a little weird. First of all, most of the humans in the practice sessions didn't know they were chatting with bots. So the day of identity confusion between bots and people is already here. And then the bots started getting better deals as often as the human negotiators. To do that, the bots learned to lie. "This behavior was not programmed by the researchers," Facebook wrote in a blog post, "but was discovered by the bot as a method for trying to achieve its goals." Such a trait could get ugly, unless future bots are programmed with a moral compass.

The bots ran afoul of their Facebook overlords when they started to make up their own language to do things faster, not unlike the way football players have shorthand names for certain plays instead of taking the time in the huddle to describe where everyone should run. It's not unusual for bots to make up a lingo that humans can't comprehend, though it does stir worries that these things might gossip about us behind our back. Facebook altered the code to make the bots stick to plain English. "Our interest was having bots who could talk to people," one of the researchers explained.

facebook
The bots ran afoul of their Facebook overlords when they started to make up their own language to do things faster. Dado Ruvic/Reuters

Outside of Facebook, other researchers have been working to help bots comprehend human emotions, another important factor in negotiations. If you're trying to sell a house, you want to model whether the prospective buyer has become emotionally attached to the place so you can crank up the price. Rosalind Picard of the Massachusetts Institute of Technology has been one of the leaders in this kind of research, which she calls affective computing. She even started a company, Affectiva, that's training AI software in emotions by tracking people's facial expressions and physiological responses. It has been used to help advertisers know how people are reacting to their commercials. One Russian company, Tselina Data Lab, has been working on emotion-reading software that can detect when humans are lying, potentially giving bot negotiators an even bigger advantage. Imagine a bot that knows when you're lying, but you'll never know when it is lying.

While many applications of negotiating bots—like those personal-assistant AI agents—sound helpful, some seem like nightmares. For instance, a handful of companies are working on debt-collection bots. Describing his company's product, Ohad Samet, CEO of debt-collection AI maker TrueAccord, told American Banker , "People in debt are scared, they're angry, but sometimes they need to be told, 'Look, this is the debt and this is the situation, we need to solve this.' Sometimes being too empathetic is not in the consumer's best interest." It sounds like his bots are going to "negotiate" by saying, "Pay up, plus 25 percent compounded daily, or we make you part of a concrete bridge strut."

Put all of these negotiation-bot attributes together and you get a potential monster: a bot that can cut deals with no empathy for people, says whatever it takes to get what it wants, hacks language so no one is sure what it's communicating and can't be distinguished from a human being. If we're not careful, a bot like that could rule the world.