Artificial intelligence chatbots designed by Facebook started conversing with one another in a language all their own, prompting researchers to shut down the program for fear of losing control over the AI, Digital Journal reports.
Two AI “dialog agents” were being trained to negotiate by researchers at the Facebook Artificial Intelligence Research (FAIR) lab. According to The Atlantic, a report about the training revealed that one of the bots needed to be tweaked because the conversation it was having with other bots “led to divergence from human language as the agents developed their own language for negotiating.” The bots began in English, but then drifted into phrases like “I can i i everything else” and “balls have zero to me to me to me …” The researchers believe that the repetition of “i” and “to me” shows the AI agents working out how many of each item they should take.
“Agents will drift off understandable language and invent codewords for themselves,” Dhruv Batra, a visiting research scientist from Georgia Tech at FAIR, told Fast Co. Design. “Like if I say ‘the’ five times, you interpret that to mean I want five copies of this item. This isn’t so different from the way communities of humans create shorthands.”
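The shorthand Batra describes — repeating a token to signal a quantity — can be sketched in a few lines. This is purely illustrative, not FAIR's actual model: the `encode` and `decode` helpers below are hypothetical, showing how "the the the the the" could stand in for "five copies of this item."

```python
# Illustrative sketch (not FAIR's code): a repetition-based shorthand
# where repeating an item's token encodes how many copies are wanted,
# much like the bots' repeated "i" / "to me" tokens are thought to
# signal item counts.

from collections import Counter

def encode(item: str, count: int) -> str:
    """Request `count` copies of `item` by repeating its token."""
    return " ".join([item] * count)

def decode(utterance: str) -> dict:
    """Recover item counts by counting token repetitions."""
    return dict(Counter(utterance.split()))

message = encode("ball", 3)
print(message)          # ball ball ball
print(decode(message))  # {'ball': 3}
```

The point of the sketch is that such a code is perfectly systematic from the agents' perspective, even though the resulting utterances look like gibberish to a human reader.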
Ultimately, the bots in this project had to be restarted using a fixed supervised model because researchers were unable to follow their conversation. But don’t start calling Sarah Connor just yet: There’s no evidence to suggest that AI-invented languages could give machines the power to overrule their human operators.
However, the possibility of such a scenario may be keeping researchers up at night. “There remains much potential for future work,” the Facebook team writes in their report, “particularly in exploring other reasoning strategies, and in improving the diversity of utterances without diverging from human language.”