You may have read a story about Facebook shutting down an artificial intelligence because it got too intelligent. I first came across this on my Facebook newsfeed last week, and was suspicious of it: surely if the news were as big as it seemed, I’d have already heard about it. So I ignored it, dismissing it as clickbait, until I saw the same story presented in a more measured manner on a respectable website. So I read the article, which told a more plausible story. Apparently, Facebook had developed chatbots to negotiate over virtual items. They’d been programmed with the ability to experiment with language, to see if this could help them to dominate the negotiations.
Seemingly, one morning the researchers checked on what the chatbots were up to, only to find them chatting in apparently incomprehensible gibberish such as:
Bob: I can can I I everything else
Alice: Balls have zero to me to me to me to me to me to me to me to me to
However, the researchers soon realised that there was a method to this apparent madness. The AIs had simply developed linguistic shortcuts in order to make their discussions more efficient. They were doing just what we do in everyday life, and particularly in trade: London traders were famed for their shorthand, for example. So if the chatbots were just doing something fairly logical that we also do, why were people so intrigued by this story?
Mainly, it’s due to sensationalist journalism. Most articles reported that the intelligences involved were robots, despite the fact that they were virtual intelligences, basically computer programmes. You could say that the “journalists” involved were confused by the term chatbot, but I think it’s more likely that they simply wanted to scare people. Two computer programmes wittering away at each other isn’t very worrying. It seems too abstract. But actual physical robots possibly plotting together? That’s scary! We’ve long feared the idea of artificial life of our own creation turning against us. You can see this in medieval tales of homunculi, through Frankenstein, and on to more modern science fiction such as The Terminator and Battlestar Galactica.
I also don’t think we can ignore the role of language here. It’s one thing to imagine artificial intelligences plotting together against us. But doing so in a language we can’t understand? Again, that’s much scarier. It would mean them taking something we’ve developed, speech, and using it against us. And with the image of them being humanoid robots (thanks to the reports), that becomes disturbingly uncanny, as they simultaneously resemble us but don’t.
And them speaking in a language we don’t understand also makes us feel vulnerable: they could be talking about us and we’d never know! There’s something inherently uncomfortable about people speaking in another language in our company, isn’t there? Even if we logically know there’s nothing wrong with it, we often can’t help but feel that we’re being excluded, or that they might be saying something bad about us. Those bloody robots could be plotting to exterminate us, or laughing at our haircuts, and we’d never know!
This feeling has often been magnified in a broader, political sense. In colonial situations, the colonial powers have often attempted to suppress indigenous languages. This was partly to foster cultural as well as political dominance, but also to prevent such languages being used as a revolutionary tool. If you speak a language that your colonisers can’t, it’s easier to plot against them. This is partly why in the 19th century the British administration sought to suppress use of the Irish language, and why to this day English is by far the most commonly spoken language in the country, despite independence.
Speaking a language can be a revolutionary act. So while we don’t need to worry about Facebook’s AIs, maybe listen closely to your Roomba, and if you seem to hear a disgruntled sigh as it passes your WiFi router, don’t take any chances and get out of there: it won’t be long till the missiles start flying.