Facebook had been experimenting with bots that negotiated with each other over the ownership of virtual items. The experiment, run by Facebook's artificial intelligence researchers, was an effort to understand how semantics and linguistics shape a negotiation. Most importantly, the bots were programmed to experiment with language to see how it affected their leverage in the overall discussion. A rough sketch of that kind of task appears below.
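To make the setup concrete, here is a minimal, hypothetical Python sketch of the kind of task the bots faced: two agents split a pool of items, each assigning the items private values the other cannot see. The item names are taken from the quoted dialogue and published reports, but the counts, values, and the `greedy_proposal` policy are illustrative assumptions, not Facebook's actual code or training setup.

```python
# A minimal sketch of the negotiation task described above, NOT
# Facebook's actual system: two agents must split a pool of items,
# each valuing the items differently, and can only infer the other's
# priorities from the messages they exchange.
import random

ITEM_POOL = {"books": 2, "hats": 1, "balls": 3}  # illustrative counts

def random_values(items):
    """Assign an agent a private, hidden value for each item type."""
    return {name: random.randint(0, 5) for name in items}

def score(values, allocation):
    """An agent's payoff: sum of (private value * items received)."""
    return sum(values[name] * count for name, count in allocation.items())

def greedy_proposal(values, items):
    """A trivial stand-in 'negotiation' policy: claim every item the
    agent values above zero, concede the rest. The real bots learned
    their policy (and the language expressing it) from training data."""
    mine = {n: c if values[n] > 0 else 0 for n, c in items.items()}
    theirs = {n: c - mine[n] for n, c in items.items()}
    return mine, theirs

alice_values = random_values(ITEM_POOL)
bob_values = random_values(ITEM_POOL)

# Alice proposes a split based only on her own hidden values.
alice_take, bob_take = greedy_proposal(alice_values, ITEM_POOL)
print("Alice proposes keeping:", alice_take)
print("Alice's payoff:", score(alice_values, alice_take))
print("Bob's payoff under that split:", score(bob_values, bob_take))
```

The point of the sketch is the information asymmetry: because each agent's values are hidden, the dialogue itself is the only channel for discovering a mutually acceptable split, which is why the researchers were interested in how language affected the outcome.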
A few days later, news coverage picked up on the fact that in some instances the exchange of messages had gone haywire, producing dialogue that at first glance sounded nonsensical:
- Bob: “I can can I I everything else”
- Alice: “Balls have zero to me to me to me to me to me to me to me to me to”
Why Facebook researchers shut down their own AI chatbots
According to the reports, “The AI did not start shutting down computers worldwide or something of the sort, but it stopped using English and started using a language that it created.”
As technology news site Gizmodo said: “In their attempts to learn from each other, the bots thus began chatting back and forth in derived shorthand – but while it might look creepy, that’s all it was.”
But the way the story has been reported says more about cultural fears and representations of machines than it does about the facts of this particular case.
Plus, let’s face it, robots just make for great villains on the big screen.
In the real world, though, AI is a huge area of research at the moment. The systems currently being designed and tested are increasingly complicated. Hence, some argue that incorporating AI into systems such as autonomous weapons is dangerous.
Interestingly, according to BBC News: “… Facebook’s system was being used for research, not public-facing applications, and it was shut down because it was doing something the team wasn’t interested in studying – not because they thought they had stumbled on an existential threat to mankind.” And Dhruv Batra, a visiting research scientist at Facebook AI Research, in a scathing post called the coverage ‘clickbaity and irresponsible’ for failing to draw a distinction between ‘changing parameters of an experiment’ and ‘unplugging / shutting down AI’.
Is there anything you would like to add or interpret? Do share your thoughts in the comments box.