Date: 01/08/17

Facebook shuts down controversial chatbot experiment after AIs develop their own language to talk to each other

Facebook has shut down a controversial chatbot experiment that saw two AIs develop their own language to communicate. 
 
The social media firm was experimenting with teaching two chatbots, Alice and Bob, how to negotiate with one another.
 
However, researchers at Facebook AI Research (FAIR) found that the bots had deviated from the script and were inventing new phrases without any human input.
 
The bots were attempting to imitate human speech when they developed their own machine language spontaneously - at which point Facebook decided to shut them down.  
 
'Our interest was having bots who could talk to people,' Mike Lewis of Facebook's FAIR programme told Fast Co Design. 
 
FAIR researchers were teaching the chatbots - artificial intelligence programs that carry out automated one-to-one tasks - to make deals with one another.
 
As part of the learning process they set up the two bots, known as dialog agents, to teach each other about human speech using machine learning algorithms.
 
The bots were originally left alone to develop their conversational skills.
 
When the experimenters returned, they found that the AI software had begun to deviate from normal speech.
 
Instead, they were using a brand new language created without any input from their human supervisors. 
 
The new language was more efficient for communication between the bots, but was not helpful in achieving the task they had been set.
 
'Agents will drift off understandable language and invent codewords for themselves,' Dhruv Batra, a visiting research scientist from Georgia Tech at Facebook AI Research (FAIR), told Fast Co Design.
 
'Like if I say "the" five times, you interpret that to mean I want five copies of this item. This isn't so different from the way communities of humans create shorthand.'
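As a toy illustration of the kind of codeword scheme Batra describes - a hypothetical sketch, not Facebook's actual code - a bot could repeat a filler token once per unit of an item it wants, and its partner could recover the quantity by counting repetitions:

```python
def encode(item, count):
    """Toy 'bot shorthand': repeat a filler token once per unit wanted."""
    return " ".join(["the"] * count) + f" {item}"

def decode(message):
    """Recover the item and quantity by counting the repeated filler token."""
    tokens = message.split()
    count = sum(1 for t in tokens if t == "the")
    return tokens[-1], count

print(encode("ball", 5))           # 'the the the the the ball'
print(decode("the the the ball"))  # ('ball', 3)
```

The scheme is unreadable to a human but perfectly unambiguous between the two programs, which is the sense in which the bots' invented language was efficient for them while useless for the task of talking to people.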
 
The programmers had to alter the way the machines learned language to complete their negotiation training.
 
Writing on the FAIR blog, a spokesman said: 'During reinforcement learning, the agent attempts to improve its parameters from conversations with another agent.
 
'While the other agent could be a human, FAIR used a fixed supervised model that was trained to imitate humans.

'The second model is fixed, because the researchers found that updating the parameters of both agents led to divergence from human language as the agents developed their own language for negotiating.'
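A minimal sketch of that training arrangement, assuming a simple policy-gradient (REINFORCE) setup and hypothetical `agent`, `partner` and `env` interfaces (none of these names come from FAIR's code): only the learning agent's parameters update, while its negotiating partner is a frozen copy of a supervised model trained to imitate humans, which anchors the dialogue to human-like language.

```python
import copy

import torch

def train_with_fixed_partner(agent, human_imitation_model, env, optimizer,
                             episodes=1000):
    """RL loop in which only one agent's parameters are updated.

    Hypothetical interfaces: `agent.act(state)` returns an utterance and its
    log-probability; `env.step(utterance)` returns (state, reward, done).
    """
    # Freeze a copy of the supervised model as the conversation partner.
    partner = copy.deepcopy(human_imitation_model)
    for p in partner.parameters():
        p.requires_grad = False  # fixed: prevents both agents co-drifting
                                 # into a private, non-human language

    for _ in range(episodes):
        log_probs, rewards = [], []
        state = env.reset()
        done = False
        while not done:
            # The learning agent speaks; keep the log-prob for the update.
            utterance, log_prob = agent.act(state)
            state, reward, done = env.step(utterance)
            log_probs.append(log_prob)
            rewards.append(reward)
            if not done:
                # The frozen partner replies with human-like language.
                reply, _ = partner.act(state)
                state, _, done = env.step(reply)

        # REINFORCE: score every utterance by the final negotiation outcome.
        ret = sum(rewards)
        loss = -ret * torch.stack(log_probs).sum()
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```

Updating both agents' parameters in this loop is exactly the variation the researchers found led to divergence from human language.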



