Little Help From My Friends: Skype Gives Facebook’s Robot Lessons in Being Human
Tech, 22:13 05.09.2017 (updated 22:19 05.09.2017)
Though Skype is better known as a communications service connecting families and business partners, the 12-year-old system is also providing artificial intelligence training in the hopes of creating more human-like robots.
Facebook’s artificial intelligence lab in Menlo Park, California, developed a robot called “Learn2Smile” that analyzes the facial expressions of a person it’s chatting with and adjusts its own facial responses depending on the conversation.
“Even though the appearances of individuals in our dataset differ, their expressions share similarities which can be extracted from the configuration of their facial landmarks,” Will Feng, the lead researcher in the experiment, said in a statement.
Researchers had the algorithm-driven robot watch 250 Skype chat videos on topics such as personal fitness and wellbeing, study-abroad experiences and spirituality. The goal was for the bot to zero in on the expressions one person made in the two-person chat videos, and then examine how the second person's face shifted in response during the conversation.
The algorithm researchers created split the human face into 68 “facial landmarks” and noted how each part of someone’s face changed, reviewing even the most subtle differences.
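The landmark approach described above can be sketched roughly as follows. This is an illustrative reconstruction, not Facebook's actual code: the helper names, the use of NumPy, and the normalization scheme are all assumptions. Each face is represented as 68 (x, y) points, and the sketch measures how far each point moves between frames, so even subtle shifts register.

```python
import numpy as np

NUM_LANDMARKS = 68  # the standard 68-point facial landmark scheme

def normalize_landmarks(points):
    """Center landmarks on their mean and scale by their overall spread,
    so faces of different sizes and positions become comparable."""
    points = np.asarray(points, dtype=float)  # shape (68, 2)
    centered = points - points.mean(axis=0)
    scale = np.linalg.norm(centered)
    return centered / scale if scale > 0 else centered

def landmark_deltas(prev_frame, next_frame):
    """Per-landmark displacement between two frames; small variations
    (e.g. around the eyebrows and mouth) are preserved, not discarded."""
    a = normalize_landmarks(prev_frame)
    b = normalize_landmarks(next_frame)
    return np.linalg.norm(b - a, axis=1)  # shape (68,)

# Toy demo: a neutral face vs. one where the mouth-region points rise.
rng = np.random.default_rng(0)
neutral = rng.uniform(0, 100, size=(NUM_LANDMARKS, 2))
smiling = neutral.copy()
smiling[48:68, 1] += 5.0  # in the 68-point scheme, 48-67 cover the mouth

deltas = landmark_deltas(neutral, smiling)
mouth_motion = deltas[48:68].mean()
rest_motion = deltas[:48].mean()
print(mouth_motion > rest_motion)  # the mouth landmarks moved the most
```

The point of the normalization step is that two different people's faces end up in a shared coordinate frame, which is what lets expression configurations be compared across individuals, as Feng's quote suggests.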
“For example, when people cringe, the configuration of their eyebrows and mouth is most revealing about their emotional state… small variations in expression can be very informative,” Feng explained.
After the robot watched and analyzed the first set of videos, it watched footage of a person talking and was made to choose the most appropriate facial response. If a person were “laughing” or “cringing,” the robot would either open its mouth or move its head to the side.
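The response-selection step can be illustrated with a deliberately simplified sketch. The study's actual model is a learned one; here we assume, purely for illustration, a nearest-neighbour lookup over stored pairs of (speaker expression, observed listener reaction), with hypothetical two-dimensional expression vectors standing in for real landmark features.

```python
import numpy as np

# Hypothetical memory of training pairs: each speaker expression vector
# is paired with the listener reaction seen in the chat videos.
SPEAKER_EXPRESSIONS = np.array([
    [1.0, 0.0],   # stands in for "laughing"
    [0.0, 1.0],   # stands in for "cringing"
])
LISTENER_RESPONSES = ["open mouth", "tilt head to the side"]

def choose_response(observed):
    """Pick the stored reaction whose paired speaker expression is
    closest (in Euclidean distance) to what the bot currently sees."""
    observed = np.asarray(observed, dtype=float)
    dists = np.linalg.norm(SPEAKER_EXPRESSIONS - observed, axis=1)
    return LISTENER_RESPONSES[int(np.argmin(dists))]

print(choose_response([0.9, 0.1]))  # closest to "laughing" -> open mouth
```

A real system would map a whole sequence of landmark frames to a generated response rather than retrieve a canned one, but the input-to-reaction mapping is the same shape.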
As a final test of whether Learn2Smile's responses were adequate and natural, the researchers assembled a panel to watch the robot react to the videos. According to the study, the panel found the bot's reactions "natural and consistent."
The system will be presented in late September at the International Conference on Intelligent Robots and Systems in Vancouver, Canada.
According to Feng, since non-verbal cues depend on more than just a person's facial expressions, the team plans to test how word choice and a speaker's "changing mental state" can alter facial expressions.