Facial microexpressions and emotion-recognition algorithms in the Big Data era. A story about us, our lives and Affectiva.
As the great Italian singer-songwriter Lucio Battisti sang in the ’70s, «you cannot understand them, but you can call them, if you want, emotions» (from the original lyrics of his song Emozioni: «Capire tu non puoi. Tu chiamale se vuoi, emozioni»). Is it really like that?
If we are happy, it shows. If we are angry, others can tell. If we are sad, it can be “read” on our faces. And what happens when we are surprised? Our jaws drop. In all these cases, emotions are not always expressed in words. Why not? Because facial expressions tell us much more than what we actually say.
Smileys as facial expressions (source: Pixabay).
In the science of emotions, every muscular movement of our face is called an Action Unit (AU). In 1978, Paul Ekman, one of the 100 most influential psychologists of the twentieth century, and his colleague Wallace V. Friesen developed a facial expression coding system known as the Facial Action Coding System (FACS). It built on a system previously developed by the Swedish anatomist Carl-Herman Hjortsjö in 1969. In 2000, Ekman updated FACS according to his most recent research.
According to this system, each AU reflects an emotion, and emotions cannot always be hidden or concealed. Some people are able to detect when others are lying, but that is not the topic of this article. An emotional expression of the human face that lasts only about a quarter of a second is called a microexpression.
Normally, our facial expressions last longer than that. With the FACS system, Ekman and Friesen created a taxonomy of every human facial expression. According to Ekman, facial expressions and emotions do not depend on cultural factors, as demonstrated by his research on the population of Papua New Guinea, far from the media and cultural influences of other countries; on the contrary, they are experienced all over the world. They are universal. On the basis of this universality, every human being can facially express positive and negative emotions such as joy, surprise, excitement, happiness, relief, amusement, anger, disgust, sadness, fear, embarrassment, satisfaction and shame.
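To make the idea of AU-based coding concrete, here is a minimal sketch in Python that matches the Action Units detected in a frame against simplified emotion “prototypes”. The AU combinations and the `match_emotions` helper are illustrative assumptions, rough approximations often cited in the FACS literature rather than Ekman’s full coding rules.

```python
# Simplified emotion "prototypes" as sets of FACS Action Unit numbers.
# These combinations are rough, commonly cited approximations (assumption),
# not Ekman's complete coding rules.
EMOTION_PROTOTYPES = {
    "happiness": {6, 12},        # cheek raiser + lip corner puller
    "sadness":   {1, 4, 15},     # inner brow raiser + brow lowerer + lip corner depressor
    "surprise":  {1, 2, 5, 26},  # brow raisers + upper lid raiser + jaw drop
    "fear":      {1, 2, 4, 5, 20, 26},
    "anger":     {4, 5, 7, 23},
    "disgust":   {9, 15, 16},    # nose wrinkler + lip corner depressor + lower lip depressor
}

def match_emotions(detected_aus):
    """Return the emotions whose prototype AUs are all present in a frame."""
    detected = set(detected_aus)
    return [emotion for emotion, prototype in EMOTION_PROTOTYPES.items()
            if prototype <= detected]

# Example: AU6 + AU12 observed for about a quarter of a second.
print(match_emotions([6, 12]))  # ['happiness']
```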
What will facial microexpressions be used for?
Through machine learning and the reading of facial expressions, a car will be able to understand whether the driver is distracted, tired or about to fall asleep. Likewise, a system could recognize whether we like the video we are watching, or a videogame could adapt to the player based on his or her facial reactions. Tools that read facial microexpressions therefore have many possible applications. In this regard, an interesting experiment concerns understanding the emotional states of people watching a movie.
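Before looking at that experiment, here is a purely hypothetical sketch of what such an in-car monitor could look like. The per-frame drowsiness scores are assumed to come from some facial-analysis model (eye closure, yawning, head pose), which is not shown here; the `monitor` helper and its thresholds are illustrative.

```python
from collections import deque

def monitor(drowsiness_scores, window=30, threshold=0.7):
    """Yield an alert whenever the rolling average of per-frame drowsiness
    scores (assumed to be in [0, 1]) over the last `window` frames is high."""
    recent = deque(maxlen=window)
    for i, score in enumerate(drowsiness_scores):
        recent.append(score)
        if len(recent) == window and sum(recent) / window > threshold:
            yield f"frame {i}: driver appears drowsy"

# Toy run: alerts begin once the rolling window is dominated by high scores.
scores = [0.2] * 20 + [0.9] * 40
for alert in monitor(scores):
    print(alert)
```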
According to TechCrunch, an interesting research project was presented at the latest Conference on Computer Vision and Pattern Recognition (CVPR 2017), held in Honolulu.
CVPR is one of the most important events in computer vision (machine learning, deep learning, 3D vision, image motion and tracking, biomedical image and video analysis, etc.). On that occasion, Disney Research presented a new method for tracking facial expressions in a theater in real time, in a simple and reliable way.
The researchers recorded facial data while people watched a number of Disney films used in the project. A high-resolution infrared camera captured the viewers’ facial movements. Sixteen million data points were collected, creating a map of human reactions to those films that was used to train a neural network. Based on the data collected, the research group used the system to predict, in real time, the expression a given face would show at a given moment.
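As a rough illustration of the idea, and not the Disney Research model itself, a small neural network could map a short history of tracked facial landmark points to expression probabilities. Every size, label and layer below is an illustrative assumption.

```python
import torch
import torch.nn as nn

N_LANDMARKS = 68   # 2D points tracked per face (assumption)
HISTORY = 5        # frames of context fed to the model (assumption)
EXPRESSIONS = ["neutral", "smile", "surprise", "laugh"]  # illustrative labels

# A toy feed-forward network: flattened landmark history in, expression probabilities out.
model = nn.Sequential(
    nn.Linear(N_LANDMARKS * 2 * HISTORY, 128),
    nn.ReLU(),
    nn.Linear(128, len(EXPRESSIONS)),
    nn.Softmax(dim=-1),
)

# One face, five frames of 68 (x, y) landmarks, flattened into a single vector.
landmarks = torch.randn(1, N_LANDMARKS * 2 * HISTORY)
probabilities = model(landmarks)
print(dict(zip(EXPRESSIONS, probabilities[0].tolist())))
```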
A world of emotions
The researcher Rana el Kaliouby started from a very simple assumption: we would like to live in a world of smiles, hugs and shared emotions. Instead, we live in emotion-poor environments because of the technology, chats and messaging systems that digitize our lives. On that premise, she began her research at the MIT Media Lab. Today, what started as a university project has become a company with the world’s largest collection of videos of people freely expressing their emotions.
The aim of Rana el Kaliouby’s project is to bring emotions back into digital experiences. The devices we use every day have cognitive intelligence but not emotional intelligence. What difference would it make if computers or smartphones were able to understand our emotions? What if, at the exact moment we send a happy smiley (:-D), the computer could really decipher that emotion and react to it? These are some of the questions Rana el Kaliouby would like to answer.

Rana el Kaliouby is the CEO and co-founder of the American company that has developed emotion-recognition algorithms from a collection of 60,000 videos of people smiling, laughing, showing surprise, getting scared, getting angry or becoming sad; in short, displaying the six main emotions: joy, surprise, disgust, anger, sadness and fear. Starting from this point, Rana el Kaliouby and Professor Rosalind W. Picard created a technology company for measuring emotions: Affectiva. «Affectiva understands the importance of emotions – in every aspect of our lives. It shapes our experiences, our interactions and our decisions. Our mission is to digitize emotion, so we can enrich our technology, for life, work and play.» This is Affectiva’s mission.
Video of the talk by Rana el Kaliouby, CEO of Affectiva, at the TED conference.
Today, Affectiva can interpret 21 human facial expressions based on videos collected in 75 different countries. Rana el Kaliouby tells how, starting from her own life story, the idea of creating a system that could understand human emotions was born. In the video, she demonstrates the new technology developed by Affectiva, which reads facial expressions and matches them to the corresponding emotions (the six main ones), indicating their valence (how positive or negative the person’s experience is) and engagement (how expressive the person is).
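As a purely illustrative sketch, and not Affectiva’s SDK, per-frame scores for the six main emotions could be reduced to a rough valence and engagement value along the lines described above. The emotion groupings and the `frame_metrics` helper are assumptions; surprise is left out of both groups because it can accompany either positive or negative experiences.

```python
# Assumed groupings for this sketch only.
POSITIVE = {"joy"}
NEGATIVE = {"anger", "disgust", "sadness", "fear"}

def frame_metrics(scores):
    """`scores` maps emotion name -> intensity in [0, 100] for one video frame."""
    positive = max(scores.get(e, 0) for e in POSITIVE)
    negative = max(scores.get(e, 0) for e in NEGATIVE)
    valence = positive - negative      # -100 (very negative) .. +100 (very positive)
    engagement = max(scores.values())  # how expressive the face is overall
    return valence, engagement

print(frame_metrics({"joy": 80, "sadness": 5, "anger": 0}))  # (75, 80)
```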
If eliminating technology is impossible, and even crazy, at least we can try to humanize it.