At a time when nativism is on the rise, a new study reveals the universality of human emotional expression.
Whether at a birthday party in Brazil, a funeral in Kenya or protests in Hong Kong, humans all use variations of the same facial expressions, such as smiles, frowns, grimaces and scowls, in similar social contexts, a new UC Berkeley study shows.
The findings, published today, December 16, 2020, in the journal Nature, confirm the universality of human emotional expression across geographic and cultural boundaries at a time when nativism and populism are on the rise around the world.
“This study reveals how remarkably similar people are in different corners of the world in how we express emotion in the face of the most meaningful contexts of our lives,” said study co-lead author Dacher Keltner, a UC Berkeley psychology professor and founding director of the Greater Good Science Center.
Researchers at UC Berkeley and Google used machine-learning technology known as a “deep neural network” to analyze facial expressions in some 6 million video clips uploaded to YouTube from people in 144 countries spanning North, Central and South America, Africa, Europe, the Middle East and Asia.
“This is the first worldwide analysis of how facial expressions are used in everyday life, and it shows us that universal human emotional expressions are a lot richer and more complex than many scientists previously assumed,” said study lead author Alan Cowen, a researcher at both UC Berkeley and Google who helped develop the deep neural network algorithm and led the study.
Cowen created an online interactive map that demonstrates how the algorithm tracks variations of facial expressions that are associated with 16 emotions.
In addition to promoting cross-cultural empathy, potential applications include helping people who have trouble reading emotions, such as children and adults with autism, to recognize the faces humans commonly make to convey certain feelings.
The typical human face has 43 different muscles that can be activated around the eyes, nose, mouth, jaw, chin, and brow to make thousands of different expressions.
How they conducted the study
First, researchers used Cowen’s machine-learning algorithm to log facial expressions shown in 6 million video clips of events and interactions worldwide, such as watching fireworks, dancing joyously, or consoling a sobbing child.
They used the algorithm to track instances of 16 facial expressions one tends to associate with amusement, anger, awe, concentration, confusion, contempt, contentment, desire, disappointment, doubt, elation, interest, pain, sadness, surprise, and triumph.
Next, they correlated the facial expressions with the contexts and scenarios in which they were made across different world regions and discovered remarkable similarities in how people across geographic and cultural boundaries use facial expressions in different social contexts.
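The correlation step described above can be illustrated with a minimal sketch. This is not the study's actual pipeline; the region names, context labels, and all numbers below are invented for illustration. The idea is that each region gets a table of how often each of the 16 expressions appears in each social context, and two regions agree to the extent that their tables correlate.

```python
# A minimal sketch (NOT the study's actual pipeline) of comparing
# expression-context associations across two world regions.
# All data here is synthetic and purely illustrative.
import numpy as np

rng = np.random.default_rng(0)

N_EXPRESSIONS = 16  # amusement, anger, awe, ..., triumph
N_CONTEXTS = 3      # e.g. weddings, fireworks, sporting events (hypothetical)

# Hypothetical expression-by-context frequency tables for two regions,
# modeled as a shared base pattern plus small regional variation.
base = rng.random((N_EXPRESSIONS, N_CONTEXTS))
region_a = base + 0.1 * rng.random((N_EXPRESSIONS, N_CONTEXTS))
region_b = base + 0.1 * rng.random((N_EXPRESSIONS, N_CONTEXTS))

# Correlate the flattened tables: a high Pearson r means the two regions
# pair the same facial expressions with the same social contexts.
r = np.corrcoef(region_a.ravel(), region_b.ravel())[0, 1]
print(f"cross-region expression-context correlation: {r:.2f}")
```

Because the two synthetic tables share the same underlying base pattern, the correlation comes out high, mirroring (in toy form) the cross-cultural similarity the researchers report.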
“We found that rich nuances in facial behavior — including subtle expressions we associate with awe, pain, triumph, and 13 other feelings — are used in similar social situations around the world,” Cowen said.
For example, Cowen noted, people in the video clips around the world tended to gaze in awe during fireworks displays, show contentment at weddings, furrow their brows in concentration when performing martial arts, show doubt at protests, grimace in pain when lifting weights, and express triumph at rock concerts and competitive sporting events.
The results showed that people from different cultures share about 70% of the facial expressions used in response to different social and emotional situations.
“This supports Darwin’s theory that expressing emotion in our faces is universal among humans,” Keltner said. “The physical display of our emotions may define who we are as a species, enhancing our communication and cooperation skills and ensuring our survival.”
Reference: “Sixteen facial expressions occur in similar contexts worldwide” by Alan S. Cowen, Dacher Keltner, Florian Schroff, Brendan Jou, Hartwig Adam and Gautam Prasad, 16 December 2020, Nature.
In addition to Keltner and Cowen, co-authors of the study are Florian Schroff, Brendan Jou, Hartwig Adam, and Gautam Prasad, all at Google.