As Facial Recognition Becomes Mainstream, These Groups Are Misclassified The Most

Think about the last time you went to the airport. After being herded around like cattle, you reach security, and instead of showing your passport or driver’s license, your face is scanned to determine your identity. Sure, it makes the line go faster, but something feels a little off. What if the facial recognition technology fails and doesn’t recognize you as yourself? Then what?

As larger companies develop their own facial recognition software, it is increasingly being used in public spaces by the federal government, local law enforcement, and even private companies. However, because the software was initially developed using pictures of white men, racial and gender biases are commonplace. In this article, I take a deep dive into facial recognition technology and its impact on black, brown, female, trans, and non-binary populations.

What Is Facial Recognition Technology and How Is It Used?

Facial recognition technology uses algorithms to compare two images of faces and determine whether they show the same person. Today, 16 states allow the FBI to use facial recognition technology to compare suspects against their databases of driver’s license and ID photos.
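
Under the hood, most modern systems do this by converting each face photo into a numerical “embedding” and measuring how close two embeddings are. The sketch below is only a simplified illustration of that idea; the vectors and the threshold are made up and do not reflect any vendor’s actual model or settings.

```python
import numpy as np

def same_person(embedding_a: np.ndarray, embedding_b: np.ndarray,
                threshold: float = 0.6) -> bool:
    """Decide whether two face embeddings likely belong to the same person.

    An embedding is the numeric vector a face recognition model produces for
    a photo; the threshold here is purely illustrative.
    """
    distance = np.linalg.norm(embedding_a - embedding_b)  # Euclidean distance
    return distance < threshold

# Toy vectors standing in for real model output:
airport_scan = np.array([0.11, 0.52, 0.33])
passport_photo = np.array([0.10, 0.50, 0.35])
print(same_person(airport_scan, passport_photo))  # True: the vectors are close
```

If the model produces embeddings that sit farther apart for certain groups of faces, comparisons like this one fail more often for those groups, which is where the biases discussed below come in.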

16 states let FBI use tech to compare faces of suspected criminals to driver’s license and ID photos

That brings the number of Americans in law enforcement facial recognition networks to over 163 million, or about half of the population, according to Perpetual Lineup, a study of law enforcement and facial recognition from Georgetown Law’s Center on Privacy & Technology.

Aside from police usage, facial recognition is also used by the Chinese government to identify Uighurs, a large Muslim population originating in western China; by the United States Border Patrol at international airports; and even by a local police department in Washington County, Oregon.

Biases of Facial Recognition

Facial recognition technology is biased against all groups other than light-skinned, cisgender males.

As large tech companies like Google, Amazon and Microsoft have developed their own facial recognition technology, it has become more affordable and thus, more widespread. With increased usage, it’s important to determine if it’s accurate for different groups of people— light-skinned people, dark-skinned people, men, women, trans people, non-binary people— the list goes on.

Can Algorithms Be Racist?

Algorithms reflect the biases of their developers.

It may not sound like math can be racist, but the truth is that algorithms are written by people, so when people have biases, the algorithms they create can have biases, too. Deb Raji, a student at the University of Toronto’s Faculty of Applied Science and Engineering, along with researchers from the Massachusetts Institute of Technology, compared facial recognition software from large companies like Amazon, IBM, and Microsoft. What she found was that women and/or dark-skinned people weren’t recognized as accurately as lighter-skinned and/or male people. An article from the University of Toronto reads,

“Although algorithms should be neutral, Raji explains that because data sets – information used to ‘train’ an AI model – are sourced from a society that still grapples with everyday biases, these biases become embedded into the algorithms.”

Even Microsoft admits that their facial recognition software does not accurately recognize certain skin types and genders. In fact, when a California law enforcement agency wanted to use Microsoft’s facial recognition software for cameras on officers’ cars and bodies, Microsoft turned them down.

Their software has been tested mostly with pictures of white and/or male people, the company’s president, Brad Smith, said, according to an article from Reuters. Previously, the company had also turned down installing facial recognition in the capital city of an unnamed country, although their technology has been used in an American prison in a limited capacity.

Don’t believe me that algorithms can be racist? Go to Google Images and type in “beautiful skin”. Here’s the first page of results:

Screenshot of Google Images results for “beautiful skin.”

As you can see, the large majority of the people pictured are female, thin, and white, clearly not an accurate representation of the world population or of everyone with “beautiful skin.” Instead, the search results reflect societal biases that favor white femininity as the pervasive beauty standard. In the same vein, facial recognition technology reflects societal biases against dark-skinned people, women, trans people, and/or non-binary people.

Facial Recognition Biases Against Dark-Skinned People and/or Women

Darker-skinned people are more likely to be misclassified by facial recognition technology.

Let’s start with Rekognition, the facial recognition technology from tech giant Amazon. According to Deb Raji’s study, Rekognition could identify the gender of light-skinned men with 100% accuracy, and it’s all downhill from there. Women in general were misclassified as men 29% of the time, and for women with darker skin, this figure climbed to 31%.

Amazon Rekognition Software

As this program was recently piloted by police in Orlando, Florida, these inaccuracies raise concerns about legal discrimination. “The fact that the technology doesn’t characterize Black faces well could lead to misidentification of suspects…Amazon is due for some public pressure, given the high-stakes scenarios in which they’re using this technology.” In fact, out of the five software systems that Raji tested, Rekognition performed the worst. All in all, it was only accurate for:

  • 81.27% of women
  • 84.89% of darker-skinned people
  • 68.63% of darker-skinned women
  • 92.88% of lighter-skinned women

As you can see, black and brown women are the group least likely to be accurately classified by Rekognition, raising discrimination concerns if used in law enforcement.
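
Audits like Raji’s come down to straightforward bookkeeping: run the classifier over a labeled benchmark, then compute accuracy separately for each demographic subgroup and compare. Here is a minimal sketch of that calculation; the records below are invented for illustration and are not the study’s actual data.

```python
from collections import defaultdict

# Each record: (subgroup, true gender label, gender the classifier predicted).
# These rows are invented purely to demonstrate the arithmetic.
results = [
    ("lighter-skinned male", "male", "male"),
    ("lighter-skinned female", "female", "female"),
    ("darker-skinned male", "male", "male"),
    ("darker-skinned female", "female", "male"),    # a misclassification
    ("darker-skinned female", "female", "female"),
]

correct = defaultdict(int)
total = defaultdict(int)
for group, truth, prediction in results:
    total[group] += 1
    correct[group] += int(truth == prediction)

for group, n in total.items():
    print(f"{group}: {correct[group] / n:.2%} accurate ({n} samples)")
```

A per-group breakdown like this is what exposes gaps that a single overall accuracy number would hide.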

Comparison of Facial Recognition Software

Raji’s study is not the only one to show that facial recognition technology is biased against women, people with darker skin, and/or women of color. According to a study co-authored by the FBI and cited in Georgetown Law’s Perpetual Lineup, accuracy rates dip between 5% and 10% for African American women across facial recognition software.

African American women had 5-10% lower accuracy rates across facial recognition software

Moreover, African Americans are disproportionately likely to be subject to police facial recognition:

  • Twice as likely in Hawaii, Michigan, and Virginia
  • Three times as likely in Arizona, Los Angeles, Pennsylvania, and San Diego
  • Five times as likely in Minnesota.

Because the technology has been shown to work less accurately on black and brown people, the likelihood that they will be misclassified by police facial recognition is high.

Finally, an independent study from the ACLU tested Rekognition’s accuracy using pictures of all members of Congress. The ACLU ran the software against a database of publicly available arrest photos, telling it to look for matches. Out of 535 members, 28 were falsely matched with arrest photos, disproportionately people of color. While only 20% of Congress are people of color, 39% of the false matches were, demonstrating a racial bias.
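
The ACLU’s test is a one-to-many search: each member’s photo is compared against every arrest photo, and anything scoring above a confidence threshold is reported as a “match.” The sketch below shows that logic in rough form, with cosine similarity between made-up embeddings standing in for the confidence score a commercial API would return; the threshold is illustrative.

```python
import numpy as np

def search_for_matches(probe, mugshot_embeddings, threshold=0.8):
    """Return the indices of every mugshot embedding whose similarity to the
    probe embedding exceeds the threshold (a stand-in for an API confidence
    score)."""
    matches = []
    for i, shot in enumerate(mugshot_embeddings):
        similarity = float(np.dot(probe, shot)
                           / (np.linalg.norm(probe) * np.linalg.norm(shot)))
        if similarity > threshold:
            matches.append(i)
    return matches

# Toy data: one probe photo checked against two mugshot embeddings.
congress_photo = np.array([0.2, 0.4, 0.4])
mugshots = [np.array([0.9, 0.1, 0.0]), np.array([0.21, 0.41, 0.38])]
print(search_for_matches(congress_photo, mugshots))  # [1]: one "match" flagged
```

Because the members of Congress in the test did not belong in the arrest database, every hit the search returned was a false match; tallying how those hits were distributed by race is what produced the 39% figure above.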

Facial Recognition Biases Against Trans and Non-Binary People

Most facial recognition systems misgender trans and non-binary people.

Women, dark-skinned people, and women of color aren’t the only ones disproportionately misclassified by facial recognition software. Trans and non-binary people are often misclassified as well, and the results are even worse for trans and non-binary people of color. In “The Misgendering Machines: Trans/HCI Implications of Automatic Gender Recognition,” University of Washington Ph.D. student Os Keyes explained how facial recognition technology treats gender as a binary rather than a spectrum, ignoring everyone who isn’t cisgender. This can be an issue as trans and non-binary people enter gendered spaces like bathrooms or locker rooms, which are often policed by staff.

For trans people of color, the biases of facial recognition technology are multi-fold. In “Demarginalizing the Intersection of Race and Sex: A Black Feminist Critique of Antidiscrimination Doctrine, Feminist Theory and Antiracist Politics,” UCLA and Columbia law professor Kimberlé Crenshaw coined the term intersectionality to describe the ways in which different forms of discrimination can overlap, a concept particularly relevant for black women, brown women, and trans people of color. Often, women and trans people are left out of conversations about discrimination, which tend to focus on cisgender black men or cisgender white women. Crenshaw writes,

“In other words, in race discrimination cases, discrimination tends to be viewed in terms of sex- or class-privileged Blacks; in sex discrimination cases, the focus is on race- and class-privileged women.”

Trans-misogyny and misogynoir (a term coined by Moya Bailey in a piece for the Crunk Feminist Collective to describe misogyny aimed at black women specifically) are incredibly relevant to facial recognition technology. The groups most likely to be targets of police facial recognition are also the most likely to be misclassified by the technology, and the intersections of their oppressed identities make their chances of discrimination even higher.

Is There Any Non-Discriminatory Facial Recognition Technology?

Of course, not all facial recognition technology is created equal— some is more biased than others. In “Actionable Auditing: Investigating the Impact of Publicly Naming Biased Performance Results of Commercial AI Products,” from the MIT Media Lab, Deb Raji worked with computer scientist Joy Buolamwini to update her previous testing of facial recognition software.

Between their first audit, published in May of 2017, and their most recent, published in August of 2018, the facial recognition software they initially tested improved by 5.72% to 8.3% overall, although the biases against darker-skinned and female subjects remained. The best-performing facial recognition software was Microsoft’s, which correctly classified:

  • 99.23% of darker males
  • 98.48% of darker females
  • 100% of lighter males
  • 99.66% of lighter females

As you can see, the largest gap between any two groups is only 1.52%, the lowest of any software tested.

Adding on to the 2017 study, which only looked at Face++, Microsoft, and IBM, Raji and Buolamwini also tested Amazon’s Rekognition software plus technology from Kairos. Of all the software, Rekognition performed the worst, classifying only 68.63% of darker females correctly. Of course, it can be argued that since Amazon and Kairos weren’t in the initial study, they weren’t aware of these shortcomings in their code. However, it can also be argued that companies shouldn’t need a public outcry to create non-biased software, especially considering its use in local police stations and federal government agencies.

The problem, Buolamwini explains, is the lack of diversity in data sets. Perpetual Lineup echoes this sentiment, recommending that companies obtain diverse data sets from the National Institute of Standards and Technology or from Janus, an IARPA program under the Office of the Director of National Intelligence. Diverse data sets are available; it’s just up to software developers to build with them from the beginning.
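
Checking a training set for that kind of diversity is simple in principle: tally how many images fall into each demographic group and compare the proportions before any training starts. A toy sketch, with invented labels rather than any real data set:

```python
from collections import Counter

# Invented demographic labels for a hypothetical face data set.
training_labels = (["lighter-skinned male"] * 700
                   + ["lighter-skinned female"] * 200
                   + ["darker-skinned male"] * 70
                   + ["darker-skinned female"] * 30)

counts = Counter(training_labels)
total = sum(counts.values())
for group, count in counts.most_common():
    print(f"{group}: {count} images ({count / total:.1%})")
# A 70% skew toward lighter-skinned men is exactly the kind of imbalance
# that produces the accuracy gaps described above.
```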

Legal and Societal Implications of Facial Recognition Technology

As facial recognition technology is increasingly used by government entities like the police and the Department of Homeland Security, the built-in biases can manifest themselves in a number of ways.

Police Bias and False Matches

When police use biased facial recognition technology, minorities are at greater risk of being misclassified.

While using Rekognition may make it easier for police officers to match suspects with identities, the biases against darker-skinned people, women, trans, and non-binary people make it more likely that individuals from these groups will be misidentified as someone who has previously been arrested. Jacob Snow, Technology and Civil Liberties attorney for the ACLU of Northern California, wrote,

“If law enforcement is using Amazon Rekognition, it’s not hard to imagine a police officer getting a ‘match’ indicating that a person has a previous concealed-weapon arrest, biasing the officer before an encounter even begins. Or an individual getting a knock on the door from law enforcement, and being questioned or having their home searched, based on a false identification.”

Raji also had concerns regarding misidentification of black subjects, especially with Rekognition, the facial recognition software with the most bias against racial minorities.

Racial and Gender Discrimination

1 million Uighurs are held in detention camps

Even if law enforcement isn’t involved, biased facial recognition technology can still impact oppressed groups. For example, in China, the government uses it to find Uighurs, the large Muslim community in the country’s western region. According to experts, it’s the first known instance of a government using artificial intelligence for racial profiling. CloudWalk, a start-up that provides some of the technology used, had a section on their website about the software’s ability to recognize “sensitive groups of people.” Paul Mozur, technology reporter for the New York Times, wrote of the CloudWalk website,

“If originally one Uighur lives in a neighborhood, and within 20 days six Uighurs appear,” it said on its website, “it immediately sends alarms” to law enforcement.

As of the article’s publication in April of 2019, there were over a million Uighurs held in detention camps.

As the most widespread facial recognition software only acknowledges gender as a binary between male and female, trans and non-binary people are at greater risk. Take public bathrooms, for example. If a trans person enters a gendered bathroom and is misclassified as the opposite gender, the facility’s security personnel may call the police, punishing the trans person for faulty software.

How Facial Recognition Needs To Change

Although facial recognition isn’t inherently bad, racial and gender biases can make it harmful to historically marginalized groups, especially when it is used by law enforcement. Really, this technology is a reflection of a larger society plagued by patriarchy, heteronormativity, and white supremacy. However, as we’ve seen with the software from Microsoft, IBM, and Face++, it can be improved by diversifying data sets.

Unfortunately, it seems to take public backlash to push companies to improve their software, leaving the work in the hands of the minorities most affected by the bias and their allies. Until the federal government regulates the use of facial recognition technology in public spaces and by government entities, companies have no incentive to create non-biased software from the get-go.

Aliza Vigderman

Aliza is a journalist living in Brooklyn, New York. Throughout her career, her work has spanned many intersections within the tech industry. At SquareFoot, a New York-based real estate technology company, she wrote about the ways in which technology has changed the real estate industry, as well as the challenges that business owners face when they want to invest in property. At Degreed.com, an education technology website, Aliza created digital content for lifelong learners, exploring the ways in which technology has democratized education. Additionally, she has written articles for The Huffington Post as well as her own content on Medium, the online publishing platform. Aliza’s love of journalism and research stems from the excellent Journalism program at Brandeis University. At Brandeis, Aliza interned as a research assistant at the Schuster Institute for Investigative Journalism, a non-profit “news room without walls”. There, Aliza was paired with an investigative journalist and used academic databases to obtain data on everything from the suicide rates in Bhutan to local Boston court cases. Her last position was as an account executive at Yelp, educating business owners on the power of technology to increase revenue. Throughout, however, her heart remained with tech journalism, and she’s thrilled to be writing for Security Baron. When she’s not keeping abreast of the latest tech trends, Aliza likes to cook, read, and write. A former high school “Class Clown,” Aliza has completed two feature-length screenplays, a pilot, and countless comedic sketches. On her days off you can find her relaxing in Prospect Park, trying the latest flavors at Ample Hills Ice Cream, and spending time with friends and family.
