Courtesy of Shalini Kantayya

Joy Buolamwini, an MIT computer scientist and founder of the Algorithmic Justice League, realizes that facial recognition software detects her face only when she wears a white mask.

You are stopped on the street by undercover police officers, questioned, fingerprinted, and nearly arrested when you have not committed any crime. Minutes later, they yell, “Let her go, it’s a false match!” Unbeknownst to you, a camera equipped with facial recognition software detected your face, marked you as a match with the face of an at-large criminal, and immediately alerted law enforcement.

Though it seems far-fetched, this scenario is no work of fiction for citizens of countries with pervasive facial recognition systems, as demonstrated in Brooklyn filmmaker and human rights activist Shalini Kantayya’s documentary film “Coded Bias.”

As a featured film under the “Making it Happen: Women in STEM” category at this year’s Athena Film Festival, “Coded Bias” explores the frontiers of both scientific and social issues. As society enters a new technological age, the ethical questions raised by artificial intelligence stand as the next challenge for computer scientists like Joy Buolamwini, a researcher at the MIT Media Lab whose journey the documentary follows.

While working on a class project, Buolamwini noticed that facial recognition software did not register her face. Yet when she covered her dark skin with a white mask, the software immediately detected a face. Shocked by the discovery, Buolamwini redirected her research toward the technological bias she had experienced.

The facial recognition software Buolamwini used relies on a data-driven algorithm, trained by feeding the computer large amounts of data to sort and classify. She explains that this kind of algorithm is all around us: algorithms now determine whether someone can get a mortgage, rate teachers’ classroom effectiveness, and decide who gets fired. Because these algorithms take in vast inputs and “digest” them, their creators are often unable to explain why a specific output is generated.

This means an algorithm’s results are only as strong as its original training data. Meredith Broussard, author of “Artificial Unintelligence: How Computers Misunderstand the World,” summarizes the problem: “people imbed their own biases into technology.” As she speaks, video clips of famous technology moguls like Steve Jobs, Bill Gates, and Jeff Bezos fade in and out of the screen, and one cannot help but notice their similar appearances. Because the technology sector is dominated by white, cisgender men, most of the training data provided for such algorithms resembles the algorithms’ white, male creators.

Scrolling through the software’s training data set, Buolamwini noted that the faces were overwhelmingly white: “systems weren’t familiar with faces like mine.” In a presentation at the MIT Media Lab, she showed a slide reading “data is destiny,” highlighting how skewed data leads to skewed artificial intelligence systems.
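To make that idea concrete, here is a minimal, hypothetical sketch of how a skewed training set produces skewed results. Nothing below comes from the film or from Buolamwini’s research: the two simulated “groups,” the nearest-centroid model, and every number are invented purely for illustration.

```python
# A toy illustration, NOT the software from the film: we simulate two
# groups of "face embeddings" as 2-D points, train on a set that is
# 95 percent group A, and measure how often each group is recognized.
import numpy as np

rng = np.random.default_rng(0)

def make_faces(n, center):
    """Simulate n face embeddings for a group clustered around `center`."""
    return rng.normal(loc=center, scale=1.0, size=(n, 2))

# Skewed training set: 950 samples from group A, only 50 from group B.
train_a = make_faces(950, center=[0.0, 0.0])
train_b = make_faces(50, center=[3.0, 3.0])

# "Training" here just averages everything the model saw; the result
# is dominated by the overrepresented group A.
centroid = np.vstack([train_a, train_b]).mean(axis=0)

def is_recognized(face, threshold=2.5):
    """The model 'recognizes' a face if it is close to what it learned."""
    return np.linalg.norm(face - centroid) < threshold

# Evaluate on fresh samples from each group.
test_a = make_faces(1000, center=[0.0, 0.0])
test_b = make_faces(1000, center=[3.0, 3.0])
rate_a = np.mean([is_recognized(f) for f in test_a])
rate_b = np.mean([is_recognized(f) for f in test_b])
print(f"recognition rate, group A: {rate_a:.0%}")
print(f"recognition rate, group B: {rate_b:.0%}")
```

In this toy setup the model recognizes the overrepresented group far more reliably than the underrepresented one, mirroring the disparity the film describes.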

Even beyond inaccuracies for underrepresented groups, Buolamwini and many of her colleagues recognized that this kind of facial recognition software possessed a huge capacity to harm society as a whole.

“Coded Bias” closely follows Silkie Carlo, a senior advocacy officer at Liberty and co-author of “Information Security for Journalists,” who says that these technologies are more likely to misidentify an innocent citizen than to correctly identify a criminal: “98 percent of those matches are in fact incorrectly matching an innocent person as a wanted person.”
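A statistic like that does not require a wildly inaccurate system; it follows from simple arithmetic whenever almost everyone scanned is innocent. The back-of-the-envelope sketch below uses entirely made-up numbers, not figures from the film, to show how a seemingly small false-positive rate can swamp the true matches.

```python
# Back-of-the-envelope arithmetic with made-up numbers: the watchlist
# size, crowd size, and error rates below are hypothetical. The point
# is that when almost everyone scanned is innocent, even a small
# false-positive rate means most "matches" are wrong.
watchlist_size = 100            # people actually wanted (hypothetical)
crowd_size = 1_000_000          # innocent passers-by scanned (hypothetical)
true_positive_rate = 0.90       # chance a wanted person is flagged
false_positive_rate = 0.002     # chance an innocent person is flagged

true_matches = watchlist_size * true_positive_rate    # 90 correct flags
false_matches = crowd_size * false_positive_rate      # 2,000 wrong flags
share_innocent = false_matches / (true_matches + false_matches)
print(f"share of flagged people who are innocent: {share_innocent:.0%}")
# -> roughly 96% in this toy scenario
```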

“Coded Bias” captures one such event in the United Kingdom, showing plainclothes police officers stopping, searching, and fingerprinting a Black 14-year-old boy over a supposed “facial match” with a criminal. His friends from school stand by watching, helpless against the police and their inaccurate facial recognition technology.

For this reason, Carlo became the director of Big Brother Watch, a British nonprofit organization dedicated to protecting citizens’ privacy against state surveillance. The viewer watches Carlo campaign for Big Brother Watch, approaching members of the U.K. Parliament on a London street corner, eager to explain how facial recognition infringes on individual privacy.

The documentary draws to a close with several of its featured computer science activists, including Buolamwini, uniting under the Algorithmic Justice League, which aims to establish a precedent for privacy, regulation, and oversight of algorithms. In a climactic moment, the United States Congress invites the group to advise on upcoming facial recognition legislation. Buolamwini answers questions from Rep. Alexandria Ocasio-Cortez of New York and passionately testifies for algorithm regulation in hopes of protecting marginalized groups. The camera cuts between their serious faces, underscoring the tension of the moment and the enormous stakes that Congress’s decision holds for each of them as women of color.

As a result of the Algorithmic Justice League’s work, the U.S. Congress has taken steps toward banning the use of facial recognition in law enforcement at the federal level. Several cities across the country have already moved to ban its use by local law enforcement.

“Coded Bias” concludes by listing these developments in red computerized text set against “The Blue Danube Waltz,” which plays eerily in the background, leading the viewer to reflect on the future of AI.

Staff Writer Liz Radway can be contacted at liz.radway@columbiaspectator.com. Follow Spectator on Twitter @ColumbiaSpec.
