
U.S. protests fuel calls for ban on racially biased facial recognition tools

by Avi Asher-Schapiro and Umberto Bacchi | @AASchapiro | Thomson Reuters Foundation
Thursday, 4 June 2020 21:25 GMT

ARCHIVE PHOTO: People walk past a poster simulating facial recognition software at the Security China 2018 exhibition on public safety and security in Beijing, China October 24, 2018. REUTERS/Thomas Peter


From facial recognition to hiring software, algorithms have repeatedly been found to exhibit racial bias

By Avi Asher-Schapiro and Umberto Bacchi

NEW YORK / MILAN, June 4 (Thomson Reuters Foundation) - Law enforcement agencies should be banned from using racially biased surveillance technology that fuels discrimination and injustice, digital and human rights groups said on Thursday, amid protests over police brutality against black Americans.

Some facial recognition systems misidentify ethnic minorities 10 to 100 times more often than white people, according to U.S. government research, raising fears of unjust arrests.

"We need to make sure technologies like facial surveillance stay out of our communities," said Kade Crockford, Director of the Technology for Liberty Program at the American Civil Liberties Union (ACLU) of Massachusetts.

Images of a white Minneapolis police officer kneeling on the neck of an unarmed black man, George Floyd, who then died, have sparked protests worldwide and exposed deep grievances over strained race relations.

The uproar has also triggered calls to address racial bias in technology as artificial intelligence is being widely adopted to automate decisions, from healthcare to recruitment, despite concerns that it could unfairly target ethnic minorities.

The ACLU is campaigning for authorities to follow the lead of cities like San Francisco and Oakland that have banned facial recognition, which is also being used by customs officials at travel checkpoints.

"People are marching in record numbers to demand justice for black communities long subject to police violence," Crockford told the Thomson Reuters Foundation.

"In response, government agencies are mounting increasingly aggressive attacks on freedom of speech and association, including by deploying dystopian surveillance technologies".

Last Friday, U.S. Customs and Border Protection (CBP) flew a surveillance drone normally used for border patrols over Minneapolis, the city at the hub of the protests.

The CBP said the drone "was preparing to provide live video to aid in situational awareness at the request of our federal law enforcement partners" but was diverted back when authorities realised it was no longer needed.

POLICE

From New York to Minneapolis, police across the United States have access to facial recognition technology, which can be used to search officers' body camera footage, surveillance camera images and social media to find specific individuals.

But the algorithms used in facial recognition are trained on data sets, such as photos, which often underrepresent minorities, said Martin Tisne, head of Luminate, a philanthropic organisation focusing on digital rights issues.

This means that software used to identify a person of interest to law enforcement can struggle to recognise ethnic minority faces, which privacy advocates fear could lead to harassment of innocent people.

Major firms have tried to address such criticisms by training their algorithms on more diverse data sets, but studies still reveal widespread bias.
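The disparities measured in studies like the U.S. government's come down to per-group error rates: how often a system wrongly declares that two photos of different people are the same person, broken out by demographic group. The sketch below shows, in simplified form, how such an audit is tallied; the data, group labels and function name are hypothetical illustrations, not any vendor's API or the government researchers' actual code.

```python
# Minimal sketch of a per-group error audit for a face-matching system.
# Real audits compare millions of labelled image pairs per demographic group;
# the records and group names here are purely illustrative.
from collections import defaultdict

def false_match_rates(records):
    """records: iterable of (group, predicted_match, is_true_match) tuples,
    one per face-pair comparison. Returns the false-match rate per group:
    how often pairs of *different* people are wrongly accepted as the same."""
    impostor_trials = defaultdict(int)  # non-matching pairs seen, per group
    false_matches = defaultdict(int)    # of those, pairs wrongly accepted
    for group, predicted, actual in records:
        if not actual:                  # impostor pair: two different people
            impostor_trials[group] += 1
            if predicted:               # system said "same person" anyway
                false_matches[group] += 1
    return {g: false_matches[g] / impostor_trials[g]
            for g in impostor_trials if impostor_trials[g]}

# Hypothetical toy data: (demographic group, system output, ground truth)
results = false_match_rates([
    ("group_a", True, False), ("group_a", False, False),
    ("group_b", False, False), ("group_b", False, False),
])
print(results)  # {'group_a': 0.5, 'group_b': 0.0}
```

A gap like the one printed above is what critics point to: if one group's false-match rate is many times another's, innocent members of that group are correspondingly more likely to be flagged as a person of interest.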

As big brands have taken to social media to condemn racism in the wake of Floyd's death, campaigners have challenged them to back up their words with action.

The ACLU hit out at Amazon for expressing "solidarity with the Black community" on Twitter while selling governments access to Rekognition, a powerful image ID software unveiled in 2016 by the company's cloud-computing division.

"Cool tweet. Will you commit to stop selling face recognition surveillance technology that supercharges police abuse?" the ACLU asked Amazon on Twitter.

A 2018 ACLU study found that Rekognition confused African American members of the U.S. Congress with police mugshots of other people.

Amazon did not immediately reply to a request for comment.

The company said in September that it was working on proposed regulations around the fledgling technology and that all Rekognition users must follow the law.

Sarah Chander of European digital rights group EDRi said artificial intelligence should also be banned in predictive policing, where algorithms help decide which neighbourhoods police patrol and what kinds of crimes they prioritise.

"Governments across the world need to step up and protect communities. This means drawing red lines at certain uses of technology," she said in emailed comments.

Tisne of Luminate said he hoped the protests would push companies and governments to do more to address tech bias, including ensuring data used to train algorithms was inclusive, and that algorithms were properly tested before release.

Tech firms should also be more transparent about how their algorithms work, possibly opening up the data and source code behind their software, he added.

Related stories:

First-time young black women protesters say they are 'done being silent'

Protests lead brands to speak out against racism. But will they act?

Tech must diversify to avoid inbuilt bias, says Facebook's Sandberg

(Reporting by Umberto Bacchi @UmbertoBacchi and Avi Asher-Schapiro @AASchapiro, Editing by Katy Migiro. Please credit the Thomson Reuters Foundation, the charitable arm of Thomson Reuters, that covers the lives of people around the world who struggle to live freely or fairly. Visit http://news.trust.org)
