The Black Lives Matter movement notched a win in Silicon Valley this week, turning police use of facial recognition technology into a litmus test for big tech's support of civil rights.
Now everyone protesting for police reform needs to hold the companies to it.
Like dominoes in a lineup of corporate public relations stunts, first IBM, then Amazon and finally Microsoft (at a Washington Post Live event) each said it wouldn't sell, or would at least pause, police use of its facial recognition technology until there are federal laws on the matter.
Never mind that none of these companies were actually major players in the police facial recognition market. (Microsoft admitted it didn't sell the tech to U.S. police at all.) But civil rights leaders and privacy advocates I've spoken to this week tell me what they need is for big tech to stop arresting their legislative efforts to make the technology off-limits.
So far, Silicon Valley's record has mostly put it at odds with groups like Color of Change, the NAACP and the American Civil Liberties Union. A Microsoft employee literally wrote a new facial recognition law in Washington state opposed by many civil rights groups for not being tough enough.
What happens next will impact the lives of many Americans. Facial recognition technology uses photos to help computers identify people. You might have already encountered it to unlock your phone or board an airplane. It can also be used to identify people who don't even know they're being watched, like at a protest.
It's one of the most powerful surveillance tools ever invented, yet even a federal government study found it less accurate at identifying minorities and women. Ramping up its use could, in theory, help keep criminals from escaping arrest - but it also creates a slippery slope toward a world of supercharged policing that's likely to disproportionately impact people of color, through misidentification or simply more surveillance of minority communities.
Amazon also owns the connected doorbell maker Ring, which privacy groups have criticized for partnering with hundreds of police forces, granting them potential access to camera footage of many American streets. Ring doesn't currently offer facial recognition, but its video can be shared with police who have it. (Amazon CEO Jeff Bezos owns The Washington Post.)
There's no evidence of police using facial recognition technology to make arrests of people protesting the death of George Floyd, though it may take time for those records to emerge. Police in dozens of U.S. cities have access to the tech, and in several they have explicitly asked citizens to share images of protesters.
What changed this week is that facial recognition got linked to police racism, the issue that's gotten Americans angry enough to protest during a pandemic and made the tech politically toxic. Previously, privacy advocates (including me) had linked it to less urgent-sounding concerns like surveillance and squashed speech.
To be clear, this week's announcements alone likely won't do much to stop the use of this technology by law enforcement. The most important players in the murky market, such as NEC Corp, Idemia and Clearview AI, are lesser-known companies that have not joined in on the voluntary moratoriums.
"Clearview AI is also committed to the responsible use of its powerful technology and is used only for after-the-crime investigations to help identify criminal suspects," the company said in a statement.
NEC said its technology could combat racism, by helping to "correct inherent biases, protect privacy and civil liberties, and fairly and effectively conduct investigations for social justice." Idemia didn't immediately reply to requests for comment.
The only thing that's really going to stop police from using the tech is new laws.
That's why the announcements by IBM, Amazon and Microsoft were a success for activists - a rare retreat by some of Silicon Valley's biggest names over a key new technology. It came after years of work by researchers including Joy Buolamwini to make the case that facial recognition software is biased. A test commissioned by the ACLU of Northern California found Amazon's Rekognition software misidentified 28 lawmakers as people who had been arrested for a crime. That happens, in part, because the systems are trained on data sets that are themselves skewed.
Yet facial-recognition opponents say the problems go far beyond bad software. "Yes, accuracy disparities mean already marginalized communities face the brunt of misidentifications," said Buolamwini, founder of an organization called the Algorithmic Justice League. "But the point isn't just that facial recognition systems can misidentify faces, it's that the technology itself can be used in biased ways by governments or corporations."
For example, more cameras could be pointed at minority neighborhoods, used to target immigrants or even people who join protests about police brutality.
Buolamwini has asked tech companies to sign a pledge that would prohibit the use of their technology in contexts where lethal force may be used, including by police or the military. (So far, none of the big ones have.)
"There are too many ways in which it can be recklessly applied, and too few examples of the ways in which it serves a fundamental public good," said Brandi Collins-Dexter, Senior Campaign Director with the organization Color of Change, which opposed the California bill.
That's why she and others are calling for not just better facial recognition tech, but a stop to its use by governments.
A half-dozen cities such as San Francisco already have those sorts of laws. On Tuesday, Boston held a hearing about adopting its own ban, during which Police Commissioner William Gross said he didn't want to use the tech because it was faulty.
The challenge, say opponents of facial recognition, is that tech companies want to say they support civil rights without actually putting significant limits on potential business upside. There are potential military, international and corporate contracts at stake, largely missing from this week's promises. And weak laws could end up legitimizing police use of the tech.
Microsoft, in particular, has been trying to have it both ways. Last week CEO Satya Nadella told employees in a blog post that the company would support racial justice to honor the death of Floyd. A day earlier, the company was fighting some 65 civil rights organizations in California over a bill that would authorize police and companies to use facial recognition tech with restrictions that fall far short of a moratorium.
Microsoft didn't get its way in California: AB 2261 failed in the state's legislature last week.
Microsoft was the first big tech company to call for facial recognition laws back in 2018, and has been the most visible in state house and city hall hearings. It says it opposes use of facial recognition for mass surveillance, racial profiling or other violations of basic human rights and freedoms.
"We need to use this moment to pursue a strong national law to govern facial recognition that is grounded in the protection of human rights," said Microsoft president Brad Smith at The Post event Thursday. "If all the responsible companies in the country cede this market to those that are not prepared to take a stand, we won't necessarily serve the national interest or the lives of the black and African American people."
But the company's legislative stance so far has boiled down to: Put some rules in place, sure - but no moratorium.
The devil is in the details. In Washington state, Microsoft supported facial recognition legislation - sponsored by Microsoft employee State Sen. Joe Nguyen - that sets out some rules for how the government can use the tech and requires agencies to produce accountability reports. It addresses accuracy concerns by saying government agencies can only use the tech if it comes from a developer that makes its software available for testing.
But opponents said the Washington law comes with too few limits and enforcement measures. "Agencies may use face surveillance without any restrictions to surveil entire crowds at football stadiums, places of worship, or on public street corners, chilling people's constitutionally protected rights and civil liberties," said the ACLU of Washington.
Microsoft's announcement this week that it wouldn't sell to police until there is a federal law "should feel like winning but it feels more like a thinly veiled threat," said Liz O'Sullivan, the technology director of the Surveillance Technology Oversight Project. The Washington law, she said, would be a bad model for Congress.
"They're seeding the conversations around facial recognition regulation in a number of states by lobbying for bills that might look to a lot of people like they've got really strict protections. But then, if you actually look at them, they don't really actually regulate the technology much as it's used," said Jameson Spivack, a policy associate at Georgetown Law School's Center on Privacy & Technology. "It's their way of getting ahead of the opposition and co-opting the movement."
Amazon, which didn't reply to my requests for comment, has said less in public about its legislative goals, aside from calling for federal privacy and facial-recognition legislation. One pressing issue for any national legislation is whether it would overrule state and local laws that might be more strict.
"Legislatures and activists and civil rights groups are already leading on this issue," said Matt Cagle, Technology and Civil Liberties attorney for the ACLU of Northern California. "We just hope that companies like Microsoft see that and stand with us rather than against us."