EFF: Clearview AI—Yet Another Example of Why We Need A Ban on Law Enforcement Use of Face Recognition Now
This week, additional stories came out about Clearview AI, the company we wrote about earlier that’s marketing a powerful facial recognition tool to law enforcement. These stories discuss some of the police departments around the country that have been secretly using Clearview’s technology, and they show, yet again, why we need strict federal, state, and local laws that ban, or at least press pause on, law enforcement use of face recognition.
Clearview’s service allows law enforcement officers to upload a photo of an unidentified person to its database and see publicly posted photos of that person, along with links to where those photos were posted on the internet. This could allow the police to learn that person’s identity along with significant and highly personal information. Clearview claims to have amassed a dataset of over three billion face images by scraping millions of websites, including news sites and sites like Facebook, YouTube, and Venmo. Clearview’s technology doesn’t appear to be limited to static photos; it can also scan for faces in videos on social media sites.
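For readers curious how this kind of face search works in general, the sketch below shows the basic technique: compute a numerical "embedding" of the face in a probe photo, then rank a pre-built index of scraped photos by how close their embeddings are. This is only a minimal illustration using the open-source face_recognition library and made-up file names and URLs; it is not Clearview's code and makes no claim about how its system is actually built.

```python
# Illustrative sketch of a generic face-search pipeline (NOT Clearview's system).
# Assumes the open-source `face_recognition` library and some local image files.
import face_recognition
import numpy as np

# Hypothetical index: embeddings of previously scraped photos, each paired with
# the URL where the photo was found.
indexed_urls = ["https://example.com/photo1.jpg", "https://example.com/photo2.jpg"]
indexed_encodings = [
    face_recognition.face_encodings(face_recognition.load_image_file(path))[0]
    for path in ["photo1.jpg", "photo2.jpg"]  # local copies of the scraped images
]

# Probe photo of the unidentified person.
probe_image = face_recognition.load_image_file("unknown_person.jpg")
probe_encoding = face_recognition.face_encodings(probe_image)[0]

# Rank indexed photos by embedding distance; smaller distance means a closer match.
distances = face_recognition.face_distance(indexed_encodings, probe_encoding)
for rank in np.argsort(distances):
    print(f"{indexed_urls[rank]}  (distance {distances[rank]:.3f})")
```

At scale, a real system would replace the simple list above with an approximate nearest-neighbor index over billions of embeddings, but the matching step is conceptually the same.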
Clearview has been actively marketing its face recognition technology to law enforcement, and it claims more than 1,000 agencies around the country have used its services. But up until last week, most of the general public had never even heard of the company. Even the New Jersey Attorney General was surprised to learn—after reading the New York Times article that broke the story—that officers in his own state were using the technology, and that Clearview was using his image to sell its services to other agencies.
All of this shows, yet again, why we need to press pause on law enforcement use of face recognition. Without a moratorium or a ban, law enforcement agencies will continue to exploit technologies like Clearview’s and hide their use from the public.
Law Enforcement Abuse of Face Recognition Technology Impacts Communities
Police abuse of facial recognition technology is not theoretical: it’s happening today. Law enforcement has already used “live” face recognition on public streets and at political protests. Police in the UK continue to use real-time face recognition to identify people they’ve added to questionable “watchlists,” despite high error rates, serious flaws, and significant public outcry. During the protests surrounding the death of Freddie Gray in 2015, Baltimore Police ran social media photos against a face recognition database to identify protesters and arrest them. Agencies in Florida have used face recognition thousands of times to try to identify unknown suspects without ever informing those suspects or their defense attorneys about the practice. NYPD officers appear to have been using Clearview on their personal devices without department approval and after the agency’s official face recognition unit rejected the technology. And even Clearview itself seems to have used its technology to monitor a journalist working on a story about its product.
Law enforcement agencies often argue they must have access to new technology—no matter how privacy invasive—to help them solve the most heinous of crimes. Clearview itself has said it “exists to help law enforcement agencies solve the toughest cases.” But recent reporting shows just how quickly that argument slides down its slippery slope. Officers in Clifton, New Jersey, used Clearview to identify “shoplifters, an Apple Store thief and a good Samaritan who had punched out a man threatening people with a knife.” And a lieutenant in Green Bay, Wisconsin told a colleague to “feel free to run wild with your searches,” including using the technology on family and friends.
Widespread Use of Face Recognition Will Chill Speech and Fundamentally Change Our Democracy
Face recognition and similar technologies make it possible to identify and track people in real time, including at lawful political protests and other sensitive gatherings. Widespread use of face recognition by the government—especially to identify people secretly when they walk around in public—will fundamentally change the society in which we live. It will, for example, chill and deter people from exercising their First Amendment rights to speak, assemble, and associate with others. Numerous studies have shown that when people think the government is watching them, they alter their behavior to try to avoid scrutiny. And this burden falls disproportionately on communities of color, immigrants, religious minorities, and other marginalized groups.
The right to speak anonymously and to associate with others without the government watching is fundamental to a democracy. And it’s not just EFF saying that—the founding fathers used pseudonyms in the Federalist Papers to debate what kind of government we should form in this country, and the Supreme Court has consistently recognized that anonymous speech and association are necessary for the First Amendment right to free speech to be at all meaningful.
What Can You Do?
Clearview isn’t the first company to sell a questionable facial recognition product to law enforcement, and it probably won’t be the last. Last year, promotional videos from Amazon encouraged police agencies to acquire that company’s “Rekognition” face recognition technology and use it with body cameras and smart cameras to track people throughout cities; this was the same technology the ACLU later showed to be highly inaccurate. At least two U.S. cities have already used Rekognition.
But communities are starting to push back. Several communities around the country as well as the state of California have already passed bans and moratoria on at least some of the most egregious government uses of face recognition. Even Congress has shown, through a series of hearings on face recognition, that there’s bipartisan objection to carte blanche use of face recognition by the police.
EFF has supported and continues to support these new laws as well as ongoing legislative efforts to curb the use of face recognition in Washington, Massachusetts, and New York. In the absence of an official moratorium or ban, high-level attorneys have argued that police use of the technology is perfectly legal. That’s why now is the time to reach out to your local city council, board of supervisors, and state or federal legislators and tell them we need meaningful restrictions on law enforcement use of face recognition. We need to stop the government from using this technology before it’s too late.