Face Facts: NY must ban facial recognition
By Tristan Burchett
Facial recognition surveillance technology is creeping into every corner of our lives, from shopping to traveling to simply walking down the street past ubiquitous security cameras. It’s the darling of surveillance companies, law enforcement, and airports. But unlike the accurate infrared technology used to unlock your phone, facial recognition surveillance tech (FRT) relies on low-quality images to deliver troubling misidentifications, deep bias, and real-world harm.
Few of us are untouched by facial recognition surveillance technology. By 2016, more than half of American adults were already in a law enforcement facial recognition system. And the technology isn't limited to the precinct. It's used by stores, venues, office buildings, and landlords. Now companies like Clearview AI have expanded that reach by sweeping billions (yes, billions) of our images from the internet into massive FRT systems, ready for governments and private businesses to use, all without your consent.
Facial recognition firms sell a myth of tech on the verge of perfection -- in the lab. They boast about supposedly low rates of "failure," cases where no match is returned, based on artificial tests with perfect images. But this ignores two critical issues: incorrect matches, which harm real people, and abysmal real-world performance.
Let's consider misidentifications. Facial recognition proponents like to emphasize how often FRT points a finger at someone, but they ignore how often that finger points at an innocent person. To lower the failure rate where FRT draws a blank, the algorithms generally have to become more forgiving of mismatches, which drives up the rate of false identifications. Take Clearview AI, a major "top performing" FRT vendor used by thousands of law enforcement agencies. Clearview can reduce the share of people it fails to match to 5%, but at the cost of falsely identifying a whopping 20% of people when the correct match isn't in the database. And that's in artificial tests with airport-kiosk-quality images. In the real world, it means one in five people could be wrongly flagged, investigated, or even detained. Consider that next time you wait in an airport line.
Worse still, law enforcement and others often deploy this shaky technology without limiting false identification rates at all. Tuned that way, FRT algorithms will spit out innocent people just to identify someone -- anyone -- as guilty. If the database lacks the true match, every match it returns is an innocent person thrown into the spotlight of a damaging investigation. That shatters the myth that everyday people don't need to worry about FRT.
But here's the unspoken issue: FRT's terrible real-world accuracy. The failure rates that FRT vendors tout come from ideal images under controlled conditions. The real world doesn't work like that; it serves up weird angles, bad lighting, and unexpected obstructions that degrade accuracy. In artificial tests, vaunted “accuracy” plummets 50- to 100-fold when high-quality side-view images are simply used instead of front-facing photos. Major "high-performing" algorithms from Paravision, Clearview AI, Thales (Cogent), and RealNetworks (SAFR) fail to match people in a database 14-50% of the time even with these ideal side-view photos. In the real world, the Paravision-based ID.me system, used by 27 states’ unemployment systems, fails 10-30% of the time, locking those in need out of essential services.
And when it comes to false identifications in the real world, things are even more perilous. In 2020, Detroit's police chief admitted that the city's facial recognition technology misidentifies people a staggering 96% of the time. The technology's so-called accuracy is a statistical house of cards.
What’s worse, these failures aren’t spread evenly. The "top performing" systems mentioned above misidentify women two to three times more often than men, and they wrongly identify Asian people three to nine times more often than white people. Black people fare even worse, misidentified seven to 18 times more often. For Native Americans, the disparities climb to two to four times Black people's already staggering levels. And the disparities compound: older Black women are falsely identified 65 to 500 times more often than young white men. In the real world, police use of FRT across 1,136 cities has led to greater racial disparities in arrests. These aren't just statistics. They are people's lives disrupted and rights trampled by wrongful arrests, job denials, and worse.
In the real world, where surveillance images are imperfect and lives are on the line, it's time to place controls on facial recognition surveillance technology. Twenty-one cities and several states have already taken action to ban government or police use of FRT.
New York stands at a pivotal moment to join them in curbing unethical, dangerous FRT. Proposed laws in New York state and city would stop landlords from using biometric surveillance to invade the privacy of our homes. Additional bills would protect us from facial surveillance in stores and other public venues, preventing debacles like the Rite Aid FRT rollout that led to thousands of wrongful investigations, including the search of an innocent 11-year-old girl. Most powerfully, a proposed state ban on police use would keep biased FRT from damaging people's lives.
Technology should enhance our lives, not diminish our safety and freedoms. Our surveillance future isn’t written yet. Now is the time to hit the stop button on facial recognition surveillance technology.
Burchett is a member of S.T.O.P.’s Junior Board.