By Mahima Arya and Nina Loshkajian
In 2020, New York became a national civil rights leader, the first state in the country to ban facial recognition in schools. But more than two years later, state officials are examining whether to reverse course and give a passing grade to this failing technology.
Wasting money on biased and faulty tech will only make schools a harsher, more dangerous environment for students, particularly students of color, LGBTQ+ students, immigrant students, and students with disabilities. Preserving the statewide moratorium on biometric surveillance in schools will protect our kids from racially biased, ineffective, insecure, and dangerous tech.
Biometric surveillance depends on artificial intelligence, and human bias infects AI systems. Facial recognition software programmed to recognize only two genders will render transgender and nonbinary individuals invisible. A security camera that learns who is “suspicious looking” from pictures of inmates will replicate the systemic racism that drives the mass incarceration of Black and brown men. Facial recognition systems may be up to 99 percent accurate on white men, yet wrong more than one in three times for some women of color.
What’s worse, facial recognition technology is even less accurate when used on students. Voice recognition software, another widely used biometric surveillance tool, echoes this pattern of poor accuracy for those who are non-white, non-male, or young.
The data collected by biometric surveillance technologies is vulnerable to a variety of security threats, including hacking, data breaches, and insider attacks. This data – which includes scans of facial features, fingerprints, and irises – is unique and highly sensitive, making it a valuable target for hackers, and once compromised, it cannot be reissued the way a password or PIN can. Collecting and storing biometric data in schools, which tend to have inadequate cybersecurity practices, puts children at great risk of being tracked and targeted by malicious actors. There is absolutely no need to expose children to these privacy and safety risks.
The types of biometric surveillance technology being marketed to schools are widely recognized as dangerous. One particularly controversial facial recognition vendor, Clearview AI, has reportedly tested or implemented its systems in more than 50 educational institutions across 24 states. Other countries have started to appreciate the threat Clearview poses to privacy: Australia recently ordered the company to stop scraping images, and last year privacy groups in Austria, France, Greece, Italy, and the U.K. filed legal complaints against it, all while Clearview continues to market its products to schools in the U.S. As the world wakes up to the risks of facial recognition, New York should not make the mistake of subjecting young kids to its harms.

Surveillance also changes how students act: one study found that CCTV systems in U.K. secondary schools led many students to suppress their expressions of individuality and alter their behavior. Normalizing biometric surveillance will bring about a bleak future for kids at schools across the country.
New York shouldn’t waste money on tech that criminalizes and harms young people. Most school shootings are committed by current students or alumni of the school in question, whose faces would not be flagged as suspicious by facial recognition systems. And even if the technology were to flag a real potential perpetrator of violence, most school shootings end so quickly that law enforcement would be unlikely to be notified and arrive at the scene in time to prevent such horrendous acts.
Students, parents, and other stakeholders can submit a brief survey to let the State Education Department know that they want facial recognition and other biased AI out of their schools, not just temporarily but permanently. New York must at least extend the moratorium on biometric surveillance in schools, and ultimately should end the use of such problematic technology altogether.
Mahima Arya is a computer science fellow at the Surveillance Technology Oversight Project (S.T.O.P.), a human rights fellow at Humanity in Action, and a graduate of Carnegie Mellon University. Nina Loshkajian is a D.A.T.A. Law Fellow at S.T.O.P. and a graduate of New York University School of Law.