
For Immediate Release


S.T.O.P. Warns White House A.I. Strategy Is Wrong Approach

(New York, NY 10/30/2023) – Today, the Surveillance Technology Oversight Project (S.T.O.P.), a New York-based privacy and civil rights group, warned that the White House’s plan for “safe, secure, and trustworthy” AI was the wrong approach and would enable further AI abuses. Instead, the civil rights group called for an immediate moratorium on the most harmful uses of AI and civil discovery reforms to make it easier for those harmed by AI to sue.

SEE: The White House - FACT SHEET: President Biden Issues Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence
https://www.whitehouse.gov/briefing-room/statements-releases/2023/10/30/fact-sheet-president-biden-issues-executive-order-on-safe-secure-and-trustworthy-artificial-intelligence/
 
“A lot of the AI tools on the market are already illegal,” said Surveillance Technology Oversight Project Executive Director Albert Fox Cahn. “The worst forms of AI, like facial recognition, don’t need guidelines; they need a complete ban. Many forms of AI simply should not be allowed on the market, and many of these proposals are regulatory theater that lets abusive AI stay on the market. The White House is also repeating the mistake of over-relying on AI auditing techniques that can easily be gamed by companies and agencies.”
 
Previously, the civil rights group called on American lawmakers to adopt a modified version of the European Union’s AI Liability Directive, which would shift the burden of proof to defendants in certain lawsuits involving AI. The proposed rules would require courts to presume that many AI systems caused the alleged harm and operated unlawfully unless defendants rebut that presumption. This is in sharp contrast to typical litigation, where plaintiffs bear the burden of proof on every element of their case.

SEE: EU Legislation In Process Briefing – Artificial intelligence liability directive
https://www.europarl.europa.eu/RegData/etudes/BRIE/2023/739342/EPRS_BRI(2023)739342_EN.pdf
 
Cahn continued, “Through burden shifting, we can restore the status quo, ensuring that companies and governments that use AI to break the law are held accountable. The problem is that when someone uses black-box AI that discriminates, steals copyrighted content, or breaks other laws, you can’t prove it. Right now, AI systems can break the law with impunity because of how hard it is to get your foot in the courthouse door. But if we follow the lead of European regulators and shift the burden to defendants, we’ll be able to restore the rule of law for the algorithmic age. The White House is failing the American people by failing to prioritize real protections for those facing AI discrimination and AI policing.”

The Surveillance Technology Oversight Project is a non-profit advocacy organization and legal services provider. S.T.O.P. litigates and advocates for privacy, fighting excessive local and state-level surveillance. Our work highlights the discriminatory impact of surveillance on Muslim Americans, immigrants, and communities of color.

--END--

CONTACT: S.T.O.P. Executive Director Albert Fox Cahn.
