When I stepped into the City Hall boardroom, it was filled with the nervous energy of the first day of school or a new job. But the occasion was something far wonkier: the inaugural meeting of the New York City Automated Decision Systems (ADS) Task Force. Excitingly, this was the first task force in the country to comprehensively analyze the impact of artificial intelligence on government. Looking at everything from predictive policing, to school assignments, to trash pickup, the people in this room were going to decide what role AI should play and what safeguards we should have.
But that’s not what happened.
Flash forward 18 months, and the end of the process could not have been more different from its start. The nervous energy had been replaced with exhaustion. Our optimism that we'd be able to provide an outline for how the New York City government should use automated decision systems gave way to a fatalistic sense that we might not be able to tackle a problem this big after all.
The New York ADS Task Force was created by a City Council bill passed into law in 2017, a response to growing alarm about the powerful and opaque role that automated decision systems play in our society. Badly designed systems have done everything from encouraging racist over-policing of communities of color, to misallocating health and fire resources, to falsely accusing people of benefits fraud. One Michigan system wrongly accused 40,000 residents of defrauding state unemployment insurance, driving some into destitution or even suicide.
The City Council's initial, aggressive proposal would have required New York City to use only open-source decision systems. But it quickly gave way to the more politically viable compromise of a task force, which was charged with developing comprehensive guidance on how to regulate AI to ensure due process, stop government waste, and prevent bias.
And at first, the task force itself looked promising. Rather than just appointing political allies, Mayor de Blasio appointed leading experts in the field, such as Meredith Whittaker of AI Now and Solon Barocas of Cornell Tech and Microsoft Research. And there was me, standing in for my boss, the executive director of the Muslim civil rights organization I worked for at the time. As an appointee in all but title, I attended the sessions, received updates from city officials, and shared in the growing sense of alarm and despair that the process we were taking part in was going horribly wrong.
Perhaps the first sign was the schedule: a city proposal that we have just half a dozen meetings to accomplish our historic task. City officials quickly conceded more time was needed, but in hindsight it should have been a warning that officials hoped for a task force report that was more of a rubber stamp than what we dewy-eyed reformers had in mind. We longed for the chance to get the data to truly understand how these systems worked in the real world: what they got right, what they got wrong, and which policies would actually help the New Yorkers whose lives were being shaped by biased and broken algorithms.
Then the debates started, centered on the threshold question: What exactly is an automated decision system? Is it any computer-based automation? What about policy-driven automation, when individuals' discretion is constrained by policies and procedures memorialized on paper, like the NYPD Patrol Guide? Could it even extend to the city's standardized high school entrance exam?
You see, it gets pretty hard to come up with recommendations for a thing unless you can agree on what that thing is. But we couldn't reach consensus. City officials raised the specter of unworkable regulations that would apply to every calculator and Excel document, a Kafkaesque nightmare where simply constructing a pivot table would require interagency approval. In place of this straw man, they offered a constricted alternative: a world of AI regulation focused on algorithms and advanced machine learning alone.
The problem is that at a moment when the world is fascinated with stories about the dire power of machine learning and the other confabulations of big data known by the catchphrase "AI," some of the most powerful forms of automation still run on Excel, or in simple scripts. You don't need a multi-million-dollar natural-language model to build a dangerous system, one that makes decisions without human oversight and has the power to change people's lives. And automated decision systems do that quite a bit in New York City.
But while the city's officials hoped to confine our purview to algorithmic source code, the task force was given no insight into how even the simplest automated decision systems worked. By January 2019, there was growing anger about the city's unwillingness to provide information on what automated decision systems it already used. This undercut the value of the task force, which aimed to escape the theories and generalizations of the ivory tower to examine how these tools were operating in the real world, using the country's largest city as our test case. Only we never got the data.
There were delays, and obfuscations, and then, by the spring of 2019, outright denials. It wasn't because the data didn't exist: The city had information on its existing systems, including data that could help third parties understand whether a model was being used in a context where it was likely to discriminate or have other adverse impacts. It even had model cards, which provide performance data, an explanation of intended use, and details on training and evaluation data, for dozens of different systems, all of which it kept to itself. This is the sad reality of city politics. The City Council passed the task force into law to hold the administration accountable for its secretive use of algorithms, but that was the last thing the administration wanted.
I stopped attending task force meetings in January because I left my prior job to found the Surveillance Technology Oversight Project, a nonprofit fighting to protect New Yorkers' privacy. But even after I left my role on the task force, I stayed closely tied to the work. I strategized with other members, I testified at City Council oversight hearings, I even spoke openly to the press about the dysfunction. But all to no avail. Despite the efforts of the other reformers and me, the final blow to our hopes came just last week, when the task force's report was finally published.
The document has the air of a college paper hastily prepared by a student the day before the deadline. Of its 36 pages, more than half are devoted to simply recounting the task force's history, presenting the members' CVs, and thanking the groups that testified at task force hearings. The group's recommendations, the entire point of its existence, run just eight pages.
Rather than providing a thoughtful critique of how specific systems succeed and fail, the document makes only passing reference to an array of concerns, ranging from bias, to funding, to regulatory burden. And thus died this first valiant effort at municipal algorithmic accountability.
But while this opportunity for oversight may have disappeared, the dangers posed by government algorithmic systems have only grown. A comprehensive, in-depth analysis of the ways that governments use AI to make decisions about people’s lives is more urgent than ever before. The only question is whether other cities will have the political will to do more than perform a transparency shadow play and actually pull back the curtain on their algorithms.
Albert Fox Cahn is the founder and executive director of the Surveillance Technology Oversight Project (S.T.O.P.), a New York-based civil rights and privacy group, and a fellow at the Engelberg Center for Innovation Law & Policy at N.Y.U. School of Law.