Sometimes, the cure is worse than the disease. When it comes to the dangers of artificial intelligence, badly crafted regulations that give a false sense of accountability can be worse than none at all. This is the dilemma facing New York City, which is poised to become the first city in the country to pass rules on the growing role of AI in employment.
More and more, when you apply for a job, ask for a raise, or wait for your work schedule, AI is choosing your fate. Alarmingly, many job applicants never realize that they are being evaluated by a computer, and they have almost no recourse when the software is biased, makes a mistake, or fails to accommodate a disability. While New York City has taken the important step of trying to address the threat of AI bias, the problem is that the rules pending before the City Council are bad, really bad, and we should listen to the activists speaking out before it’s too late.
Some advocates are calling for amendments to this legislation, such as expanding definitions of discrimination beyond race and gender, increasing transparency, and covering the use of AI tools in hiring, not just their sale. But many more problems plague the current bill, which is why a ban on the technology is presently preferable to a bill that sounds better than it actually is.
Industry advocates for the legislation are cloaking it in the rhetoric of equality, fairness, and nondiscrimination. But the real driving force is money. AI fairness firms and software vendors are poised to make millions from the software that could decide whether you get a job interview or your next promotion. Software firms assure us that they can audit their tools for racism, xenophobia, and inaccessibility. But there’s a catch: None of us know if these audits actually work. Given the complexity and opacity of AI systems, it’s impossible to know what requiring a “bias audit” would mean in practice. As AI rapidly develops, it’s not even clear whether audits would work for some types of software.
Even worse, the legislation pending in New York leaves the answers to these questions almost entirely in the hands of the software vendors themselves. The result is that the companies that make and evaluate AI software are inching closer to writing the rules of their industry. This means that those who get fired, demoted, or passed over for a job because of biased software could be completely out of luck.
But this isn’t just a question about regulations in one city. After all, if AI firms can capture regulations here, they can capture them anywhere—and that is why this local saga has national implications.
Even with some modifications, the current legislation risks further setting back the fight against algorithmic discrimination—as highlighted in a letter signed by groups such as the NAACP Legal Defense and Educational Fund, the New York Civil Liberties Union, and our own organization, the Surveillance Technology Oversight Project. To start, the bill’s definition of an employment algorithm doesn’t capture the wide range of technologies that are used in the employment process, from applicant tracking systems to digital versions of psychological and personality assessments. While the bill could apply to some software firms, it largely lets employers—and New York City government agencies—off the hook.
Beyond these problems, automated résumé-reviewers themselves can create a feedback loop that further excludes marginalized populations from employment opportunities. AI systems “learn” who to hire based on past hiring decisions, so when the software discriminates for or against one group of workers, those data “teach” the system to discriminate even more in the future.
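To make that feedback loop concrete, here is a minimal sketch in Python—our own toy simulation, not a model of any actual vendor’s product, with entirely hypothetical groups, scores, and numbers. A screening tool learns a per-group score from past hires, over-selects the higher-scoring group, and then feeds its own selections back in as training data.

```python
# Purely illustrative simulation of a hiring feedback loop (hypothetical data).
# A screening "model" scores each group by its share of past hires, over-selects
# the higher-scoring group, and its selections become new training data, so an
# initial 60/40 imbalance compounds round after round.

def group_scores(history):
    """Score each group by its share of past hires (a crude learned prior)."""
    total = len(history)
    return {g: history.count(g) / total for g in ("A", "B")}

def hires_per_group(scores, n_hires):
    """Rank-and-cut screening over-selects the favored group; here that
    over-selection is stylized by weighting each group by its score squared."""
    weights = {g: s ** 2 for g, s in scores.items()}
    z = sum(weights.values())
    return {g: round(n_hires * w / z) for g, w in weights.items()}

# Historical hiring data with a modest initial skew: 60% group A, 40% group B.
history = ["A"] * 60 + ["B"] * 40

for rnd in range(1, 6):
    scores = group_scores(history)
    hires = hires_per_group(scores, n_hires=100)
    history += ["A"] * hires["A"] + ["B"] * hires["B"]  # feedback: hires become training data
    print(f"round {rnd}: group B wins {hires['B']} of 100 offers")
```

Even in this stylized setup, the favored group’s head start compounds: group B starts at 40 percent of past hires but wins a shrinking share of offers each round.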
One of the leading proponents of the New York City legislation, Pymetrics, claims to have developed the tools to “de-bias” its hiring AI, but as with many other firms, its claims largely have to be taken on faith. This is because the machine learning systems that are used to determine an employee’s fate are often too complex to meaningfully audit. For example, while Pymetrics may take steps to eliminate some kinds of unfairness in its algorithmic model, that model is just one point of potential bias in a broader machine learning system. This would be like saying that you know a car is safe to drive simply because the engine is running well; there’s a lot more that can go wrong in the machine, whether it’s a flat tire, bad brakes, or any number of other faulty parts.
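The car analogy maps onto code fairly directly. In the hypothetical sketch below—our own fabricated data and rules, not Pymetrics’ actual system—the scoring model in the middle of the pipeline passes a simple group-parity check, but a résumé pre-filter upstream of it rejects applicants with long employment gaps, a proxy that in this made-up data falls more heavily on one group, so end-to-end pass rates diverge even though the audited model looks fair.

```python
import random

random.seed(1)

def make_applicant(group):
    # Fabricated data: skill is distributed identically across groups, but
    # group "B" applicants are (hypothetically) far more likely to have a
    # long employment gap, e.g., from caregiving.
    gap_chance = 0.5 if group == "B" else 0.1
    return {"group": group,
            "skill": random.random(),
            "gap_months": 18 if random.random() < gap_chance else 0}

def pre_filter(applicant):
    """Resume-parsing rule applied before the model ever sees the applicant."""
    return applicant["gap_months"] < 12

def model_score(applicant):
    """The audited model: group-blind, depends only on skill."""
    return applicant["skill"]

applicants = [make_applicant(g) for g in ("A", "B") for _ in range(5000)]

# "Model-only audit": among applicants the model actually scores, pass rates match.
scored = [a for a in applicants if pre_filter(a)]
for g in ("A", "B"):
    pool = [a for a in scored if a["group"] == g]
    rate = sum(model_score(a) > 0.5 for a in pool) / len(pool)
    print(f"model-only audit, group {g}: pass rate {rate:.2f}")

# End-to-end outcome: the pre-filter removes far more of group B before scoring.
for g in ("A", "B"):
    pool = [a for a in applicants if a["group"] == g]
    rate = sum(pre_filter(a) and model_score(a) > 0.5 for a in pool) / len(pool)
    print(f"full pipeline,   group {g}: pass rate {rate:.2f}")
```

An audit confined to the scoring model would report parity; only an audit of the whole pipeline, data intake and pre-filters included, surfaces the disparity.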
Algorithmic auditing holds much potential to identify bias in the future, but the truth is that the technology isn’t yet ready for prime time. It’s great when companies want to use the technology on a voluntary basis, but it’s not something that can be easily imported into a city or state law.
But there is a solution available, one that cities such as New York can implement in the face of a growing number of algorithmic hiring tools: a moratorium. We need time to create rules of the road, but that doesn’t mean this terrible technology should be allowed to flourish in the interim. Instead, New York could take the lead in pressing pause on AI hiring tools, telling employers to use manual HR techniques until we have a framework that works. It’s not a perfect solution, and it may slow down some beneficial technology, but the alternative is giving harmful tools the green light—and creating a false sense of security in the process.
Albert Fox Cahn (@FoxCahn) is the founder and executive director of the Surveillance Technology Oversight Project (S.T.O.P.), a New York–based civil rights and privacy group, and a fellow at Yale Law School’s Information Society Project and the Engelberg Center on Innovation Law & Policy at New York University School of Law.
Justin Sherman (@jshermcyber) is the technology adviser to the Surveillance Technology Oversight Project and cofounder of Ethical Tech at Duke University.