By Evan Selinger
Americans have tended to be optimistic about technology, so it’s striking to see that many of us have mixed feelings about the latest technologies. Artificial intelligence and robotics promise to improve our quality of life, especially in health care and transportation. But there are also deep concerns that our tools have become too powerful and must be reined in. Some critics (including me) say that the most controversial technologies, like facial recognition, can’t be integrated into a just society and should be banned.
Orly Lobel, a law professor at the University of San Diego, worries that tech criticism has gone too far. She says well-meaning privacy advocates like me are dramatically underestimating the possibilities for fixing flaws like algorithmic biases. Lobel argues in her new book, “The Equality Machine: Harnessing Digital Technology for a Brighter, More Inclusive Future,” that technology can make society fairer. This means getting past the gloom and doom and “proposing positive uses, progressive improvements, creative solutions, and systematic safeguards.” Our conversation has been condensed and edited.
You write that your experience in the Israeli military influenced your views about technology as a force for equality. How so?
I was a data analyst in Israeli Military Intelligence before I went to law school and came to the United States. Based on perceptions about the risks only men could take in combat situations, roles were segregated, and women had very limited opportunities. I wasn’t eligible to be a fighter pilot like the man I ended up marrying. Computers and digitization expanded our possibilities. Technology reduced the importance of physical differences and provided a paper trail for identifying who worked on significant breakthroughs.
What specifically are you advocating for when you say the goal of equality should be embedded in every digital advance?
In every aspect of integrating technology, we have choices to make. If we’re doing algorithmic decision-making, are we checking outputs for diverse outcomes? Or are we just replicating what we’ve done in the past? Are we challenging stereotypes when designing technology like the humanoid robots that are getting integrated into our homes and workspaces? Equality, in this sense, is contextual. It depends on what we’re using technology to achieve.
What, then, is equality?
The law and our social contracts give us some definitions that we can all agree on. For example, we want antidiscrimination in the workplace. People shouldn’t be judged based on protected identities. But I think technology is opening up the possibility of an even richer definition of equality. We have to think about things like distributive justice to ensure that the benefits of progress are distributed in ways that really benefit everybody. Technology allows us to reimagine justice, to find new ways to promote fairness.
Take the context of screening applicants in the workplace. If we design our algorithms to find people similar to those hired in the past, the outputs will amplify past wrongs and inequities. But if you have the goal of equality, you’re seeking to train algorithms to be more exploratory to increase diversity in hiring.
Here’s another example. We can use machine learning to screen job ads for biases and related features, picking up on things the human eye wouldn’t catch. Everything from font choice to the use of bullet points to word choices like sports analogies can deter premier talent from applying.
Technology alone cannot solve the hardest questions we have always faced about what makes a fair and equal society. But if there is a democratic will for true change, technology helps us take a more robust and proactive approach to equality, detecting root causes of inequities, redistributing resources in more sustainable ways, and tackling underlying impediments to inclusion.
Professors are accused of being biased. Should universities use AI to screen their material?
We’ve decided that academia is a very special place that promotes the freedom of thought and ideas. That’s why we have tenure. It’s perfectly reasonable to say that even though technology can give us a lot of information, this is not a place where we want to use it.
Should schools use AI to help students detect biases in their writing?
There’s been a lot of reporting on how Google searches are linked to biased images, like searching for a CEO and seeing white men on the first 10 pages. In a similar way, it’s not a bad idea, if a student is writing an essay, to flag that the nurse is a woman and the doctor is a man. Maybe after seeing this information, the student would want to reverse the stereotypes.
Why do you think privacy gets in the way of equality?
The media and some policymakers and politicians keep warning us about surveillance, telling us that monitoring and extracting data is particularly bad for vulnerable people. That’s descriptively incorrect in many contexts, where knowing more can help correct biases.
Giving information to LinkedIn and even social media platforms vastly expands people’s circles beyond the all-boys network that’s shaped word-of-mouth hiring in the past. And there’s an app called Know Your Worth that crowdsources salary information. This is really important information because racial and gender pay gaps have been stagnant. Employers know the pay scales and salaries they offer, but lots of employees don’t know that they’re being underpaid.
The same is true in many other contexts beyond salary and employment, such as health, safety, education, politics, and community development. We have to be open to giving up information for the collective good.
Were privacy advocates wrong to say companies shouldn’t use flawed technology to scan job applicants’ facial expressions to determine their emotions?
Something can be flawed and still better than what we’ve been doing. There are so many behavioral biases and flaws, cognitive failures, during the hiring process. Having been on appointment committees, I’ve seen it time and again: a colleague says something like, this person, who happens to be a woman of color, doesn’t have the right affect. We don’t need technology to be perfect to accept that it can do better on average than a human and that integrating it is a good idea. We can test the outputs to see whether the technology produces more diverse results than what we had in the past or would get with a different product.
A lot of the time, technology is being rejected too fast. We’re imposing double standards on algorithms versus humans.
Are you saying the value of privacy is overstated?
Yes. Privacy is just one of many things we value in our society, and it’s often in tension with other goals we’re trying to achieve. We’ve seen this problem in the life-and-death context of a global pandemic where we’ve privileged privacy and anti-surveillance. There should have been a mandatory, centralized, government-sponsored digital contact tracing effort. But privacy was put ahead of public health.
Why do you think there’s so much emphasis on privacy in our society?
We have a history here in the United States of valuing civil liberties more than social rights. In the Digital Age, we’re constantly told that the government and big tech are extracting data to manipulate and harm us. But there’s not enough nuance and precision to clarify what harms we’re afraid of. And we rarely ask what we lose when we’re not collecting data and we instead maintain the status quo.
Evan Selinger is a professor of philosophy at the Rochester Institute of Technology, an affiliate scholar at Northeastern University’s Center for Law, Innovation, and Creativity, and a scholar in residence at the Surveillance Technology Oversight Project (S.T.O.P.).