“You’re going to kill me.”
It’s chilling to hear Tony Timpa’s words on a Dallas police officer’s bodycam recording because he was right.
A few minutes after Mr. Timpa, 32, pleaded with the police for his life, it was over and his corpse was loaded into the back of an ambulance.
The bodycam footage shows a lot. It shows us that Mr. Timpa asked for help more than 30 times. It shows that the officers joked about waking Mr. Timpa up for waffles and school. But still the picture is incomplete, because the recording gives only the point of view of the police officers.
As a matter of fundamental human psychology, we’re primed to align ourselves with the person whose point of view we adopt. fMRI studies have found that our brains respond to first-person footage in a way they do not when the same scene is viewed from a third-person vantage point. Simply put: Showing the officer’s perspective makes viewers defer to the officer’s narrative. When Miami-Dade County in Florida approved $5 million for bodycams in 2015, a dissenting commissioner noted that while sports calls can be debated from many angles, with police cameras, “it’s one angle, it’s one moment in time. There are issues there.”
Cameras that were sold to the public with the promise of increased accountability instead end up reinforcing the police narrative. This dynamic is yet another example of a disturbing trend: Technological solutions to human problems often have alarming side effects that aren’t fully understood until the technology is in wide use.
One of the most crucial concerns motivating the widespread adoption of bodycams was use of force by the police. In the aftermath of the 2014 deaths of Eric Garner, Michael Brown, Tamir Rice and many others, there was a concerted effort to win over the public to bodycams with the promise of justice. When a St. Louis County grand jury decided not to indict Michael Brown’s killer, his family asked the country to join in a “campaign to ensure that every police officer working the streets in this country wears a body camera.”
But the shaky, low-quality nature of bodycam footage often adds what critics call a “deceptive intensity,” which can help justify police use of force.
One of the first major evaluations of bodycams, in 2014 by the Police Executive Research Forum, offered the tantalizing promise that “body-worn cameras are helping to prevent problems from arising in the first place by increasing officer professionalism, helping agencies evaluate and improve officer performance, and allowing agencies to identify and correct larger structural problems within the department.” The next year, when Attorney General Loretta Lynch announced a $23 million bodycam pilot program, she echoed the promise of “transparency, accountability and credibility.”
Lax departmental policies give many officers discretion over when and what to record. Combine that with lopsided privacy protections, which are more focused on protecting officers than the general public, and bodycams begin to look less like a tool to keep cops in line and more like a tool to monitor civilians. We know about Mr. Timpa’s excruciating final minutes only because of a three-year legal campaign by The Dallas Morning News to have the footage released. And that case is not an outlier.
The New York Police Department failed to release bodycam footage in 40 percent of cases where it was requested by the Civilian Complaint Review Board. Yet when the footage is favorable to the police, it is often released or privately leaked within a matter of hours. That double standard can warp the public’s understanding of how and when the police use force.
These factors would be enough to chill the campaign for bodycam adoption, but the harms go much further. Often, the tools that are sold to the public as a mechanism to hold officers accountable are turned against the very communities they are supposed to empower.
Each officer’s bodycam captures information on the hundreds or even thousands of individuals that officer sees on a given day. These recordings can reveal a lot about innocent members of the public, and the privacy harm is only amplified by the emerging push to integrate facial recognition surveillance into existing bodycams.
Facial recognition integrated with bodycams would allow the police to turn a walk down the block into a warrantless search of where people go and with whom they associate. An officer wearing such a camera outside an abortion clinic or a political protest, for example, could chill some of our most fundamental constitutional rights.
Concerns about perspective and selective recording that apply to bodycams also apply to videos of police interrogations. Unsurprisingly, interrogation videos shot from the suspect’s perspective are far more effective at convincing viewers that a confession was coerced than videos shot from the interrogators’ perspective. But unlike police bodycams, which are attached to officers’ uniforms, interrogation rooms offer no technical excuse for recording only one side of the conversation. Yet at a time when close to two dozen states lack any requirement to record custodial interrogations, few, if any, mandate that cameras show interrogations from a neutral perspective.
This trend is also evident in A.I.-driven forensic science. Commissioner James O’Neill of the New York Police Department wrote in The Times recently that facial recognition is a tool to protect defendants: “The software has also cleared suspects. … When facial recognition technology is used as a limited and preliminary step in an investigation — the way our department uses it — these miscarriages of justice are less likely.” Yet this description of facial recognition bears little resemblance to the realities of his department’s practices.
In at least one documented case, this supposed bulwark against wrongful conviction led to the police texting a purported photo match to a witness and asking, “Is this the guy?” This sort of leading question is exactly the kind of contamination that can suggest a false identification to an eyewitness, who may then go on to misidentify a person in court. Again, a technology sold on the promise of accountability turned into a threat.
Each of these problems has its own solutions. For bodycams, that means new privacy safeguards and limits on officer discretion about when to record. For interrogations, it means filming them from a neutral angle and for the duration of questioning. And for facial recognition, it means a moratorium until questions about bias can be resolved.
It is wise to be wary of technical fixes to complicated problems. It’s ever clearer that surveillance solutions can be worse than the problems they seek to fix.
Mr. Cahn (@cahnlawny) is the executive director of the Surveillance Technology Oversight Project at the Urban Justice Center, a New York-based civil rights and police accountability organization.