
Ava Kofman
The Intercept

WHEN CIVIL LIBERTIES advocates discuss the dangers of new policing technologies, they often point to sci-fi films like “RoboCop” and “Minority Report” as cautionary tales. In “RoboCop,” a massive corporation purchases Detroit’s entire police department. After one of its officers gets fatally shot on duty, the company sees an opportunity to save on labor costs by reanimating the officer’s body with sleek weapons, predictive analytics, facial recognition, and the ability to record and transmit live video.

Although intended as a grim allegory of the pitfalls of relying on untested, proprietary algorithms to make lethal force decisions, “RoboCop” has long been taken by corporations as a roadmap. And no company has been better poised than Taser International, the world’s largest police body camera vendor, to turn the film’s ironic vision into an earnest reality.

In 2010, Taser’s longtime vice president Steve Tuttle “proudly predicted” to GQ that once police can search a crowd for outstanding warrants using real-time face recognition, “every cop will be RoboCop.” Now Taser has announced that it will provide any police department in the nation with free body cameras, along with a year of free “data storage, training, and support.” The company’s goal is not just to corner the camera market, but to dramatically increase the video streaming into its servers.

With an estimated one-third of departments using body cameras, police officers have been generating millions of hours of video footage. Taser stores terabytes of such video on Evidence.com, in private servers operated by Microsoft, to which police agencies must continuously subscribe for a monthly fee. Data from these recordings is rarely analyzed for investigative purposes, though, and Taser — which recently rebranded itself as a technology company and renamed itself “Axon” — is hoping to change that.

Taser has started to get into the business of making sense of its enormous archive of video footage by building an in-house “AI team.” In February, the company acquired a computer vision startup called Dextro and a computer vision team from Fossil Group Inc. Taser says the companies will allow agencies to automatically redact faces to protect privacy, extract important information, and detect emotions and objects — all without human intervention. This will free officers from the grunt work of manually writing reports and tagging videos, a Taser spokesperson wrote in an email. “Our prediction for the next few years is that the process of doing paperwork by hand will begin to disappear from the world of law enforcement, along with many other tedious manual tasks.” Analytics will also allow departments to observe historical patterns in behavior for officer training, the spokesperson added. “Police departments are now sitting on a vast trove of body-worn footage that gives them insight for the first time into which interactions with the public have been positive versus negative, and how individuals’ actions led to it.”

But looking to the past is just the beginning: Taser is betting that its artificial intelligence tools might be useful not just to determine what happened, but to anticipate what might happen in the future.

“We’ve got all of this law enforcement information with these videos, which is one of the richest treasure troves you could imagine for machine learning,” Taser CEO Rick Smith told PoliceOne in an interview about the company’s AI acquisitions. “Imagine having one person in your agency who would watch every single one of your videos — and remember everything they saw — and then be able to process that and give you the insight into what crimes you could solve, what problems you could deal with. Now, that’s obviously a little further out, but based on what we’re seeing in the artificial intelligence space, that could be within five to seven years.”

As video analytics and machine vision have made rapid gains in recent years, the future long dreaded by privacy experts and celebrated by technology companies is quickly approaching. No longer is the question whether artificial intelligence will transform the legal and lethal limits of policing, but how and for whose profits.

“Everyone refers to ‘Minority Report’ … about how they use facial recognition and iris recognition,” said Ron Kirk, director of the West Virginia Intelligence Fusion Center, which uses both technologies, in an interview with Vocativ. “I actually think that that is the way of the future.”

Reinforcing Bias

Taser’s corporate ethos has long been inspired by cinematic science fiction. The company’s LinkedIn page describes its Seattle headquarters as “a mix of Star Wars, James Bond, Get Smart and Star Trek.” It even boasts eye scanners and sliding doors lifted from “Men in Black.”

But the company took its sci-fi references to the next level in a little-publicized Law Enforcement Technology Report released earlier this year. In one of the interviews featured in the report, Arizona State University scientist George Poste explains that while artificially intelligent policing has yet to realize “the fully futuristic dimension of ‘RoboCop’ where you essentially have someone wearing an exoskeleton linked to advanced artificial intelligence capabilities,” or “the Tom Cruise ‘Minority Report’-level of cognitive prediction, … patterns of individual behavior will become increasingly informative in revealing the probability that an individual will act in a particular fashion.”


Page from Taser’s 2017 Law Enforcement Technology Report.

Document: Taser

Overall, the report sells departments on how Taser will leverage its cloud of data “to anticipate criminal activity” and “predict future events.” “Imagine,” the report tells officers, that “you can find out if someone has a criminal record instantly — or be notified if someone’s demeanor has changed and may now be a threat.” While a tool like emotion detection is more marketing hype than imminent reality, such goals reveal the ambitions of Taser’s long-term blueprint.

The report repeatedly compares Taser’s repurposing of its video data not just to pre-crime, but to the efforts of Wal-Mart, Google, Facebook, and Microsoft, all of which scrape their respective user data to anticipate purchases, tailor text, monitor activities, and optimize search results. Taser’s AI unit is using the same cutting-edge technique as these major technology companies: deep learning.

Deep learning works by teaching computers to recognize patterns. The system is not given if-then rules; instead, it’s asked to infer associations from large batches of data. Whereas a rule-based algorithm learns that a “cat equals two ears, narrow body, and a tail, but isn’t a rat” — and incrementally makes progress as it’s given increasingly specific rules — a deep learning system ingests a training set of hundreds of thousands of images that have been labeled as cats, lynxes, wolves, and so on. Layers of “neural networks” loosely mimic the structure of a human brain, strengthening or weakening connections with each correct or incorrect guess. But exactly how the deep learning system ultimately grasps the essence of a cat is not known; as with the juridical standard for obscenity, it just knows it when it sees it.
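The contrast can be sketched with a toy example. This is purely illustrative — synthetic feature vectors standing in for images, a tiny network, nothing resembling Dextro’s actual system — but it shows the core idea: the programmer writes no “cat rules,” only a training loop that nudges random weights toward whatever pattern the labeled examples contain.

```python
import numpy as np

# Toy sketch: the network is never told the rule that generates the labels;
# it infers the association from labeled examples alone.
rng = np.random.default_rng(0)

# Synthetic training set: each row is a feature vector; label 1 = "cat", 0 = "not cat".
X = rng.normal(size=(200, 4))
true_w = np.array([1.5, -2.0, 0.5, 1.0])
y = (X @ true_w > 0).astype(float)          # hidden rule the net must discover

# One hidden layer; weights start random.
W1 = rng.normal(scale=0.5, size=(4, 8))
W2 = rng.normal(scale=0.5, size=(8, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(2000):
    h = np.tanh(X @ W1)                     # hidden-layer activations
    p = sigmoid(h @ W2).ravel()             # predicted probability of "cat"
    # Backpropagation: strengthen or weaken each weight in proportion
    # to how wrong the prediction it contributed to was.
    grad_out = (p - y)[:, None] / len(X)
    W2 -= 1.0 * (h.T @ grad_out)
    W1 -= 1.0 * (X.T @ ((grad_out @ W2.T) * (1 - h**2)))

accuracy = ((p > 0.5) == y).mean()
print(f"training accuracy: {accuracy:.2f}")
```

The trained weights encode the pattern, but inspecting them reveals no human-readable rule — which is exactly the opacity the next paragraph turns to.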

But while the complex associations of a deep learning system are opaque even to its programmers, the training labels for its datasets are human-generated. They can also be subject to bias. Many neural networks have already been found to reveal the geographical, racial, and socio-economic positions of their human trainers even as their complexity lends them an appearance of greater objectivity. Studies show that facial recognition neural nets trained on white faces, for instance, have trouble recognizing the faces of African-Americans.

This is why artificial intelligence experts fear that the human decisions that shape the way the data is collected, labeled, and perceived might not just reinforce the racial biases of the criminal justice system, but automate them. Dextro’s deep learning system, for instance, learns to pick out objects, like stop signs, guns, and license plates, and to discern actions, like the difference between a jogger and a suspect fleeing the police.

The raw data fed into video analytics systems is itself captured and created by the police, said Elizabeth Joh, a law professor and policing expert at the University of California, Davis. “If you think about it,” she said, “some of the factors that algorithms use are products of human discretion. Crime reporting, contact cards, and arrest rates are not neutral. … You get analog facts transformed into unassailable, objective truths, and we have to be pretty skeptical about that.” Teaching the machine to look for “hoodies” may already be a reflection of human assumptions, not criminal propensity.

Taser responded that it believes body camera “video represents an important step closer to what happened at an event.” When asked about racially disparate policing practices, the spokesperson said that the “huge gain in information fidelity and transparency in video (versus text) is something that we believe can identify such bias.”

Yet body-worn cameras show the police point of view by design; additionally, their footage will likely be labeled by officers, rather than civilians, meaning that systems could be taught to classify the behaviors of certain civilians as aggressive if such categorizations helped to support the officer’s narrative in a use-of-force encounter.

When it comes to programs like stop and frisk in New York City or traffic violations in Ferguson, Missouri, courts have determined that decisions about who, what, and where to police can have a racially disparate impact. In her book “Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy,” Cathy O’Neil argues that unjust decisions are reinforced when they’re programmed into computer systems that make claims to objectivity. She discusses the example of PredPol, the controversial predictive policing software first used in Los Angeles in 2009. PredPol is careful to advertise the fact that it uses geographic, rather than demographic, inputs to predict where nuisance crimes like loitering will occur. But because such crimes are already over-policed in black neighborhoods, the data fed to the algorithm is already skewed. By then sending more police to the computer-generated “loitering hotspots,” the system reinforces what O’Neil calls a “pernicious feedback loop,” whereby it justifies the initial assumptions it was fed. Any crime-predicting algorithm, O’Neil emphasizes, has the power to bring into being the world it predicts.
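O’Neil’s feedback loop can be sketched in a few lines of simulation. This is a deliberately crude toy with made-up numbers, not PredPol’s actual model: two neighborhoods have identical true rates of nuisance crime, but one starts with more recorded incidents because it was historically over-policed. Patrols go where the data shows crime, only patrolled crime gets recorded, and the record then drives the next allocation.

```python
import random

random.seed(1)

TRUE_RATE = 0.3                      # same underlying crime rate in both places
recorded = {"A": 30, "B": 10}        # skewed historical data, not reality

for day in range(100):
    total = sum(recorded.values())
    # Allocate 10 patrols proportionally to where recorded crime is highest.
    patrols = {n: round(10 * recorded[n] / total) for n in recorded}
    for n in recorded:
        for _ in range(patrols[n]):
            if random.random() < TRUE_RATE:   # a patrol observes a crime...
                recorded[n] += 1              # ...and the data skews further

share_A = recorded["A"] / sum(recorded.values())
print(f"share of recorded crime in neighborhood A after 100 days: {share_A:.2f}")
```

Even though both neighborhoods offend at the same rate, neighborhood A’s share of recorded crime stays inflated: the algorithm keeps justifying the initial assumptions it was fed.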

Taser’s investments in artificial intelligence, she added, seem like a more “scientific-sounding version of broken windows policing.” The expectation of finding crime may influence what the officers end up finding.

When questioned about the potential for predictive policing discussed in other interviews and advertised at several moments throughout the company’s 34-page report, a Taser spokesperson was more circumspect and said the company would only be using machine learning to improve “workflow” at this time. The spokesperson stated, contrary to the 2017 Taser technology report’s detailed speculations, that “Axon is not building predictive policing and will not make predictions on behalf of our customers. In addition, all Axon machine learning work is under the oversight of our AI Ethics Board that we are finalizing.”

“The ‘RoboCop’ narrative,” said Marcus Womack, an executive vice president for software and services, “doesn’t align with our mission and is a poor example of how technology can impact policing. In particular, we are not using AI technology to make decisions for officers. We see the real impact being that this technology will make police officers more human.”

Automating Suspicion

Taser isn’t the only company selling agencies on its powers of speculation. A spokesperson for the Russian company Ntechlab told me that its high-performing facial recognition algorithm is able to detect “abnormal and suspicious behavior of people in certain areas.” Several major face recognition companies have already been teaching their systems to detect anomalous behaviors in crowds. Earlier this year, IBM, which has spent over $14 billion on predictive policing, advertised that its Deep Learning Engine could pinpoint the location and identity of suspects in real time. And for the last several years, researchers funded by the Defense Advanced Research Projects Agency have been developing “automated suspicion algorithms” to predict and analyze behavior from videos, text, and online images. But Taser, the market leader for video recording hardware, with relationships with an estimated 17,000 of the country’s 18,000 police departments, has an outsized influence on law enforcement tactics through its research investments.

In an interview in Taser’s future of policing report, a senior data architect at Microsoft envisions a future in which officers receive alerts when “an individual has a known criminal record, or propensity to violence. Even if [the suspect] has not yet adopted a threatening posture, it heightens the overall threshold of awareness.”

Taser CEO Rick Smith discussed a similar vision in a recent FastCompany profile, explaining that real-time artificial intelligence technology could have aided the officer who killed Philando Castile, the 32-year-old African-American man driving with his girlfriend and her 4-year-old daughter, by alerting him to the fact that Castile had a gun license and no violent criminal record.

Legal experts and surveillance watchdogs caution, however, that any company that automates recommendations about threat assessments and suspicions may transform policing tactics for the worse.

Hamid Khan, lead organizer for the Stop LAPD Spying Coalition, contends that feeding police information in real time about an individual’s prior records may only encourage more aggressive conduct with suspects. “We don’t have to go very far into deep learning,” he said, for evidence of this phenomenon. “We just have to look at the numbers that already exist for suspicious activity reporting, which doesn’t even require [advanced] analytics.” He noted that when the LAPD’s Suspicious Activity Reporting program, which relied on analog human tips, was audited by the city’s inspector general, it determined that black women residents were being disproportionately flagged.

The problem with any suspicious activity reporting, automated or not, is that suspicion always lies in the eye of the beholder. As The Intercept reported in February, the Transportation Security Administration’s own research showed that the agency’s program to detect suspicious behavior in travelers was unscientific, unreliable, and dependent on racial stereotypes.

Christoph Musik, an expert in computer vision from the University of Vienna, has written extensively about the human assumptions built into such systems. Hunches are always subjective, he points out, unlike the question of whether an object is a cat. “It is extremely difficult to formulate universal laws of behavior or suspicious behavior, especially if we focus on everyday behavior on a micro level,” Musik wrote in an email. “‘Smart’ or ‘intelligent’ systems claiming to recognize suspicious behavior are not as objective or neutral as they [seem].”

Predictions aside, the mere ability to trawl for evidence from body-worn camera footage also widens the range of “potentially suspicious persons” who can be contacted by law enforcement, according to Joh, the legal scholar of policing. “It’s a pretty radical expansion of the kind of discretion law enforcement has.” At such an indiscriminate scale, all kinds of insights and individuals get swept into an automated investigation process. “Once you’ve created a giant video database, it’s possible to search and re-search it, it’s not clear that there are any legal limits,” she said, since the Fourth Amendment focuses on the point of collection. “Generally speaking, there aren’t too many rules on what the police can do after they collect the information.”


Baltimore City Police Commissioner Kevin Davis, at podium, shows a sample of footage from a body camera worn by a police officer during a news conference at police headquarters on Dec. 21, 2015.

Photo: Kenneth K. Lam/Baltimore Sun/TNS/Getty Images

Private Predictions

Despite prominent civil rights groups highlighting the need for comprehensive policies, state and local legislation has lagged in regulating who can access body-worn camera footage, how long it is stored, and who gets to see it. But the biggest impediment to accountability for body-worn camera footage might be the manufacturers themselves.

Nondisclosure agreements allow private companies like Taser to defend their proprietary computing systems from public scrutiny, Joh explained. “Typically we think we have oversight into what police can do,” said Joh. “Now we have a third-party intermediary, they have a kind of privacy shield, they’re not subject to state public record laws, and they have departments sign contracts that they are going to keep this secret.”

As privately owned policing tactics become increasingly black-boxed, citizens will have no recourse to uncover how they ended up on their city’s list of suspicious persons or the logic guiding an algorithm’s decisions. In “RoboCop,” for instance, a secret rule prohibits the robot from arresting any of the owner-corporation’s board members.

Or take the case of the criminal justice consulting firm Northpointe. A ProPublica investigation found that Northpointe’s algorithm for calculating the risk of recidivism was twice as likely to incorrectly flag black defendants as being at higher risk of committing future crimes. But while reporters were able to analyze the questionnaires used by the company, which disputed ProPublica’s findings, they were unable to analyze Northpointe’s proprietary software.

Because the algorithms for these systems are often not disclosed, a judge would have no way of evaluating the likelihood of a false match when presented with investigative evidence about a suspect’s crime. Civil liberties experts find this especially disconcerting given the fact that machine learning systems make probabilistic, rather than binary, judgments. Amazon mistakenly predicting that you desire more toilet paper has vastly different implications for individual liberty than a private technology company’s cloud mistakenly telling an officer, with a veneer of certainty, to react lethally to a seemingly aggressive suspect.
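The stakes of that probabilistic judgment can be made concrete with a base-rate calculation. All numbers here are hypothetical, not drawn from any vendor’s published figures: even a matching system that sounds highly accurate produces mostly false alarms when true matches are rare, and a binary “match / no match” alert hides that from the officer.

```python
# Hypothetical base-rate sketch: how often is an alert actually correct?
database_size = 100_000              # faces scanned against a watchlist
true_matches = 10                    # genuinely wanted individuals among them
sensitivity = 0.99                   # P(alert | true match)
false_positive_rate = 0.01           # P(alert | no match)

hits = sensitivity * true_matches
false_alarms = false_positive_rate * (database_size - true_matches)
precision = hits / (hits + false_alarms)
print(f"P(an alert is correct) = {precision:.3f}")
```

With these assumed numbers, roughly ninety-nine out of every hundred alerts would point at the wrong person — the kind of probability a judge cannot weigh if the algorithm is a trade secret.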

“Body cameras are really just a story about private influence on public policing,” Joh said. “Whoever captures the audience first wins. And Taser is capturing the entire market. They get to shape the language that we use, they get to set the agenda, they get to say ‘this is possible’ and therefore the police can do it.”

Correction: May 1, 2017
A previous version of this article stated that the company formerly known as Taser, which recently rebranded itself as Axon, acquired a computer vision startup called Fossil Group, Inc. In fact, Taser acquired a computer vision team from Fossil Group, not the company itself. The article has also been updated to note that Axon’s servers are operated by Microsoft.
