Law Enforcement and the Ethical Uses of AI: Can Police Prevent Crime and Protect Civil Rights?

Artificial intelligence (A.I.) has been in use in law enforcement for the past few years. Local police departments have been using machine learning to identify patterns in social media data, looking for trigger phrases that may indicate criminal intent or provide clues for solving a crime.

These activities are raising concerns among researchers, journalists, and civil rights activists. The questions are many. Was the data gathered legally? Will it be used ethically? Are police protecting citizens’ right to privacy?

From a technological and social science perspective, still more questions arise. First and foremost, can the data gathered by these tools even support a police department’s mission to prevent crime and keep communities safe?

The answer depends on how the search terms and the algorithms have been designed. The good: data that is gathered properly and ethically, with a limited scope that protects individuals’ Constitutional rights. The bad: wide-ranging searches based on poorly understood data analysis tools, returning results that are just as poorly understood. And the ugly: search strings that reflect an untrained police officer’s biases, collecting data on individuals and ramping up monitoring of legal activities until it amounts to mass surveillance without evidence of a crime.

Using A.I. Responsibly

So how can police departments use A.I. search tools ethically and effectively, without violating individuals’ Constitutional rights? As a research psychologist who uses computer algorithms and cognitive models to study human behavior, I have some suggestions.

Review Boards
Academic researchers must submit their research proposals to an institutional review board (IRB), which makes sure that the research plan is well thought out and that the study will be conducted ethically. Similarly, police departments that use A.I. data mining tools should be required to justify their algorithms and search strategies, much as they must go to a judge for a warrant.

Training
At the university, researchers must complete ethics training before they can conduct a study. Similarly, police officers should be rigorously trained in algorithms, machine learning, and ethics before using these tools. Algorithms do exactly what you tell them to do: train one on certain search terms, and those are the results it will return, and the better it will get at finding them. If a search term is biased, the results, and any analysis based on those results, will be biased too. Police officers must understand what the data actually means and how it could be biased, because inexperience can lead to safety and civil rights violations.
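To make that concrete, here is a minimal sketch of a keyword-flagging tool. Everything in it, the search terms and the posts alike, is invented for illustration, and real systems are far more elaborate, but the core behavior is the same: the tool returns exactly what its term list encodes, so one carelessly chosen term flags an innocuous hobby post right alongside a genuine threat.

```python
# A toy keyword flagger: it does exactly what the search terms tell it to do.

# Hypothetical search terms an untrained officer might supply. "blade" is
# meant to catch weapon talk, but it also matches everyday skating slang.
SEARCH_TERMS = {"rob the store", "blade"}

# Hypothetical social media posts.
POSTS = [
    "gonna rob the store tonight, who's in",           # plausible criminal intent
    "new blade for my skates, meet at the park at 6",  # innocuous hobby post
    "selling my old couch, DM me",                     # irrelevant
]

def flag_posts(posts, terms):
    """Return every post containing any search term: no context, no judgment."""
    return [p for p in posts if any(t in p.lower() for t in terms)]

for post in flag_posts(POSTS, SEARCH_TERMS):
    print("FLAGGED:", post)
```

The skating post gets flagged right alongside the threat; a biased or sloppy term list produces biased or sloppy results, and no amount of downstream analysis can undo that.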

Audits
Even when academic research studies have been approved, we still undergo regular audits to make sure that we’re on track and are continuing to gather data ethically. Regular audits of the tool, the data, and the users will help ensure that search tools are effective at preventing crime and keeping the community safe. Audits can also help make sure that the scope of the search hasn’t expanded beyond what was approved or breached individuals’ civil rights.
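One piece of such an audit can even be automated. Here is a minimal sketch, with hypothetical term lists and query logs, of a scope check that compares the search terms officers actually ran against the terms that were approved, and flags any creep for human review.

```python
# Hypothetical audit check: has the search scope expanded past what was approved?
APPROVED_TERMS = {"rob the store", "sell stolen goods"}

# Hypothetical query log pulled from the tool: (user, search term) pairs.
QUERY_LOG = [
    ("officer_a", "rob the store"),
    ("officer_b", "protest downtown"),  # outside the approved scope
]

for user, term in QUERY_LOG:
    if term not in APPROVED_TERMS:
        print(f"AUDIT FLAG: {user} searched unapproved term {term!r}")
```

A check like this does not replace human auditors, but it gives them a concrete record of who searched what and where the scope started to drift.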

Good Data In, Good Data Out

One way to ensure the effective and ethical use of these tools is to make sure that the algorithm has good data to work with. Let’s say we want to train the algorithm on case studies of past crimes. We could look back at the social media presence of perpetrators of petty or violent crimes and analyze that presence and the language they used. There are all sorts of linguistic categories that can be used to parse language, and if we feed these categories to the computer along with other search terms, the algorithm knows what to look for. Algorithms excel at this kind of data capture; they are designed to pick up patterns for analysis.
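Here is a hedged sketch of that pipeline, assuming scikit-learn is available. Every category, word list, post, and label below is invented for illustration: posts are reduced to counts over a few linguistic categories, and a standard classifier learns which patterns co-occurred with past cases.

```python
from sklearn.linear_model import LogisticRegression

# Hypothetical linguistic categories, each mapped to a few example words.
CATEGORIES = {
    "anger":    {"hate", "destroy", "furious"},
    "planning": {"tonight", "meet", "bring"},
    "money":    {"cash", "owe", "pay"},
}

def featurize(post):
    """Count how many of a post's words fall into each linguistic category."""
    words = set(post.lower().split())
    return [len(words & terms) for terms in CATEGORIES.values()]

# Toy training set: posts tied to past cases (1) versus unrelated posts (0).
posts = [
    "meet tonight and bring the cash",
    "i hate him and i will destroy everything",
    "lovely weather for a walk today",
    "pay you back the cash i owe on friday",
]
labels = [1, 1, 0, 0]

model = LogisticRegression().fit([featurize(p) for p in posts], labels)

# Score a new post whose category pattern resembles the past cases.
print(model.predict([featurize("bring the cash tonight")]))
```

The classifier can only find the patterns that the categories and the training cases encode, which is exactly why the quality of that input data, and the judgment of the people who assemble it, matters so much.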

Now That the Genie’s Out of the Bottle

Modern law enforcement’s use of A.I. to mine social media data will likely expand. While I support the work that police do in keeping communities safe, I want to see local law enforcement take these tools seriously rather than get carried away by the promise of catching the bad guys. These tools are effective only if they are used properly. Training and oversight are essential.

Algorithms and machine learning are powerful technologies that can provide insights into patterns of human behavior and help us understand how people think. Used poorly, they will not only reinforce existing biases but also fail to keep communities safe. Used well, these tools can be a tremendous resource for law enforcement and other government agencies. Like every powerful tool, they need checks and balances to make sure we’re using them properly. Police departments that invest in proper training and put these checks and balances in place will be more effective at meeting their goal of serving their communities.


Catherine Neubauer, Ph.D., is a research psychologist at the Army Research Lab and a lecturer in the online Master of Science in Applied Psychology program at the University of Southern California.