According to Andrew Ferguson, “more than 60 American police departments use some form of ‘predictive policing’ to guide their day-to-day operations.” Last year, Ferguson published the book The Rise of Big Data Policing, which introduces the cutting-edge technology that police departments around the country are beginning to implement to make their work more proactive. By compiling crime data, personal data, gang data, environmental data, surveillance data, associational data, and locational data, police are supposedly able to determine where a crime will occur before it actually happens. Additionally, police are trying to use these data-driven technologies to solve the underlying socio-economic issues that breed criminal behavior.
These statistics and Ferguson’s book were particularly intriguing in light of our class discussion about artificial intelligence and its growing impact on different industries. Naively, I had not considered the effects of artificial intelligence on the police force and wanted to dive deeper into these advancements, especially their societal impacts.
What is Predictive Policing?
Predictive policing refers to the use of mathematical, predictive, and analytical techniques to identify potential criminal activity; however, different task forces use this technology in various ways. Forces in Los Angeles, for example, use computers to define crime “hot spots.” By collecting various types of data, police can determine which areas have a higher rate of crime and can be more proactive by monitoring those areas more frequently or with more officers on site.
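Ferguson’s book does not detail how these systems are actually built, but the basic idea of grid-based hot-spot mapping can be sketched with a toy model. Everything below is invented for illustration: the incident coordinates, the grid-cell size, and the flagging threshold bear no relation to any real department’s system.

```python
from collections import Counter

# Hypothetical incident records: (latitude, longitude) of reported crimes.
incidents = [
    (34.052, -118.244), (34.053, -118.245), (34.052, -118.243),
    (34.101, -118.300), (34.052, -118.244), (34.052, -118.245),
]

def hot_spots(incidents, cell_size=0.01, threshold=3):
    """Bucket incidents into grid cells; flag cells at or above the threshold."""
    cells = Counter(
        (int(lat // cell_size), int(lon // cell_size))
        for lat, lon in incidents
    )
    return [cell for cell, count in cells.items() if count >= threshold]

print(hot_spots(incidents))  # one densely hit cell is flagged
```

Even this toy version shows where bias can creep in: the output is only as representative as the incident data fed into it, so over-reported neighborhoods get flagged more.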
Additionally, the Chicago Police Department uses a form of big data policing known as person-based targeted policing to create a “heat list” of people predicted to be either victims or perpetrators of gun violence. The department uses a black-box algorithm to generate this list, drawing on background information about residents and their surroundings to determine whether or not someone is dangerous to society; as of last year, there were 1,400 people on the list. When someone, often a juvenile, is added to the heat list, a detective accompanied by a social worker visits the person’s house to inform them of their status and offer advice to help ensure the computer-generated prediction does not come true. Police consider violence a public health problem rather than purely a law enforcement problem, which is why they include social workers in their big data policing tactics.
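The Chicago model itself is a black box, so its real inputs and weights are unknown. Purely as an illustration of the general shape of such a system, a person-based heat list built from weighted background features might look like the sketch below; every feature name, weight, and cutoff is invented.

```python
# Toy illustration only: the real model is a black box, and these
# features and weights are invented for the sketch.
def risk_score(person):
    """Combine hypothetical background features into a single score."""
    weights = {
        "prior_arrests": 2.0,      # each prior arrest adds 2 points
        "shooting_victim": 5.0,    # having been shot adds 5 points
        "gang_affiliation": 3.0,   # recorded affiliation adds 3 points
    }
    return sum(weights[k] * person.get(k, 0) for k in weights)

residents = [
    {"name": "A", "prior_arrests": 3, "shooting_victim": 1},
    {"name": "B", "prior_arrests": 0},
    {"name": "C", "prior_arrests": 1, "gang_affiliation": 1},
]

# The "heat list": everyone whose score clears an arbitrary cutoff.
heat_list = [p["name"] for p in residents if risk_score(p) >= 5.0]
print(heat_list)  # → ['A', 'C']
```

The sketch makes the core objection concrete: the list is entirely determined by which features someone chose to record and how they chose to weight them, yet the output carries an air of objectivity.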
Although there are multiple uses of big data policing, studies show they are relatively ineffective. Only a few cities have seen a change in crime rates due to big data policing; most have, unfortunately, seen no overall change. Ferguson argues that politics is what is affected most by the adoption of big data policing. Police forces are frequently asked what they are doing not only to fight crime, but to stop it altogether. When asked about their actions, police can now give a progressive, tech-driven answer: “We are using a black box to seek out crime and stop it before it happens.” These results are shocking considering how useful artificial intelligence has become in other professions. AI has improved overall efficiency by computerizing menial, low-level tasks and giving humans clearer, more accurate data faster. Law enforcement is arguably one of the most important fields, responsible for ensuring the overall safety and well-being of society. Why is it, then, that police are not using artificial intelligence and technology to its full potential, as other industries have been doing for years?
Is this helping or hurting society?
Ferguson argues that predictive policing and person-based targeted policing are viewed as race-neutral and objective; police officers can point to the black box when they are accused of racism, but is that really fixing the problem? We already know that black boxes and the algorithms they produce are biased, so giving computers the power to decide who goes on a “heat list” or which neighborhoods are crime “hot spots” will eventually generate biased results. Thus, racism is not eliminated; the perpetrators are simply shifted from people to computers. Not only is racism still prevalent, but it could arguably increase due to the unintended bias introduced by computers.
The overuse of artificial intelligence could also distort policing overall. If a certain neighborhood or street is flagged to the Los Angeles Police Department as a “hot spot,” officers are more likely to visit the area, and thus more likely to use violence as they repeatedly return to the scene. This increased violence is the exact opposite of the outcome big data policing intends, yet it has proven likely to happen. Additionally, with the increased use of predictive policing and surveillance comes a greater invasion of citizens’ privacy.
Overall, the use of big data policing and artificial intelligence in the police force seems good on the surface, but in reality it could be doing more harm than good. Artificial intelligence can potentially help police obtain clear data on areas with higher crime and combat it effectively. However, AI should be used in strictly objective cases and should not decide whether or not someone will be flagged as a potential perpetrator of violence based on their location and who they surround themselves with. There is still a need for human morals alongside AI in any field, especially where people’s criminal records are on the line. It will be interesting to see whether a greater implementation of these programs across the country will make them more foolproof and fulfill their original intention of cutting down on crime.