Big-Data Policing: Is it Helping or Hurting Society?

According to Andrew Ferguson, “more than 60 American police departments use some form of ‘predictive policing’ to guide their day-to-day operations.” Last year, Ferguson published The Rise of Big Data Policing, a book introducing the cutting-edge technology that police departments around the country are beginning to implement to make their work more proactive. By compiling crime data, personal data, gang data, environmental data, surveillance data, associational data, and locational data, police are supposedly able to determine where a crime will occur before it actually happens. Police are also trying to use these data-driven technologies to address the underlying socio-economic issues that breed criminal behavior.

These statistics and Ferguson’s book were particularly intriguing following our class discussion about artificial intelligence and its growing impact on different industries. Naively, I had not considered the effects of artificial intelligence on the police force and wanted to dive deeper into these advancements, especially their societal impacts.

What is Predictive Policing?

Predictive policing refers to the use of mathematical, predictive, and analytical techniques to identify potential criminal activity; however, different task forces use this technology in different ways. Forces in Los Angeles, for example, use computers to define crime “hot spots.” By collecting various types of data, police can determine which areas have a higher rate of crime and can be more proactive by monitoring those areas more frequently or with a greater number of officers on site.
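At its simplest, the hot-spot idea amounts to counting past incidents by location and flagging the locations with the most activity. The sketch below is a toy illustration of that frequency idea only — real systems use far more sophisticated models, and all of the incident data here is invented.

```python
from collections import Counter

# Made-up incident locations, each a (row, col) grid cell of a city map.
incidents = [
    (3, 4), (3, 4), (3, 4), (1, 2), (3, 4),
    (1, 2), (7, 7), (3, 4), (1, 2), (9, 0),
]

def hot_spots(incident_cells, top_n=2):
    """Return the top_n grid cells ranked by past incident count."""
    counts = Counter(incident_cells)
    return [cell for cell, _ in counts.most_common(top_n)]

print(hot_spots(incidents))  # [(3, 4), (1, 2)] — cell (3, 4) leads with 5 incidents
```

Even this toy version shows the core limitation the post goes on to discuss: the ranking reflects only where incidents were *recorded*, not where crime actually occurs.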

 

Additionally, the Chicago Police Department uses a form of big data policing known as person-based targeting to create a “heat list” of people likely to be either victims or perpetrators of gun violence. The department uses a black-box algorithm to generate this heat list, drawing on background information about residents and their surroundings to determine whether someone poses a danger to society; as of last year, there were 1,400 people on the list. When someone, often a juvenile, is added to the heat list, a detective accompanied by a social worker visits the person’s house to inform them of their status and offer advice to help ensure the computer-generated prediction does not come true. Police consider violence a public health problem rather than purely a law enforcement problem, which is why they include social workers in their big data policing tactics.
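The actual Chicago model is a black box, but the general shape of a person-based risk score is a weighted sum of features compared against a threshold. The features, weights, and threshold below are entirely invented to show that shape, not the real system’s contents.

```python
# Hypothetical features and weights — NOT the real heat-list model.
WEIGHTS = {
    "prior_arrests": 2.0,
    "shooting_victim": 3.5,
    "gang_affiliated": 2.5,
}
THRESHOLD = 5.0

def risk_score(person):
    """Weighted sum over the (invented) features present for this person."""
    return sum(WEIGHTS[f] * person.get(f, 0) for f in WEIGHTS)

def on_heat_list(person):
    return risk_score(person) >= THRESHOLD

print(on_heat_list({"prior_arrests": 1, "shooting_victim": 1}))  # True  (score 5.5)
print(on_heat_list({"prior_arrests": 1}))                        # False (score 2.0)
```

Note that everything interesting — which features count, how they are weighted, where the threshold sits — is hidden inside the box, which is exactly the transparency problem raised later in the post.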

Although there are multiple uses of big data policing, studies show they are relatively ineffective. Only a few cities have seen a change in crime rates due to big data policing; most have, unfortunately, seen no overall change. Ferguson argues that politics is what has been most affected by the adoption of big data policing. Police forces are frequently asked what they are doing not only to fight crime, but to stop it altogether. When asked, police can now give a progressive, tech-driven answer: “We are using a black box to seek out crime and stop it before it happens.” These results are surprising considering how useful artificial intelligence has become in so many professions. AI has improved overall efficiency by computerizing menial, low-level tasks and delivering clearer, more accurate data to humans faster. Policing and law enforcement are arguably among the most important fields, responsible for ensuring the overall safety and well-being of society. Why, then, are they not using artificial intelligence and technology to their full potential, as other industries have been doing for years?

Is this helping or hurting society?

Ferguson argues that predictive policing and person-based targeted policing are viewed as race-neutral and objective; police officers can point to the black box when they are accused of racism, but is that really fixing the problem? We already know that black boxes and the algorithms behind them can be biased, so giving computers the power to decide who goes on a “heat list” or which neighborhoods are crime “hot spots” will eventually generate biased results. Thus, racism is not eliminated. The perpetrators are simply shifted from people to computers. Not only does racism remain prevalent, but it could arguably increase due to the unintended bias introduced by computers.

The overuse of artificial intelligence could also distort policing overall. If a certain neighborhood or street is flagged to the Los Angeles Police Department as a “hot spot,” officers are more likely to visit the area, and thus more likely to use violence as they repeatedly return to the scene. This increased violence is the exact opposite of what big data policing is intended to achieve, yet it has proven likely to happen. Additionally, with increased use of predictive policing and surveillance comes greater invasion of citizens’ privacy.
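The distortion described above is a feedback loop, and a toy simulation can make it concrete: if patrols go wherever the most incidents are *recorded*, and patrolled areas record far more of what happens there, a flagged area accumulates records faster even when true crime rates are identical. The rates and detection probabilities below are invented purely for illustration.

```python
import random

random.seed(0)
true_rate = {"A": 0.10, "B": 0.10}   # both areas have the SAME true crime rate
recorded = {"A": 1, "B": 0}          # area A starts with one extra record

for day in range(1000):
    patrolled = max(recorded, key=recorded.get)  # patrol the current "hot spot"
    for area, rate in true_rate.items():
        if random.random() < rate:               # a crime actually occurs
            # a patrolled area is far more likely to have the crime recorded
            detect = 0.9 if area == patrolled else 0.1
            if random.random() < detect:
                recorded[area] += 1

print(recorded)  # area A ends with many more records despite equal true rates
```

One arbitrary early difference is enough: the model keeps sending patrols to A, A keeps generating records, and the data appears to confirm the original prediction.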

Overall, the use of big data policing and artificial intelligence in the police force seems good on the surface, but in reality it could be doing more harm than good. Artificial intelligence can potentially help police obtain clear data on areas with higher crime and combat it effectively. However, AI should be used only in strictly objective cases and should not decide whether someone is flagged as a potential perpetrator of violence based on their location and the people they surround themselves with. There is still a need for human morals alongside AI in any field, especially where people’s criminal records are on the line. It will be interesting to watch these programs spread across the country and to see whether they become more reliable and fulfill their original intention of cutting down on crime.

7 thoughts on “Big-Data Policing: Is it Helping or Hurting Society?”

  1. Hey Elizabeth,
    Similar to you, I never considered the integration of Big Data into law enforcement and the legal system. On the surface, it would seem foolish not to use historical data to augment the efficiency of this sector. When every other sector, from government to healthcare, is implementing data to become more efficient, not using data seems like a disadvantage. However, how do you balance this when data comes at the price of prejudice and safety? There seems to be a lot of potential to reconfigure this industry and find a way to use data to keep people safe, instead of labeling criminal “hot spots” before wrongs are committed.


  2. Hi, great post about a really interesting predicament. It really reminds me of the movie Minority Report, which takes a negative view of predictive crime-stopping. I’m also really bothered by the idea of stopping crime before it happens. Even setting aside the whole philosophical discussion behind it and the idea of free will, I can’t see this turning out well. First, the programs will be biased because of the systemic biases that are already in place, especially in crime-ridden areas where certain minority groups are statistically more at risk. And second, police who go to an area where they think there’s a crime will be looking for criminals, likely creating a confirmation bias when they see anything that remotely resembles a crime, because they believe there is one. This kind of technology definitely needs policing of its own, and more research into its social implications and whether it will perpetuate existing biases.


  3. Really interesting post! I am really bothered by the idea of letting a black box determine who is likely to commit crimes. It reminds me of the TED talk from a few weeks ago that Group B watched, which explained how AI was being used to determine the risk of criminals reoffending, and the algorithm was showing clear bias against African-Americans. The scariest part is that the creators of the system can’t even be sure why. However, I do think that using this technology to find people who could become criminals and offering them help, such as access to a social worker, is a really positive outcome, as long as the tech is only being used to help, not hunt, people who could potentially commit crimes.


  4. Nice post. I agree that it’s not entirely clear what the value here is. Big data captures statistical trends, but crime is an individual act. I think it’s unlikely that big data will solve crimes before they happen, but it could help better allocate resources at critical times.


  5. Great post, Elizabeth! I am glad that your concluding paragraph touched on whether big data is helpful or not. Before this class, I never thought about the biases of using AI and big data, and your post reiterates the point that AI is only helpful when it is unbiased (which may be hard to achieve!). The notion of computerized “hot spots” may be dangerous, since AI often looks at past patterns to predict future violence and crime. By looking only at data points and characteristics, a machine cannot see a person’s life story, which is why law enforcement should stay skeptical of the results that technology provides them.


  6. Interesting post. I really do think that the whole process requires a lot of scrutiny from the data gathering process. I don’t think the technology is inherently bad, it’s really what we feed the machines to learn and educate. I still think that our current data systems of crime reports and concentrations of where we believe criminals to be are not racially biased–creating algorithms and using data analytics from this existing data could potentially create a bigger problem for us. I personally believe that our current policing system is broken, slapping big data on top of it isn’t going to simply resolve the issue.


  7. Interesting post. I really do think that the whole process requires a lot of scrutiny from the data gathering process. I don’t think the technology is inherently bad, it’s really what we feed the machines to learn and educate. I still think that our current data systems of crime reports and concentrations of where we believe criminals to be can be racially biased–creating algorithms and using data analytics from this existing data could potentially create a bigger problem for us. I personally believe that our current policing system is broken, slapping big data on top of it isn’t going to simply resolve the issue.

