Predictive policing: the danger of knowing where there will be more crime
Knowing where a crime will be committed before it happens is the dream of any police department. Data scientists and AI experts want to make it a reality. Many security forces, especially in the US, a country where gun homicides are commonplace, have for years worked with automated systems that process large databases to detect patterns and predict crime hotspots. A group of researchers from the University of Chicago led by Victor Rotaru has developed a model capable of predicting, a week in advance, where there will be more crime. The tool is correct in 90% of cases, which makes it stand out among the various examples of so-called predictive policing, already present in cities such as Los Angeles and New York and operated by companies such as PredPol, Azavea and KeyStats.
Their system, described in an article in the journal Nature Human Behaviour, is designed for urban environments. The model was trained with historical data on violent and property crimes committed in the city of Chicago, Illinois, between 2014 and 2016. After processing this information, it attempted to anticipate the areas with the highest levels of crime in the weeks following the test period. The tool predicted the probability that certain crimes, such as homicides, assaults and robberies, would occur in sections of the city roughly 300 meters on a side. It did so with 90% reliability. The model was subsequently tested in seven other large US cities (Atlanta, Austin, Detroit, Los Angeles, Philadelphia, San Francisco and Portland) with similar results.
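To make the general idea more concrete, the sketch below shows one simplified way such a system could be framed: bin recorded incidents into a grid of city tiles and weekly time steps, then train a classifier to flag which tiles are likely to see a crime the following week. This is a minimal illustration, not the Chicago team's actual method; the synthetic data, tile grid, features and choice of classifier are all assumptions made here for the example.

```python
# Illustrative sketch only: bins synthetic "incident counts" into a grid of city
# tiles and weekly time steps, then trains a simple classifier to flag tiles
# likely to record a crime the following week. NOT the published model; the
# data, tile size, features and classifier are assumptions for illustration.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

N_TILES_X, N_TILES_Y, N_WEEKS = 20, 20, 150   # hypothetical city grid and history
LOOKBACK = 4                                  # weeks of history used as features

# Synthetic weekly incident counts per tile (stand-in for recorded crimes).
base_rate = rng.gamma(shape=0.5, scale=1.0, size=(N_TILES_X, N_TILES_Y))
counts = rng.poisson(base_rate[None, :, :], size=(N_WEEKS, N_TILES_X, N_TILES_Y))

# Features: the last LOOKBACK weekly counts in a tile.
# Label: did that tile record at least one incident in the following week?
X, y = [], []
for t in range(LOOKBACK, N_WEEKS):
    for i in range(N_TILES_X):
        for j in range(N_TILES_Y):
            X.append(counts[t - LOOKBACK:t, i, j])
            y.append(int(counts[t, i, j] > 0))
X, y = np.array(X), np.array(y)

# Train on the earlier weeks, evaluate on the later ones.
split = int(0.8 * len(X))
clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X[:split], y[:split])
print("held-out accuracy:", clf.score(X[split:], y[split:]))
```

Even a toy setup like this inherits whatever biases are baked into the incident records it is fed, which is precisely the concern raised below.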
But this type of algorithmic system, no matter how sophisticated, fails to solve a central problem: how to avoid penalizing the most disadvantaged neighborhoods, which in the US are mostly populated by Black and Latino residents. This perverse effect is widely documented in the scientific literature. A study presented last year concluded, in fact, that it is impossible for predictive policing systems to counteract their own biases. That is one of the reasons why the European Parliament has called for these tools to be banned in the EU.
The team of researchers behind the study published in Nature Human Behaviour is aware of this. “In an attempt to prevent the use of their tool from being detrimental to some groups, the authors turn the concept of predictive policing on its head and prescribe that their models be used to monitor police work itself,” says Andrew V. Papachristos, a researcher at the Department of Sociology at Northwestern University (Evanston).
Towards ‘precrime’ departments
The academic, who reviewed the article by Rotaru and his colleagues, believes that the nuance is important and can help develop early intervention systems and other efforts to identify police abuse, a particularly sensitive issue in the country since the death of George Floyd in 2020. He also believes that it can help “send social workers, response teams, and victim assistance teams to those roughly 300-meter squares where it has been detected that there will be more disputes.”
Suppose a predictive tool such as the one devised by Rotaru’s team concludes that there is a very good chance that a crime will be committed on a given block in Chicago three days from now. What should be done with that information? Should the authorities be asked to respond? If so, what kind of actions should be carried out, and by whom? All these questions, Papachristos points out, are as important as, or more important than, the prediction itself.
“One of our central concerns in putting together this study was its potential to be misused. More important than making good predictions is how these are going to be used. Sending police to an area is not the optimal outcome in all cases, and it can happen that good predictions (and intentions) lead to over-surveillance or police abuse,” the authors write in the article. “Our results can be misinterpreted as saying there are too many police in a low-crime area, which will typically be in predominantly white communities, and too few in higher-crime ones, where there tends to be more cultural and ethnic diversity,” they add.
“We conceived of our model basically as a police policy optimization tool,” says Ishanu Chattopadhyay, Rotaru’s colleague at the University of Chicago and co-author of the article. “That’s why we insist so much that you have to be very careful in how you apply the knowledge it brings,” he adds. In his opinion, the algorithm can be used to analyze current police practices and see, for example, whether too many resources are being devoted to neighborhoods where, after all, there is not as much crime as in others.
Overcriminalization of Black and Latino communities
The fact that Rotaru and colleagues’ tool has been tested in Chicago is significant. One of the first predictive policing systems on record was launched in that very city in 2013. The tool tried to identify potential criminals by analyzing arrest data, covering both the alleged perpetrators of crimes and their victims, and cross-referencing it with their networks of personal relationships. It was a resounding failure: it did not help reduce crime, and the Black population turned out to be overrepresented on its lists. According to an independent study published a few years later, 56% of Black men in the city between the ages of 20 and 29 appeared on them.
In large American cities, certain neighborhoods are closely associated with a particular race. “Disproportionate policing in communities of color can contribute to biases in incident records, which can cause them to propagate to inferred models,” acknowledge the authors of the article, who see no way to statistically control for these deviations. “Whoever uses this type of tool must be aware of it,” they stress.
A researcher at another American university consulted by EL PAÍS questions the results of the study by Rotaru and his colleagues. In her opinion, the high effectiveness of the model (90%) drops notably when the crimes analyzed are rare. Another caveat she raises is that the predictive algorithm cannot detect the areas with the most crime, only those with the most reported crime.
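The first objection reflects a general statistical point: for rare events, a headline accuracy figure can hide the fact that most of the flagged locations never see the predicted crime. The toy calculation below uses invented numbers, not figures from the study, purely to illustrate that base-rate effect.

```python
# Toy numbers (invented for illustration, not taken from the study): with rare
# events, a model can report ~90% "accuracy" while most of the tiles it flags
# never actually see the predicted crime.
tiles = 10_000              # hypothetical number of city tiles in one week
crime_tiles = 100           # tiles that actually see the rare crime (1% base rate)
sensitivity = 0.90          # fraction of crime tiles the model flags
false_positive_rate = 0.10  # fraction of quiet tiles wrongly flagged

true_pos = sensitivity * crime_tiles                      # 90
false_pos = false_positive_rate * (tiles - crime_tiles)   # 990
true_neg = (tiles - crime_tiles) - false_pos              # 8,910

accuracy = (true_pos + true_neg) / tiles                  # 0.90
precision = true_pos / (true_pos + false_pos)             # ~0.08
print(f"accuracy: {accuracy:.0%}, precision: {precision:.0%}")
# -> accuracy 90%, yet fewer than 1 in 10 flagged tiles actually had the crime
```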
That difference matters, because Black communities in the US are less likely to report crimes to the police. Rotaru and his team say they try to correct this bias by taking into account only crimes that, by their nature (assaults, murders, theft of property), are usually reported.
“Our tool is based on recorded crimes. We can’t model unreported crimes,” explains Chattopadhyay, of Rotaru’s team.
The unequal relationship different racial groups have with the security forces is one of the input biases faced by algorithms applied to police work, but not the only one. Since the databases these systems work with usually consist of arrest records, the areas with the most arrests are the ones the machine associates with the greatest need for patrols, which in turn increases the number of arrests there. Hence the protests of many civil society associations against the widespread use of these tools, and the precautions Rotaru and his colleagues take with their own work.
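The feedback loop described above can be illustrated with a deliberately simple simulation. The numbers and mechanism below are assumptions made for this sketch, not results from the study: two neighborhoods with identical underlying crime start with slightly uneven arrest records, patrols follow past arrests, and arrests partly reflect patrol presence rather than crime itself.

```python
# Purely illustrative simulation (not from the study): when patrols are
# allocated in proportion to past arrests, and arrests partly reflect patrol
# presence rather than underlying crime, an initial disparity between two
# otherwise identical neighborhoods never corrects itself and can widen.
import numpy as np

rng = np.random.default_rng(1)

true_crime = np.array([1.0, 1.0])   # two neighborhoods with identical crime rates
arrests = np.array([12.0, 10.0])    # slightly uneven historical arrest counts
DETECTION_PER_PATROL = 0.3          # assumed arrests generated per patrol unit

for week in range(20):
    patrols = 10 * arrests / arrests.sum()           # patrols follow past arrests
    expected = true_crime * DETECTION_PER_PATROL * patrols
    arrests += rng.poisson(expected)                  # new arrests feed the data

print("arrest-record share after 20 weeks:", np.round(arrests / arrests.sum(), 2))
```

Because each week's data is shaped by where the police were sent, the model's own output contaminates its future input, which is why audits of the data pipeline matter as much as the predictions themselves.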