Predictive policing in Pittsburgh

This past June, a Carnegie Mellon University press release announced that a fire prevention algorithm developed by Michael Madaio, a Ph.D. student in the Human-Computer Interaction Institute, would be adopted by the City of Pittsburgh. For city officials, however, the algorithm’s potential did not stop there: they hoped similar predictive models could be applied to policing.
As for the fire prevention tool on its own, optimism about its potential is far from irrational: when the predictive software analyzed 22,000 Pittsburgh properties, it flagged 57 as high risk. Of those 57 properties, “50 experienced a fire incident of some type within a year of being tagged by the system,” the aforementioned press release notes.
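To put those numbers in context, here is a back-of-the-envelope sketch using only the figures reported in the press release (not the model itself): the hit rate among flagged properties works out to roughly 88 percent.

```python
# Illustrative arithmetic only, based on the figures reported in the
# CMU press release: 22,000 properties analyzed, 57 flagged as high
# risk, and 50 of those experiencing a fire incident within a year.
properties_analyzed = 22_000
flagged_high_risk = 57
flagged_with_fire = 50

hit_rate = flagged_with_fire / flagged_high_risk
print(f"Hit rate among flagged properties: {hit_rate:.1%}")  # ~87.7%
```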
In a Heinz College Metro21 Smart Cities Institute press release, Madaio spoke of the future of the tool, saying, “the [Pittsburgh] fire department is already using this...we are partnering with fire departments from several other cities, including Dallas, to help them adapt the model to their cities.”
This August, in reaction to this success, Mayor Bill Peduto told Pittsburgh’s WTAE Action News, “the question was asked of [Carnegie Mellon researchers], what type of technology can be utilized in looking at crime and how can that help us through predictive analytics to identify where the next crime will occur so we can stop it before it happens.” What Peduto was referring to is known as predictive policing: using algorithms to try to prevent crime by identifying which areas to prioritize as potential threats.
Predictive policing is often considered controversial because of its potential to reinforce preexisting biases in policing rather than remove them. Madaio, whose fire prevention tool prompted Mayor Peduto to meet with the Pittsburgh Chief of Police and Carnegie Mellon representatives to discuss implementing such measures, disclosed in an interview with The Tartan that he has “serious misgivings about the ethical risks of using predictive policing algorithms.”
Madaio continued, “One major risk is that these algorithms are often trained on biased data from chronically over-policed neighborhoods and populations, and may have an inequitable impact, especially on low-income, minority populations. Given all the research showing the potential for predictive policing to amplify existing inequities, I would be concerned about the City of Pittsburgh adopting predictive policing technologies,” citing a Royal Statistical Society study titled “To Predict and Serve?”
Despite this sentiment from the leading researcher of the fire prevention tool, the City of Pittsburgh and the Metro21 Institute have followed through with their efforts on predictive policing: this January, it was announced in a Carnegie Mellon University press release that “a neural network model [that] predicts locations that will likely have crime flare-ups in the following week” had been “built, rolled out to all Pittsburgh Police, and [was] in its evaluation period using experimental controls.”
Addressing the issue of over-reliance on algorithms in public policy, Madaio told The Tartan that he and his research team spoke with Mayor Peduto during the roll-out of the fire prevention tool to share their insights on the ethics of machine learning, and that they “advocated for conducting transparent audits of the equitable impact of machine learning systems.”
The fire tool itself, which has been made open source and available both for application across the country and for “public transparency and accountability,” has already made a significant impact, Madaio says: “In San Francisco, the local Code for America brigade has adapted our model using their own data, and our team is in discussions with data scientists from several fire departments on how to best adapt the fire risk model for other cities.”
The goal of increased public safety could certainly be advanced by the fire prevention tool, but the public benefits of predictive policing are, at least according to many data scientists, machine learning researchers, and Madaio himself, far less clear.
In The Tartan’s conversation with Madaio, he concluded, “Ultimately, I would hope the fire risk analytics project would contribute not only to public safety in Pittsburgh, but also to the larger conversation around fairness, accountability, and transparency in machine learning, especially as part of civic life.”
Transparency, accountability, and fairness may be characteristic of Madaio’s open source fire prevention tool, but of the proprietary predictive policing algorithms currently deployed in Pittsburgh, the same cannot be said.