
Why “Good Guys” Shouldn’t Use AI like the “Bad Guys”: The Failure of Predictive Policing
This essay argues that predictive policing keeps failing not because police departments lack data, but because they use the wrong kind of data in the wrong way. Applying algorithms built for low-stakes commercial prediction to high-stakes policing decisions produces dangerous false positives, reinforces biased patterns, and erodes public trust. Drawing on examples from Plainfield, NJ, and Chicago, the essay shows how predictive systems replicate past police behavior rather than forecast crime, creating self-reinforcing feedback loops. It contrasts these failures with diagnostic approaches in Oakland and Richmond, which use data to understand harm, guide outreach, and reduce violence without algorithmic surveillance. The core argument: policing needs better mirrors, not crystal balls.













