Fighting Pre-Crime?: Law Enforcement, Artificial Intelligence, and Predictive Policing Technology
By: Aaron Spitler*
For law enforcement agencies (LEAs), the allure of artificial intelligence (AI) is hard to resist. Vendors of AI-powered products have pitched them to police departments by emphasizing how this software can help stop crime in its tracks. The most recent version of Gotham,[1] a data analysis platform created by tech giant Palantir, has been sold to LEAs with assurances that it can pinpoint potential crime locations. Other companies[2] have found success marketing solutions that identify individuals who may be suspects in criminal investigations, leveraging AI to synthesize information on persons of interest. Regardless of the application, companies in this space have made clear that their AI-enhanced technologies could thwart criminals attempting to evade the law. As a result, LEAs have paid close attention to what AI can do for them.
This approach, known as “predictive policing,” uses AI to analyze data sources such as arrest records and social media posts in order to anticipate potential crimes (and criminals). It is not without its critics. Many charge that it blatantly flouts an individual’s right to privacy, placing people who have not perpetrated any crimes under unwarranted and disruptive surveillance. The issues with AI-enabled predictive policing are not limited to how these solutions are deployed. Some problems can be traced to the biased data supplied to these systems, whose outputs can be used to justify over-policing in communities that have been treated unfairly in the past. Measures should be adopted to ensure transparency and accountability in how LEAs employ AI for policing. Otherwise, unregulated use of these tools may erode civil liberties in the name of public safety.
Undermining Privacy Rights
In principle, predictive policing allows LEAs to monitor would-be criminals before they can act. In reality, however, evidence shows that these tactics have been used by police to harass and intimidate individuals who have done nothing wrong. A 2021 Brookings report[3] highlighted this trend, citing a case in Florida where a minor was hounded by law enforcement after an algorithm concluded that he was likely to break the law. Analyzing data points that included school records, the “intelligence-led”[4] program determined that the young man posed a potential threat, even though he had not committed a serious offense. Armed with this information, officers began visiting his parents’ home without warning to question him, occasionally appearing multiple times a day. After enduring this intimidation campaign, the minor and his family decided to move out of their community. This episode underlines not only the faultiness of predictive tools but also how their misuse infringes upon the freedom from interference that civilians expect.
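To make the mechanics of this kind of scoring concrete, the Python sketch below shows how a simple weighted “risk score” could be computed from historical records. It is a minimal illustration only: every field name and weight is an assumption made for the example, not a description of the Pasco program or any vendor’s actual product.

```python
# A minimal, hypothetical sketch of a records-based "risk score."
# Every field name and weight here is an illustrative assumption,
# not drawn from any real program or vendor product.

from dataclasses import dataclass


@dataclass
class PersonRecord:
    prior_arrests: int             # counts pulled from arrest databases
    school_incident_reports: int   # e.g., disciplinary records
    flagged_social_posts: int      # posts matched against keyword lists


def risk_score(record: PersonRecord) -> float:
    """Weighted sum of historical data points; a higher score means "riskier."

    Every input reflects past contact with institutions, not future
    behavior, which is the core critique of these systems.
    """
    return (3.0 * record.prior_arrests
            + 1.5 * record.school_incident_reports
            + 0.5 * record.flagged_social_posts)


# A person with no arrests can still rank as "high risk" purely on the
# basis of school records and online activity.
print(risk_score(PersonRecord(prior_arrests=0,
                              school_incident_reports=4,
                              flagged_social_posts=10)))  # 11.0
```

The simplicity is the point: once such a score exists, it can be treated as actionable intelligence even though no element of it captures an actual offense.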
The sensitive nature of the information amassed by AI-powered predictive policing systems also deserves attention. Products used by LEAs can synthesize disparate data sources to provide a fuller picture of a person’s habits and connections. Researchers with the Brennan Center for Justice[5] outlined how these solutions can be abused by police, as officers are granted unprecedented access to a person’s private life in the course of their formal investigations. Drawing on data gleaned from sources such as vehicle registration forms and social media posts, LEAs can use the information compiled by these technologies as they see fit, often without any mechanism for oversight. For individuals who have not violated the law, yet find themselves under surveillance, the glimpse into their day-to-day routines offered by these products can be chilling. With their privacy compromised, civilians affected by these systems may be forced to think twice about what they do and even what they say.
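A toy example helps illustrate what that kind of synthesis looks like in practice. The Python sketch below merges a handful of invented records, keyed on a name, into a single profile; the sources, fields, and values are all hypothetical and stand in for the far larger data stores these products actually draw on.

```python
# A minimal, hypothetical illustration of record aggregation: disparate
# data sources are merged into one profile with a single query. All
# sources, fields, and values are invented for this example.

from collections import defaultdict

vehicle_records = [{"name": "J. Doe", "plate": "ABC123", "address": "14 Elm St"}]
social_media = [{"name": "J. Doe", "handle": "@jdoe", "recent_topics": ["protest march"]}]
license_data = [{"name": "J. Doe", "dob": "1990-01-01"}]


def build_profile(name: str) -> dict:
    """Collect every record that mentions the given name into one dossier."""
    profile = defaultdict(list)
    sources = {"vehicle": vehicle_records,
               "social": social_media,
               "license": license_data}
    for label, records in sources.items():
        for record in records:
            if record.get("name") == name:
                profile[label].append(record)
    return dict(profile)


# One lookup yields a home address, vehicle plate, date of birth, and online
# activity, with no check on whether the person is suspected of anything.
print(build_profile("J. Doe"))
```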
Reinforcing Entrenched Biases
Opponents of AI-powered predictive policing tools are not only concerned about how they are deployed. They also take issue with how they are developed. Many have emphasized that the data fed to these algorithmic systems is often rife with biases that adversely affect minority communities. In a 2020 piece, the MIT Technology Review[6] unpacked how the data sets these tools rely on reflect decades of discriminatory over-policing of non-white communities. As a result, the systems’ predictions simply replicate long-held prejudices about “bad neighborhoods.” Police then use this information to justify patrolling historically marginalized communities, navigating these spaces on the assumption that residents are more likely to be criminals. This dynamic underscores how misconceptions from the past shape the administration of justice in the present when police turn to these technologies. Whether the LEAs that deploy these systems have acted to “correct the record,” perhaps by reexamining the data these products ingest, remains largely unknown.
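This feedback loop can be demonstrated with a short simulation. In the hedged Python sketch below, two neighborhoods have the same underlying offense rate, but one starts out more heavily patrolled; a “predictive” rule that allocates next year’s patrols in proportion to recorded arrests simply reproduces the initial disparity year after year. All of the numbers are invented for the illustration.

```python
# A minimal, hypothetical simulation of the feedback loop critics describe:
# arrest records mirror where police already patrol, so a model trained on
# those records keeps sending officers back to the same neighborhoods.
# All rates and figures are invented for this illustration.

import random

random.seed(0)

TRUE_OFFENSE_RATE = 0.05                 # identical in both neighborhoods
patrol_share = {"A": 0.7, "B": 0.3}      # "A" is historically over-policed
recorded_arrests = {"A": 0, "B": 0}

for year in range(10):
    for hood in ("A", "B"):
        patrols = int(1000 * patrol_share[hood])
        # Arrests depend on police presence, not just underlying behavior.
        arrests = sum(1 for _ in range(patrols)
                      if random.random() < TRUE_OFFENSE_RATE)
        recorded_arrests[hood] += arrests
    # "Predictive" step: allocate next year's patrols by recorded arrests.
    total = recorded_arrests["A"] + recorded_arrests["B"]
    patrol_share = {h: recorded_arrests[h] / total for h in ("A", "B")}
    print(f"Year {year}: patrol share A={patrol_share['A']:.2f}, "
          f"B={patrol_share['B']:.2f}")

# Despite identical underlying offense rates, neighborhood A keeps receiving
# roughly 70% of patrols and accumulates most of the recorded arrests.
```

The point of the toy model is not precision but direction: a system fed arrest data cannot distinguish between where crime occurs and where police happen to look.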
Tools for predictive policing do more than regurgitate stereotypes about who is presumed to be a criminal and where they are likely to be found. They also “digitize” outmoded ideas about criminality that have taken generations to uproot. Wired[7] explored the inherent contradictions of these technologies, noting how tools designed to anticipate where incidents may occur make those determinations by processing flawed and unreliable data from years prior. LEAs have defended these solutions as purpose-built to provide objective recommendations on how best to deploy personnel and resources. Yet given that the data these systems depend on is skewed against certain groups, the trust placed in predictive policing to serve and protect all communities appears misplaced. While selling their solutions as a way to improve the efficacy of policing, the developers of these data-driven tools have created products in which discrimination is a feature, not a bug.
Watching the Watchers
The drawbacks of these powerful but problematic predictive policing technologies can be summarized in a single phrase: all that glitters is not gold. Tools adopted by LEAs can be misused in ways that flagrantly disregard the privacy of civilians, all while hoarding sensitive information on individuals who have not run afoul of the law. They also rely on data sets about marginalized communities that are inaccurate at best and discriminatory at worst, further cementing deep-seated stereotypes about who poses a serious threat to public order. Weighing these factors, one might expect predictive policing systems to have lost their luster in the eyes of many LEAs. Yet police around the world continue to purchase these products. From Argentina[8] to Germany,[9] LEAs remain captivated by the outward promise of predictive policing technologies while discounting the legitimate dangers they pose. Civilians will, as a result, suffer the consequences of these decisions.
Policymakers must rein in the deployment of these technologies and erect guardrails that uphold civilians’ rights, irrespective of their backgrounds. Advocates for regulation argue that such safeguards must have transparency and accountability as their lodestar. For instance, the city of San Jose, California, adopted AI principles[10] that strictly govern how AI is used across departments, including those tasked with enforcing the law. Guidelines like these can be valuable for gauging the effectiveness of police systems and assessing whether these products have had a negative impact on civilians’ lives. Officials, along with LEA representatives, could also circulate information about the algorithms powering predictive policing tools. This would give communities that have been historically over-policed an opportunity to work together to expose biases in the underlying data sets. LEAs will continue to embrace technologies like predictive policing tools given the nature of their work, yet steps can be taken to ensure that their use benefits all people.
* Aaron Spitler is a researcher specializing in digital technologies and human rights. He has worked with organizations exploring these issues, including the International Telecommunication Union and Harvard University’s Berkman Klein Center for Internet & Society.
[1] Palantir Gotham Europe, Palantir (last visited Jan. 12, 2026), https://www.palantir.com/platforms/gotham/europa/.
[2] SoundThinking Unveils CrimeTracer Gen3: Expanding from Investigations to Agency-Wide Crime Data Solution, SoundThinking (Oct. 17, 2025), https://ir.soundthinking.com/news-events/press-releases/detail/324/soundthinking-unveils-crimetracer-gen3-expanding-from.
[3] Ángel Díaz, Data-driven policing’s threat to our constitutional rights, Brookings (Sept. 13, 2021), https://www.brookings.edu/articles/data-driven-policings-threat-to-our-constitutional-rights/.
[4] Pasco Sheriff’s Office Intelligence-Led Policing Manual, Pasco Cnty. Sheriff’s Off. (last visited Jan. 12, 2026), https://embed.documentcloud.org/documents/20412738-ilp_manual012918.
[5] Rachel Levinson-Waldman & Ivey Dyson, The Dangers of Unregulated AI in Policing, Brennan Ctr. for Justice (Nov. 20, 2025), https://www.brennancenter.org/our-work/research-reports/dangers-unregulated-ai-policing.
[6] Will Douglas Heaven, Predictive policing algorithms are racist. They need to be dismantled., MIT Tech. Rev. (July 17, 2020), https://www.technologyreview.com/2020/07/17/1005396/predictive-policing-algorithms-racist-dismantled-machine-learning-bias-criminal-justice/.
[7] Chris Gilliard, Crime Prediction Keeps Society Stuck in the Past, Wired (Jan. 2, 2022), https://www.wired.com/story/crime-prediction-racist-history/.
[8] Harriet Barber, Argentina will use AI to ‘predict future crimes’ but experts worry for citizens’ rights, The Guardian (Aug. 1, 2024), https://www.theguardian.com/world/article/2024/aug/01/argentina-ai-predicting-future-crimes-citizen-rights.
[9] Marcel Fürstenau, German police expands use of Palantir surveillance software, Deutsche Welle (Aug. 4, 2025), https://www.dw.com/en/german-police-expands-use-of-palantir-surveillance-software/a-73497117.
[10] Maria Lungu, Predictive policing AI is on the rise—making it accountable to the public could curb its harmful effects, The Conversation (May 6, 2025), https://theconversation.com/predictive-policing-ai-is-on-the-rise-making-it-accountable-to-the-public-could-curb-its-harmful-effects-254185.
