By Joseph Scanlon*
Facial recognition technology is becoming an increasingly prominent part of everyday life. Used for purposes as benign as unlocking a phone or sorting digital photos, facial recognition can also be put to far more sinister ends, such as government surveillance. Increasingly, the technology has maneuvered its way into the realm of government and law enforcement, where its benefits and drawbacks alike are magnified.
Government embrace of facial recognition technology varies throughout the world. In China, for example, law enforcement uses notoriously intrusive high-definition cameras to monitor citizens. Alarmingly, in the Xinjiang province, facial recognition algorithms are even used to identify and persecute individuals from religious minorities. In Europe, on the other hand, there is a growing movement to ban police use of facial recognition technology, although the practice is still used in a number of countries. As for the United States, jurisdictions are divided on whether the merits of facial recognition are worth the costs to individual liberty. Some states, like Florida and New York, have been utilizing the technology for nearly a decade, while others, like California, Oregon, and New Hampshire, have banned it. Nonetheless, recent research suggests up to half of American adults are already in a law enforcement facial recognition database.
Proponents of police use of facial recognition laud the technology for its ability to aid law enforcement by identifying suspects and monitoring known criminals. Even if we accept that these gains are realized, however, the costs associated with the technology are enormous. Opponents of facial recognition cite the obvious privacy concerns and threats to free speech, along with fears of false positives (i.e., an innocent person is falsely identified) and false negatives (i.e., a guilty person is not identified). Even worse, however, is that the technology is inherently biased against Black and Brown people, and these vulnerable communities will suffer the most severe consequences if the technology is unleashed on a large scale.
How Is Facial Recognition Used by Police, and Does It Work?
Police contend that facial recognition makes the public safer. Image databases can be developed by scraping millions of images from social media profiles and driver’s licenses, often without consent. The facial recognition software then runs facial analyses of images captured through video surveillance or other mediums to search for potential matches against the database.
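The one-to-many search just described can be sketched in a few lines. What follows is a minimal illustration, not any vendor's actual implementation: it assumes each face has already been reduced to a numeric embedding vector (real systems use deep neural networks for that step), and the names, vectors, and similarity threshold are hypothetical.

```python
import math

def cosine_similarity(a, b):
    """Similarity between two face-embedding vectors (hypothetical data)."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) *
                  math.sqrt(sum(x * x for x in b)))

def search_database(probe, database, threshold=0.6):
    """One-to-many search: return every enrolled identity whose embedding
    is at least `threshold` similar to the probe image, best match first."""
    hits = [(name, cosine_similarity(probe, emb))
            for name, emb in database.items()]
    return sorted((h for h in hits if h[1] >= threshold),
                  key=lambda h: h[1], reverse=True)
```

Because every enrolled face scoring above the threshold comes back as a candidate, the size of the database and the choice of threshold together drive how often innocent people surface as "matches" — the false-positive problem raised above.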
Experts say that high image quality is the key to optimizing use of facial recognition. Unfortunately, unidentified suspects are often recorded on grainy or otherwise poor-quality surveillance footage or photographs, which yields inaccurate and often unusable results. Indeed, research suggests that current facial recognition technology isn't yet the law enforcement game-changer envisioned by proponents. In Florida, for example, the technology was implemented years ago and originally sought to include real-time video streams, but it was scaled back after lackluster results. Today, officials still query the system 4,600 times a month. Of these searches, however, only a small percentage help identify unknown suspects. Likewise, the shortcomings of facial recognition software were experienced firsthand by London police, whose attempts to identify individuals in a crowd with the technology were correct less than 20% of the time.
Use of the technology hasn't been all bad news, however, as one-to-one verification of individuals, such as recognizing the rightful owner of a passport or smartphone, has become extremely accurate. Additionally, facial recognition technology has proven useful in solving crimes like check forgery and identity fraud. These limited uses provide some support that the technology can be utilized effectively in narrow circumstances, but there remain significant dangers to unleashing it broadly as a crime-solving tool.
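The accuracy gap between the two modes is partly structural: one-to-one verification makes a single yes/no comparison against one enrolled template, so a strict threshold can be used without flooding operators with candidates. A sketch under the same hypothetical-embedding assumption (the threshold value here is illustrative, not drawn from any real system):

```python
import math

def cosine_similarity(a, b):
    """Similarity between two face-embedding vectors (hypothetical data)."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) *
                  math.sqrt(sum(x * x for x in b)))

def verify(probe, enrolled, threshold=0.8):
    """One-to-one verification: accept the probe only if it matches the
    single enrolled template. Raising the threshold trades false
    positives (wrong person accepted) for false negatives (owner rejected)."""
    return cosine_similarity(probe, enrolled) >= threshold
```

A phone unlocking for its owner tolerates occasional false negatives (the owner just tries again), which is why verification can run at thresholds that would make one-to-many police searches return almost nothing.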
The Dangers of Facial Recognition
Unimpressive field results are not the only drawback of the invasive technology. An influential 2018 study found that leading facial-recognition software was much worse at identifying women and people of color than it was at identifying white, male faces. People of color are 10 to 100 times more likely than white males to be misidentified by facial recognition software. Additionally, government data shows that facial recognition software used by police produces more false positives when used on images of Black women. Misidentifications have real-world consequences: people of color are frequently wrongfully targeted, arrested, or detained by police. In one recent example, Detroit police's use of facial recognition led to the wrongful arrest of Robert Williams, a Black man arrested in his home in front of his family.
Part of the blame for these outcomes rests in the racist roots of American policing. Even today, Black Americans are more likely to be arrested for minor crimes than their white counterparts, which leads to overrepresentation in police mugshot data. In turn, mugshot photos are used to feed facial recognition databases and subsequently identify suspects of more serious crimes. More egregiously, some cities disproportionately implement facial recognition technology in majority-Black areas, leading to even more misidentifications.
Additionally, the software is created by humans, which means that human biases are built into it. Although many of these biases are likely implicit, one software developer noted that a facial recognition engineer could purposely "make sure that . . . [certain people] were unrecognizable and . . . [others] were misidentified as criminals." A study conducted by MIT lent credence to this statement, finding that a variety of facial recognition programs each exhibited racial bias and struggled to accurately identify women and people of color. Unsurprisingly, another study found that these systems worked best at recognizing middle-aged white men.
What Should Be Done About Facial Recognition?
Regulation of facial recognition technology is sparse, and legislatures have not kept up with its explosive growth in recent years. Left unchecked, the technology will further exacerbate systemic racism and disproportionately impact people of color, who are already subject to discrimination and human rights violations by police. These concerns have not gone unrecognized, and many advocacy groups have pushed for either a suspension of the technology until it can be further researched and refined, or an outright ban. Some have called on Congress to take action against government use of facial recognition technology, and several senators have introduced legislation intended to get the ball rolling.
Nonetheless, the path forward for government and police use of facial recognition technology is unclear. At a minimum, it seems certain that regulation is necessary. The private sector, namely tech giants, is currently lobbying for a weak regulatory landscape. Others advocate looser regulation of the technology's commercial use (e.g., conditioned on user consent) but much stricter regulation of government use. Indeed, many state laws passed recently target government entities rather than those engaged commercially.
Still others argue that regulation is not sufficient, and that the only way to ensure protection of vulnerable communities is through an outright ban. Proponents of a ban argue that the technology is simply not reliable enough, and that its minimal benefits come nowhere near justifying the costs. Even as the technology improves, however, many still feel a ban is necessary, because even the most accurate technology can be used to propagate discriminatory police practices, and mere technical standards or regulations cannot prevent the technology from being utilized in nefarious ways. Given the seemingly never-ending stream of police brutality that plagues American streets, calls for an outright ban are legitimate and persuasive. The unfortunate truth is that until there is a meaningful change to the way the police in this country operate, they simply cannot be trusted with such a powerful tool.
* JLI Online Editor, J.D. Candidate – University of Minnesota Law School, Class of 2023