Cincinnati police should avoid the use of AI security robots | Opinion

If you haven’t heard of Knightscope yet, it may not be long before you are confronted with one of its creations on our streets.

In an age of technological acceleration, the growing overlap between artificial intelligence and policing is giving rise to new societal concerns. Toward the tail end of 2023, the Cincinnati Police Department began to demo "fully" autonomous security robots. As police departments across the country adopt the kinds of technology previously confined to dystopian fiction, it’s easy to feel apprehensive about the impact AI-driven robots may have on our communities.

The creator of these robots is Knightscope, a company said to have conceived them in response to the 2012 Sandy Hook shooting. Its website states that its mission is to "make the United States of America the safest country on the planet." Its creation, a 400-pound (notably, bullet-shaped) machine, uses AI to "classify" people and objects and reports anomalies back to a staffed command center.

Cincinnati’s police department would not be the first to use this technology; it would join a growing list of institutions across the country employing Knightscope’s autonomous security robots (ASRs) in efforts to reduce crime. New York Mayor Eric Adams proclaimed that these robots are "a good investment in taxpayer dollars," citing the benefits of machines that work "below minimum wage" with "no bathroom breaks, no meal breaks." They do, however, need to stop and charge themselves for 20 minutes every two hours.

New York City Mayor Eric Adams introduced the Knightscope K5, a robot that will patrol the Times Square subway station during a two-month trial, during a press conference with city officials on Sept. 22, 2023.

In sociological discussions of surveillance, Michel Foucault’s theories on crime and surveillance are pertinent. Analyzing the panopticon, Jeremy Bentham’s prison design in which every cell is visible from a central watchtower, Foucault (1975) argued that individuals who feel they are being surveilled are likely to self-monitor and, therefore, self-discipline. He called this disciplinary power.

This concept appears intrinsic to Knightscope’s premise. William Santana Li, the company’s CEO, explained: "Very simply, if I put a marked law enforcement vehicle in front of your home or your office, criminal behavior changes." And it appears to work as intended: in one of the company’s better-known deployments, in Huntington Park, California, Knightscope reported a 46% reduction in crime reports and a 27% increase in arrests in the patrolled area. But at what cost?

It is of paramount importance to consider that the data we feed AI is biased and may therefore teach it to reproduce and reinforce inequalities. It is well documented that there are racial and ethnic disparities in the criminal justice system: Black and Latinx people make up 30% of the U.S. population but account for 51% of the jail population.

Given that AI-powered machinery is more often than not trained on datasets dominated by white men, this new technology is unlikely to resolve the issue. In fact, research shows that policing algorithms are often biased against Black defendants. Considering the police’s history of corruption, AI-powered systems embedded with the same biases as their human counterparts are unlikely to be the solution.

Additionally, these robots have been shown to over-police houseless populations: one K5 in San Francisco was dubbed the "anti-homeless robot" for antagonizing people living on the streets. "Digidog," a different surveillance robot previously deployed by the NYPD, was described as "emblematic of police aggressiveness when dealing with poor communities."

Furthermore, the use of automated technology reduces accountability and complicates questions of responsibility. Hope Reese argues that as AI systems become "more autonomous and inscrutable, the accountability gap for constitutional violations threatens to become broader and deeper." As police lose status and respect, AI-powered robots such as the K5 allow departments to deflect public antipathy onto the machines and shift blame away from officers.

Fortunately, it appears that many police departments and security agencies are now abandoning AI-powered predictive policing methods; the Chicago Police Department suspended its use of predictive policing in early 2020. The NYPD also recently removed its K5 robot from the Times Square subway station after being forced to assign officers to escort it to prevent passersby from abusing it. That, and it couldn’t use the stairs.

While these machines appear to offer benefits, the results are dubious. Moreover, many people, including police officers themselves, have little idea what technology these machines use or what its ramifications are. The risk of disproportionately harming Black, Latinx and poor communities, already a huge problem, is not worth it.

I implore the Cincinnati Police Department, among others, to take these issues seriously and abandon its experiments with AI systems. As Reese emphasizes, "it could mean the difference between putting an innocent or guilty person behind bars."

Izzy Jeavons is a sociology PhD student at the University of Cincinnati whose work focuses on technological inequalities.

This article originally appeared on Cincinnati Enquirer: Cincinnati should abandon experiments with AI security robots