NYC subways join airports, police in using AI surveillance. Privacy experts are worried.

New York City’s subway system is the latest to adopt artificial intelligence-powered surveillance, following increased use of similar software by airports and police departments across the country.

The Metropolitan Transportation Authority, the agency that operates the city’s public transportation, quietly rolled out third-party technology to help crack down on fare evaders, NBC reported.

The new policy comes weeks after the Transportation Security Administration announced an expansion of facial recognition software to more than 400 airports. Police in Westchester County, a suburban county north of NYC, also recently revealed that they had used AI to scan license plates and examine vehicles’ driving patterns.

What is AI software used for?

The MTA software was deployed to track fare evaders and is in use at seven subway stations, according to a report the agency published in May. The MTA plans to expand the software to “approximately two dozen more stations, with more to follow” by the end of the year.

“The MTA uses this tool to quantify the amount of fare evasion without identifying fare evaders,” Joana Flores, a spokesperson for the MTA, told USA TODAY.

The AI software is used to count the number of unpaid entries into subway stations, according to the agency's May report. From there, “an evasion rate can then be calculated by comparing the number of unpaid entries to the number of paid entries,” the report states.
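That calculation is simple arithmetic. A minimal sketch with hypothetical counts; the report does not specify whether the denominator is paid entries alone or all entries, so this version uses all entries:

```python
def evasion_rate(unpaid_entries: int, paid_entries: int) -> float:
    """Share of all entries that were unpaid (one plausible reading
    of the report's paid-versus-unpaid comparison)."""
    total = unpaid_entries + paid_entries
    return unpaid_entries / total if total else 0.0

# Hypothetical counts for one station over one day.
print(f"{evasion_rate(unpaid_entries=1_200, paid_entries=8_800):.1%}")  # 12.0%
```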

The data will be used to “cross-check” systemwide estimates of fare evasion. The software, created by AWAIIT, a Spanish company, uses surveillance cameras to scan travelers and send images of potential fare evaders to nearby station agents, as shown in a promotional video.

“The MTA thus will develop – for the first time – a much increased ability to pinpoint evasion spikes by station, by day of week, and by time of day,” the MTA report said. “With the technology providing reliable before and after evasion counts, it will be increasingly possible to test new approaches in search of what really works.”
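The breakdown the report describes, evasion counts by station, day of week, and time of day, maps naturally onto a grouped aggregation. A minimal sketch in pandas using entirely made-up data; the real system’s schema is not public:

```python
import pandas as pd

# Hypothetical entry log; station names and timestamps are invented.
entries = pd.DataFrame({
    "station": ["125 St", "125 St", "Court Sq", "Court Sq"],
    "timestamp": pd.to_datetime([
        "2023-07-03 08:15", "2023-07-03 17:40",
        "2023-07-04 08:05", "2023-07-04 09:30",
    ]),
    "paid": [True, False, True, False],
})

entries["day_of_week"] = entries["timestamp"].dt.day_name()
entries["hour"] = entries["timestamp"].dt.hour

# Evasion rate per station/day/hour: the share of entries that were unpaid.
rates = (entries.groupby(["station", "day_of_week", "hour"])["paid"]
         .apply(lambda s: 1.0 - s.mean()))
print(rates)
```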

Data privacy experts raise alarms on AI use

The MTA reported $690 million in revenue losses from fare evasion in 2022.

The new policy has drawn criticism. Data and privacy experts said the MTA’s initiative doesn't address the underlying causes of fare evasion, which are rooted in poverty and access.

Instead, the program tries “to use technology to solve a problem in a way that is more or less a Band-Aid,” said Jeramie Scott, senior counsel and director of the Electronic Privacy Information Center (EPIC), a Washington, D.C.-based advocacy, litigation and research center.

Caitlin Seeley George, campaigns and managing director for Fight for the Future, a Massachusetts-based digital rights nonprofit, said she is concerned about policies that contribute to a culture of mass surveillance.

“Facial recognition technology makes it so that the concept of privacy is moot because people’s every movement can be tracked and watched,” George said.

Artificial intelligence software is being used at seven subway stations in New York City to track fare evaders, according to the Metropolitan Transportation Authority.

AI becoming a more popular security tool

The use of AI for surveillance and identity detection has risen over the past decade. Although there are no direct federal regulations on the use of AI, about two dozen state or local governments across the United States passed laws restricting or banning facial recognition from 2019 to 2021.

However, certain cities and states, including California, Virginia and New Orleans, have since reversed those restrictions over the past year.

Earlier this month, the TSA announced it will expand its facial recognition program, piloted at 25 airports, to 430 airports across the country over the next several years, calling the pilot “extremely promising.”

The pilot program uses one-to-one matching, which means a passenger’s picture is compared only to the photo on their government-issued ID, such as a driver's license or passport.
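In broad terms, one-to-one matching verifies a live photo against a single enrolled template rather than searching a database of many identities. A schematic sketch; the embeddings, threshold, and function names here are hypothetical, since the TSA's actual pipeline is not public:

```python
import numpy as np

def verify_one_to_one(live_embedding: np.ndarray,
                      id_embedding: np.ndarray,
                      threshold: float = 0.6) -> bool:
    """1:1 verification: the live capture is compared only against the
    template from the traveler's own ID, never a gallery of other faces."""
    a = live_embedding / np.linalg.norm(live_embedding)
    b = id_embedding / np.linalg.norm(id_embedding)
    return float(a @ b) >= threshold  # cosine similarity of unit vectors

# Hypothetical embeddings; a real system would get these from a face model.
rng = np.random.default_rng(0)
id_vec = rng.normal(size=128)
live_vec = id_vec + 0.1 * rng.normal(size=128)  # same person, slight noise
print(verify_one_to_one(live_vec, id_vec))       # True
```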

The TSA facial recognition program is voluntary, and travelers are allowed to opt out in favor of an alternative verification process, a TSA spokesperson told USA TODAY. Data captured by the software is not stored.

"TSA is committed to protecting passenger privacy, civil rights, and civil liberties and ensuring the public’s trust as it seeks to improve the passenger experience through its exploration of identity verification technologies," a TSA spokesperson wrote in a statement to USA TODAY.

Scott, of the Electronic Privacy Information Center, said he fears facial recognition use in airports will become mandatory, given that TSA Administrator David Pekoske said during a South by Southwest fireside chat in April that mandatory use would be the goal.

“I know when I submitted a passport application, I did so to obtain a passport, not for the State Department to retain my photo and then use it for facial recognition,” Scott said. “Taking information provided for one purpose and then using it for a secondary purpose – that is what AI is.”

Westchester County police used a database of about 1.6 billion license plate records to monitor traffic and flag driving patterns associated with illegal activity, leading to the arrest of a Massachusetts man on drug trafficking charges, according to news reports.

In Miami, police confirmed to the BBC that the department uses Clearview AI, software that lets law enforcement customers upload a photo of a face and match it against billions of images the company has collected. Officers use the software about 450 times a year, for every type of crime, Miami police told the BBC. Clearview said it has run nearly a million searches for United States police.
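Unlike the TSA's one-to-one check, this is one-to-many identification: a probe face is searched against an entire gallery. A schematic sketch with made-up embeddings; Clearview's actual system is proprietary, so everything here is an assumption for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical gallery of enrolled face embeddings, one row per identity.
gallery = rng.normal(size=(10_000, 128))
gallery /= np.linalg.norm(gallery, axis=1, keepdims=True)

# Probe: a noisy view of identity 42.
probe = gallery[42] + 0.1 * rng.normal(size=128)
probe /= np.linalg.norm(probe)

# 1:N search: rank every enrolled identity by cosine similarity.
scores = gallery @ probe
best = int(np.argmax(scores))
print(best, round(float(scores[best]), 3))  # expected: 42 with a high score
```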

Earlier this month, the TSA announced an expansion of its facial recognition pilot program and will now use the software at 430 airports across the country.

Use of AI sparks discrimination concerns

The use of AI by national agencies is particularly troubling because the software is flawed and often discriminates against people of color, said Albert Cahn, executive director of the Surveillance Technology Oversight Project, a nonprofit legal group based in New York City that advocates for privacy rights.

In some real-world trials, facial recognition software has been found to be inaccurate in as many as 98% of cases. When used by law enforcement, the software is often less accurate on darker-skinned faces, leading to racial profiling and false arrests, according to a 2016 study.

A 2018 study by the Massachusetts Institute of Technology also found commercial AI systems had an error rate of 34.7% for dark-skinned women, compared to 0.8% for light-skinned men.

“This technology would be creepy if it worked perfectly, but it’s even more disturbing that it’s been shown to be discriminatory,” Cahn said.

George, of Fight for the Future, also expressed concern that AI use in NYC’s subway stations will lead to mass surveillance and tracking of people’s movements.

“The MTA has said that they’re only using it to count people evading fare but the fact that this system is in place just opens up the possibility that it could be used on all travelers and could become a broader tool of surveillance and policing,” George said.

In March, a group of House and Senate lawmakers reintroduced the Facial Recognition and Biometric Technology Moratorium Act, which would stop the federal government’s use of facial recognition technologies. Data privacy experts have called for federal regulations and oversight of AI to ensure that people’s data is protected and not misused by companies.

“We just can’t predict every way that AI may be used to leverage the information that the federal government already has and there’s a lack of protections in place to prevent the federal government from doing that,” Scott said.

This article originally appeared on USA TODAY: NYC and other officials use AI to monitor crime, citizens