The A.I. Surveillance Companies That Say They Can Thwart Mass Shootings and Suicides

Our world has long been filled with cameras peering out over streets, malls, and schools. Many have been recording for years. But for the most part, no one ever looks at the footage. These little devices, perched on shelves and poles, exist primarily to create a record. If something happens and someone wants to learn more, they can go back.

Property managers can set up real-time alerts, but they’ll soon tire of false alarms. Because many devices are simply detecting motion—as represented by a change in pixels—they cannot differentiate between an intruder waving a gun and a squirrel chasing an acorn.
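
To make that limitation concrete, here is a minimal sketch, in Python with OpenCV, of the kind of pixel-difference "motion detection" described above. The video file name and the alert threshold are hypothetical, and this illustrates the general technique rather than any vendor's actual code.

```python
# Minimal frame-differencing "motion detection" sketch (illustrative only).
import cv2

ALARM_THRESHOLD = 5000  # number of changed pixels that triggers an alert (arbitrary)

cap = cv2.VideoCapture("camera_feed.mp4")  # hypothetical recorded feed
ok, prev = cap.read()
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Difference between consecutive frames: any change in pixels counts,
    # whether it comes from an intruder, a squirrel, or a swaying branch.
    diff = cv2.absdiff(prev_gray, gray)
    _, mask = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)
    if cv2.countNonZero(mask) > ALARM_THRESHOLD:
        print("ALERT: motion detected")  # no idea *what* moved
    prev_gray = gray

cap.release()
```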

Due to advances in artificial intelligence, the point of the security camera is undergoing a radical transformation. Over the past few years, a growing number of buzzy startups and long-standing security-camera companies have begun offering customers—ranging from Fortune 500 companies to corner markets—abilities long limited primarily to billion-dollar border surveillance systems. Those capabilities vary dramatically. But they’re all way past motion detection. Unlike their predecessors, they most definitely know the difference between a person and a car.

More significantly, many of these systems promise to instantaneously flag or even predict certain types of activity based on what a person is holding, whether a face seems to match a photo of a specific individual, and other cues that A.I. tools treat as predictors of “suspiciousness.”

In other words, millions of cameras in public and private spaces throughout the country are currently turning from documentary receptacles into digital security guards. And because most of the physical cameras were already there, this shift may pass largely unnoticed. You’ve probably missed much of what’s happened already.

Some of the use cases are quite compelling. South Korea has one of the highest suicide rates in the world. Security staff were already watching CCTV cameras along the 300-mile-long Han River, but they were missing people in crisis. And so, around two years ago, researchers at the Seoul Institute of Technology began to train an A.I. tool on existing videos so that it would learn “the pattern of those likely to jump off,” then “detect and forecast hazardous situations,” officials announced in 2021. The program was so successful that the city soon expanded it to other bridges, according to a Japanese news report from a year later. (It’s not clear how often it has prevented someone from jumping, but it seems to have reduced rescue times.)

I was not able to find a similar use case in the U.S. I did learn, however, that a related technology is being used to prevent an HR subcontractor from overbilling. Elsewhere, A.I. companies are promising nothing less than using the technology to detect and help stop school shootings.

Mike Haldas, the co-founder of CCTV Camera Pros, an online vendor of security cameras based in Florida, first rolled out the Viewtron A.I. camera two years ago. It is now one of his bestsellers. At first, the selling point was pretty basic: Unlike motion-detection cameras, which might be triggered by a tree blowing in the wind, these systems could identify a person.

But as the cameras have gotten even better, Haldas’ clients now want to solve their problems with facial recognition. It’s possible “the A.I. can actually detect faces and match against a database and then take some type of alarm action,” he said. To be clear, this is not Clearview-style facial recognition; that company created a notorious privacy-invading tool that makes it possible to identify people by comparing their faces against a database scraped from social media. Rather, customers must provide their own reference images. These might include photos of a company’s employees or of a person a store believes stole from it.
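
For a sense of how this kind of database matching can work, here is a rough sketch using the open-source face_recognition library. The image file names, the tolerance value, and the “alarm action” are all hypothetical; this is not Viewtron’s actual implementation, just the general pattern of matching live faces against customer-supplied reference photos.

```python
# Sketch of matching faces in a camera frame against customer-provided references.
import face_recognition

# Reference image supplied by the customer (e.g., an employee or a suspected shoplifter).
known_image = face_recognition.load_image_file("employee_jane.jpg")
known_encoding = face_recognition.face_encodings(known_image)[0]

# A frame captured from the camera (a still image here, for simplicity).
frame = face_recognition.load_image_file("camera_frame.jpg")

for encoding in face_recognition.face_encodings(frame):
    # compare_faces returns True when two faces are close enough in embedding
    # space; the tolerance is a tunable similarity threshold.
    match = face_recognition.compare_faces([known_encoding], encoding, tolerance=0.6)[0]
    if match:
        print("Match against reference database: trigger the configured alarm action")
```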

Haldas has been surprised by all the interest in verifying “attendance.” One client, for example, explained to Haldas that his company had been outsourcing its human resources operation. The HR workers were already checking in with their faces at a kiosk, but they often claimed that the kiosk had failed. He suspected that they were overbilling, so he purchased the new video system. “The problem is solved,” Haldas said.

Haldas also recently rolled out loitering and crowd-density features, which have prompted similar enthusiasm. One homeowners association told him that it is using them to catch teenagers gathering in the pool house after hours.

Some of the current uses of this formerly cutting-edge technology may seem comically banal. But we’re in the early stages of the trickle-down phase, and we should be wary of where it may lead, says Beryl Lipton, an investigative researcher at the Electronic Frontier Foundation. She first became aware of A.I. integration into video surveillance at borders and stadiums maybe two years ago. It quickly became clear that companies were interested in doing far more than thwarting terrorist acts and verifying immigration status. Radio City Music Hall, for example, stands accused of using its facial recognition security system to block an attorney involved in a case against its parent company from attending a Rockettes show with her daughter’s Girl Scout troop.

As expectations about what these products should detect increase, so too do the opportunities for them to mislead us into punishing or profiling innocent people. “Is this a person who is being hit, or are they just going in for a hug?” Lipton asked. Haldas’ systems do not offer property managers the ability to flag, record, or call police based on a hit or a hug, but other companies, using more advanced computer vision, do.

Ambient.ai emerged from stealth mode last year with $52 million in funding and the tag line “From Reactive to Proactive.” The San Jose–based company is already working with airports, data centers, art museums, schools, utility companies, and corporate campuses across the world, two team members told me in an interview. Unlike CCTV Camera Pros, the company emphasizes that it will never use any kind of facial recognition. Rather, it has created a system that works in collaboration with an existing video system to alert security teams to the appearance of certain objects—like a gun or knife—and scenarios, like a person entering a door without scanning a badge. Ambient.ai says it has already been able to prevent the theft of “very high value physical inventory” and “IP-related assets.”

Ambient.ai is among the emerging security companies marketing themselves as a way to thwart active-shooter events. To prepare its system to do this, the company trains it on actual and re-created active-shooter videos, images of relevant objects like a gun or backpack, and footage of the school or corporate campus in question. Over the past year, the tool has become less reliant on humans for training, said Benjamin Goeing, who works on the product side at Ambient.ai. The system is becoming increasingly capable of deducing what is and is not alert-worthy on its own by using context clues, he said.

To show me what on earth he was talking about, Goeing provided this example. Here you have two men writing on a flat surface. The company’s A.I. platform knows to flag the situation on the right as suspicious, even without ever having trained on graffiti videos, he said.

On the left, a man writing on a whiteboard; on the right, a man tagging a garage door. (Image: Ambient.ai)

Rather, the system would consider other clues, including 1) the time of day, 2) the fact that a building wall is not typically used for writing, 3) the absence of a paint bucket, and 4) previous footage at that location.

“So, you put all these things together, and then you arrive at the conclusion that this is a suspicious activity,” Goeing said. If the system is wrong—and all that person is actually doing is painting a company logo—it should not, in theory, be a problem. Hopefully, all that happens is that someone on a security team reviews it and moves on.
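
For a sense of how context clues like these might be combined, here is a toy sketch in Python. The clues, weights, and threshold are all invented for illustration; Ambient.ai has not published how its system actually weighs this kind of evidence.

```python
# Toy illustration of rolling context clues into a single "suspiciousness" call.
from dataclasses import dataclass

@dataclass
class SceneContext:
    person_is_writing: bool          # detected activity
    surface_is_wall: bool            # walls aren't normally writing surfaces
    hour: int                        # 24-hour clock
    paint_bucket_present: bool       # a hired painter would likely have supplies
    prior_writing_at_location: bool  # has this spot been written on before?

def suspicion_score(ctx: SceneContext) -> float:
    """Accumulate evidence from context clues into a single score."""
    score = 0.0
    if ctx.person_is_writing and ctx.surface_is_wall:
        score += 0.4
    if ctx.hour < 6 or ctx.hour > 22:  # late night or early morning
        score += 0.3
    if not ctx.paint_bucket_present:
        score += 0.2
    if not ctx.prior_writing_at_location:
        score += 0.1
    return score

ctx = SceneContext(True, True, hour=2, paint_bucket_present=False,
                   prior_writing_at_location=False)
if suspicion_score(ctx) >= 0.7:
    print("Flag for human review")  # a person still decides what happens next
```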

But it’s not difficult to imagine how a machine-generated assessment of suspiciousness could still go wrong—if not next week, then later, when some private club is using a spinoff. We know that training sets reflect and amplify all kinds of biases that already exist in people. If on-site security calls police without carefully watching the footage that prompted the alert, you can imagine that something harmless could escalate.

Actually, in another version of this scenario, no one has to call because police get direct access to all participating cameras. That’s the approach that Fusus, another rapidly growing player in this space, is taking. Last year, the company even managed to persuade Rialto, California, to require every new commercial and residential development in the city of 100,000 to install a Fusus-enabled camera system.

Like Ambient.ai, Fusus has also marketed itself as a means to stop school shootings. The Georgia-based company has forged partnerships with schools in California, Florida, North Carolina, and Ohio. It’s not hard to imagine why parents would want to embrace a program that could potentially alert police to the presence of a gun or intruder before someone gets hurt. Nor is it difficult to understand why some privacy advocates say that constantly recording young people does not facilitate a great learning experience.

Either way, one of Ambient.ai’s most cited and compelling school-shooter case studies hints at another potential downside of these systems: the illusion of security. As Inc. reported last year, founder Shikhar Shrestha at first had trouble attracting investors despite his pitch to “prevent every physical security incident possible.” When he learned that a teacher had been assaulted at the Harker School, a prep school in San Jose, and that the intruder was captured on a camera no one was watching, he pitched the school on his system.

In a marketing brochure, the school’s director of security raves about the system. He states that faculty and staff “feel safer,” adding, “You can’t get that feeling without having a system that works.” The brochure also references, as an example of success, an incident in which someone ended up in police custody.

Sadly, while reading up on the school, I happened to learn that in August, a student drowned in the pool. The system did not operate in that area, and you can understand why a school wouldn’t want it to. But it struck me as a reminder that a vigilant person always has something over a vigilant video system.