AI drives silent arms race in security field

Dec. 30—ABOUT THIS SERIES — The applications of artificial intelligence, or AI, are growing exponentially and will continue to do so as the technology advances.

Today, CNHI and the Times West Virginian continue an ongoing series looking at AI and its potential benefits and concerns in various parts of everyday life. This latest part revisits AI and its use when it comes to cybersecurity. Previous parts of the series have looked at AI's use in education, health care, business, social media, emergency response, travel and journalism.

Artificial intelligence can provide a new frontline in the perpetual war between white-hat and black-hat hackers.

AI has the potential to be a game changer in digital security because of its capability to detect threats, experts say. Thanks to algorithms and machine learning, its ability to sift through an ocean of data to pinpoint and neutralize threats puts it far beyond human capability, perhaps offering an ever-alert, tireless sentinel safeguarding important digital fortresses.

"AI is akin to a double-edged sword. On the one hand, it's the vigilant guardian of the digital realm," Joseph Harisson, CEO of the Dallas-based IT Companies Network, said. "AI algorithms act like digital bloodhounds, sniffing out anomalies and threats with a precision that human analysts might miss."

However, it's that awesome power to quickly analyze large datasets that also makes AI a potent tool for criminals and other malicious actors.

"They use AI to craft more sophisticated cyberattacks, turning the hunter into the hunted," Harisson said. "These AI-powered threats are like chameleons, constantly evolving to blend into their digital surroundings, making them harder to detect and thwart. It's a perpetual cat-and-mouse game, with both sides leveraging AI to outmaneuver the other."

Researchers are building computer networks that resemble the structure of the human brain, work that has led to breakthroughs in AI research. This research isn't just used to power cybersecurity, but to enhance real-world security as well. Biometric research, such as fingerprint and facial recognition, helps law enforcement secure important sites like airports and government buildings. Security companies also use these technologies to secure their clients' property. It has even reached the home sector, with companies like Ring providing home security solutions.

Katerina Goseva-Popstojanova, professor at the Lane Department of Computer Science and Engineering at West Virginia University, said AI has been part of the cybersecurity landscape for a long time. Machine learning, an integral part of AI, has been used for various purposes in the field.

Take anti-virus software, Goseva-Popstojanova said. Software like Norton or Kaspersky anti-virus has built-in AI that is trained on known viruses so it can detect them on host machines. Email spam filters work the same way. Although ChatGPT has made AI a household name, the technology itself has been in use for a long time behind the scenes.
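To make that concrete, here is a minimal sketch of the kind of supervised training such filters rely on, using scikit-learn and an invented four-message corpus; real products like the ones Goseva-Popstojanova mentions train on vastly larger datasets with proprietary models.

```python
# Minimal sketch of a learned spam filter: train a classifier on known
# labeled examples, then score new messages. The corpus is invented for
# illustration and does not reflect any real product's training data.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Tiny labeled corpus: 1 = spam, 0 = legitimate.
messages = [
    "You won a free prize, click here now",
    "Urgent: verify your account password",
    "Meeting moved to 3 p.m., see agenda attached",
    "Quarterly security report is ready for review",
]
labels = [1, 1, 0, 0]

# Convert text to weighted word features, then fit a Naive Bayes model.
model = make_pipeline(TfidfVectorizer(), MultinomialNB())
model.fit(messages, labels)

# Score an unseen message the same way a mail filter would.
print(model.predict(["Click here to claim your free account prize"]))
# Expected: [1], i.e. flagged as spam.
```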

Tearing down fortresses

Aleksa Krstic, CEO of Localizely, a Belgrade, Serbia-based software-as-a-service translation platform, said AI-powered cameras can analyze video feeds in real time and identify objects or potential threats.

"AI algorithms can recognize individuals, enabling more effective access control and tracking," he said. "AI systems can learn what 'normal' behavior looks like in a specific environment and raise alerts when deviations occur."

However, AI can also be used to tear down the cyber fortresses governments and companies create. Krstic said AI can automate attacks at scale, generating sophisticated phishing emails or launching automated botnets. Through deepfake videos and its ability to generate content at scale, AI can spread misinformation or manipulate public opinion for personal gain.

"The way I look at these days, everything can be used for good or bad," Goseva-Popstojanova said. "So let's say dynamite. You can use dynamite to make tunnels or mines or you can use dynamite to go to kill people. It's the same with AI."

Goseva-Popstojanova said generative AI like ChatGPT can be used by cybercriminals to scour the internet for publicly available information and quickly build a profile of a person. That profile can then be used in furtherance of a crime, whether identity theft, scamming or spamming. The weakest link in cybersecurity is the human element. Social engineering, the use of social skills to manipulate an individual into performing a desired action, becomes much easier with AI tools such as deepfakes or voice impersonation.

"There's something that's called phishing, or vishing, if it's done by phone and now it is done by text messages, where somebody pretends to be somebody and scams the person," she said. "One of the reasons the MGM resorts attack happened, was it wasn't anything sophisticated. Just somebody who used a social engineering attack to get the information necessary to log into their system."

A cyberattack on MGM Resorts this fall cost the company millions of dollars in lost revenue, exposed the personal information of tens of millions of loyalty rewards customers and disabled some onsite computer systems.

Fooling AI

In the physical world, criminals can resort to tactics like face spoofing to fool AI. The technique can involve simple measures, like holding up a photo of a person to fool facial recognition. Or, if someone wants to avoid recognition in public, a hoodie made from a special material that reflects light differently from skin can be worn to break the facial recognition algorithm. More sophisticated AI looks for signs of life to avoid being fooled by a photo; however, a video of a person's face might do the trick, and makeup, masks and 3D-printed masks can all be used. Finally, there's hacking the database itself and changing the parameters so that the attacker's face or fingerprint is accepted by the system.

Adversarial machine learning is the field of research that studies how machine learning can be used to attack other AI systems. Goseva-Popstojanova said it is a huge field of research today, looking for ways algorithms can be fooled into classifying malicious activity as benign. This allows researchers to find more robust ways to secure a system. A previous version of ChatGPT could be fooled into releasing individuals' private information, such as email and home addresses, by prompting it to repeat specific words endlessly. Researchers deliberately worked on ways to break the AI to release this information, then reported it to OpenAI so the flaw could be patched.
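As a toy illustration of that kind of evasion, the sketch below trains a simple linear classifier on invented two-feature "benign" and "malicious" data, then nudges a malicious sample against the model's weight vector until it is labeled benign; attacks on production systems are far more sophisticated, and the features here are made up.

```python
# Toy adversarial-ML evasion: step a "malicious" sample in the direction
# that most lowers its malicious score until a linear model mislabels it.
# Data and feature meanings are invented for illustration only.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

# Invented 2-feature space, e.g. [suspicious API calls, packed-code score].
benign = rng.normal([2, 2], 1, size=(200, 2))
malicious = rng.normal([8, 8], 1, size=(200, 2))
X = np.vstack([benign, malicious])
y = np.array([0] * 200 + [1] * 200)  # 1 = malicious

clf = LogisticRegression().fit(X, y)

# Start from a clearly malicious point and step against the sign of the
# weight vector (a crude, FGSM-like perturbation) until the label flips.
x = np.array([8.0, 8.0])
w = clf.coef_[0]
while clf.predict([x])[0] == 1:
    x -= 0.5 * np.sign(w)

print("evasive sample:", x, "classified as:", clf.predict([x])[0])
# Expected: classified as 0, i.e. the malicious sample now passes as benign.
```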

One thing is clear: Pandora's box is open, and AI is part of the world now, officials said. Machine algorithms and code sit behind the veneer of everyday life, and the invisible war between white-hat and black-hat hackers will help define life for people all around the world.

In October, FBI Director Christopher Wray spoke at a conference with leaders from the Five Eyes, a coalition of the U.S., the U.K., Canada, Australia and New Zealand that emerged in the wake of World War II to share intelligence and collaborate on security. The conference focused on China, which Wray called the foremost threat to global innovation, accusing the country's government of stealing AI research in furtherance of its own hacking efforts. AI thus extends from the individual level through the global policy level.

"We are interested in the AI space from a security and cybersecurity perspective and thus proactively aligning resources to engage with the intelligence community and our private sector partners to better understand the technology and any potential downstream impacts," the FBI national press office wrote in an email. "The FBI is particularly focused on anticipating and defending against threats from those who use AI and Machine Learning to power malicious cyber activity, conduct fraud, propagate violent crimes, and threaten our national security. We are working to stop actors who attack or degrade AI/ML systems being used for legitimate, lawful purposes."

Dhanvin Sriram, founder of Prompt Vibes and an AI expert, said machine learning has more than proved its worth by swiftly analyzing data and finding patterns that might indicate risk. However, caution must be employed when assessing any new paradigm-shifting technology.

"The real challenge is to develop AI systems that not only beef up defenses but also outsmart malicious AI," he said. "It's a constant cat-and-mouse game where staying ahead requires ongoing innovation and a mindful approach to ethical considerations. In this dynamic security landscape, the clash between AI-driven defense and malicious AI underscores the need for continuous advancements to ensure AI remains a force for protection, not exploitation."

Reach Esteban at efernandez@timeswv.com