Watch out - ChatGPT is being used to create malware

An abstract image of digital security.

The world's most popular chatbot, ChatGPT, is having its powers harnessed by threat actors to create new strains of malware.

Cybersecurity firm WithSecure has confirmed that it found examples of malware created by the notorious AI writer in the wild. What makes ChatGPT particularly dangerous is that it can generate countless variations of malware, which makes them difficult to detect.

Bad actors can simply give ChatGPT examples of existing malware code and instruct it to create new strains based on them, making it possible to produce malware without nearly the same investment of time, effort, and expertise as before.

For good and for evil

The news comes as talk of regulating AI abounds, to prevent it from being used for malicious purposes. There was essentially no regulation governing ChatGPT's use when it launched to a frenzy in November last year, and within a month it had already been hijacked to write malicious emails and files.

There are certain safeguards in place internally within the model that are meant to stop nefarious prompts from being carried out, but there are ways threat actors can bypass these.


Juhani Hintikka, CEO at WithSecure, told Infosecurity that AI has usually been used by cybersecurity defenders to find and weed out malware created manually by threat actors.

Now, however, with powerful AI tools like ChatGPT freely available, the tables appear to be turning. Just as remote access tools have long been co-opted for illicit purposes, so too now is AI.

Tim West, head of threat intelligence at WithSecure, added that "ChatGPT will support software engineering for good and bad and it is an enabler and lowers the barrier for entry for the threat actors to develop malware."

And while the phishing emails that ChatGPT can pen are usually spotted by humans, as LLMs become more advanced it may become more difficult to avoid falling for such scams in the near future, according to Hintikka.

What's more, with ransomware attacks succeeding at a worrying rate, threat actors are reinvesting their proceeds and becoming more organized, expanding operations by outsourcing and deepening their understanding of AI to launch more successful attacks.

Hintikka concluded that, looking at the cybersecurity landscape ahead, "This will be a game of good AI versus bad AI."