
Researchers have cautioned that cybercriminals are using ChatGPT, an artificial intelligence (AI) chatbot that provides human-like responses to queries, to create harmful tools that can steal your data.
Check Point Research (CPR) experts have identified the first such examples of cybercriminals exploiting ChatGPT to create malicious code.
In underground hacking forums, threat actors are using it to build “infostealers,” create encryption tools, and facilitate fraudulent activity.
The researchers warned that cybercriminals’ use of ChatGPT is expanding rapidly, both to scale malicious activity and to teach it to less experienced actors.
“Cybercriminals are finding ChatGPT attractive. In recent weeks, we’re seeing evidence of hackers starting to use it to write malicious code. ChatGPT has the potential to speed up the process for hackers by giving them a good starting point,” said Sergey Shykevich, Threat Intelligence Group Manager at Check Point.
ChatGPT can be used for beneficial purposes, such as assisting developers in writing code, as well as for harmful ones.
On the 29th of December, a well-known underground hacking forum featured a thread titled “ChatGPT – Benefits of Malware.”
The publisher of the thread revealed that he was using ChatGPT to experiment with recreating malware strains and techniques described in research publications and write-ups about common malware.
“While this individual could be a tech-oriented threat actor, these posts seemed to be demonstrating [to] less technically capable cybercriminals how to use ChatGPT for malicious purposes, with real examples they can immediately use,” the report said.
A threat actor published a Python script on the 21st of December, highlighting the fact that it was the “first script he ever developed.”
The hacker claimed that OpenAI had given him a “good (helping) hand to finish the script with a nice scope” after another cybercriminal remarked that the code’s style resembled OpenAI code.
This could imply that would-be cybercriminals with little to no development experience could use ChatGPT to create malicious programs and advance to become fully fledged cybercriminals with the necessary technical skills, the report said.
“Although the tools that we analyse are pretty basic, it’s only a matter of time until more sophisticated threat actors enhance the way they use AI-based tools,” Shykevich said.
OpenAI, the company behind ChatGPT, is apparently seeking funding at a valuation of around $30 billion.
Microsoft invested $1 billion in OpenAI and is now promoting ChatGPT applications for tackling real-world challenges.