AI as a dark force in cybercrime


Two research reports examine the current use of AI in attacks and, by analyzing dark web forums, the attitude of cybercriminals toward artificial intelligence. The surprise: not every criminal is convinced of the benefits of AI.

Sophos today published two reports on the use of AI in cybercrime. The first report, “The Dark Side of AI: Large-Scale Scam Campaigns Made Possible by Generative AI”, uses a specific case study to examine how fraudsters could use technologies such as ChatGPT in the future to carry out large-scale fraud attacks with minimal technical skill. The second report, “Cybercriminals Can’t Agree on GPTs”, presents an investigation of various dark web forums. The results show that some cybercriminals remain skeptical about the current use of chatbots and other AI technologies, despite the potential of the new technology.

“We found numerous posts on the Dark Web about the potential negative impacts of AI on society and the ethical implications of its use. In other words, at least for now, it appears that cybercriminals are having the same debates about artificial intelligence as the rest of us,” said Christopher Budd, director of X-Ops research at Sophos.

AI: The dark side of artificial intelligence

Using a simple eCommerce template and large language model (LLM) tools such as GPT-4, Sophos X-Ops was able to create a fully functional website with AI-generated images, audio and product descriptions, as well as a fake Facebook login page and a fake checkout page designed to collect personal login and credit card information. Minimal technical knowledge was required to create and operate the website. The same tooling also made it possible to create hundreds of similar websites in minutes with the click of a button.

“It is natural and expected that criminals will use new technologies to automate their operations in order to be as effective as possible,” said Ben Gelman, senior data scientist at Sophos. “The original creation of spam emails was a crucial step in the history of fraud, as it significantly changed the scale of the criminal playing field. The current development in artificial intelligence has similar potential: as soon as there is an AI technology that can generate complete, automated threats, it will be used. The currently observed integration of generative AI elements into classic fraud, for example through AI-generated texts or photos to lure victims, is just the beginning.”

Regarding the intentions of the current research project, Gelman says: “One reason we are conducting the current project is to stay one step ahead of the criminals. By creating a system for creating fraudulent websites at scale that is more advanced than the tools criminals currently use, we have a unique ability to analyze and prepare for the threat before it spreads.”

Cybercriminals question the potential of GPTs & Co

In the second part of its AI research offensive, Sophos examined discussions about AI on dark web forums. While the use of AI by cybercriminals appears to be in its infancy, threat actors on these platforms are already intensively discussing its potential for social engineering. One example is the current “pig butchering” wave of romance scams.

In addition, Sophos found that most posts related to compromised ChatGPT accounts offered for sale and to “jailbreaks” — ways to bypass the protections built into LLMs so that cybercriminals can abuse the tools for malicious purposes. The research team also found ten ChatGPT-based applications that their developers claimed could be used for cyberattacks and malware development. However, the effectiveness of such tools was strongly doubted by parts of the dark web community, and they were sometimes even seen as attempts to scam other criminals with useless programs.

Debates about artificial intelligence

“While we have seen some cybercriminals attempt to create malware or attack tools using LLMs, the results were rudimentary and often met with skepticism from other users,” Budd added.

For more information on AI-generated scam websites and threat actors’ attitudes toward LLMs, see the full English-language reports “The Dark Side of AI: Large-Scale Scam Campaigns Made Possible by Generative AI” and “Cybercriminals Can’t Agree on GPTs”.

More at Sophos.com



About Sophos

More than 100 million users in 150 countries trust Sophos. We offer the best protection against complex IT threats and data loss. Our comprehensive security solutions are easy to deploy, use and manage, and offer the lowest total cost of ownership in the industry. Sophos provides award-winning encryption solutions as well as security solutions for endpoints, networks, mobile devices, email and the web, backed by SophosLabs, our worldwide network of in-house analysis centers. Sophos is headquartered in Boston, USA, and Oxford, UK.


 
