Is ChatGPT changing the future of IT security?


OpenAI's ChatGPT software is making waves. With the help of AI, the chatbot answers a wide variety of questions remarkably eloquently. So it should come as no surprise that, as with every new technology, criminals are already thinking about how to use these capabilities for their own ends.

A joint study by Europol, UNICRI and Trend Micro investigated exactly this. According to its findings, the OpenAI software could be used to generate even more convincing social engineering lures such as phishing or BEC emails. Other security researchers have since examined fraudulent emails generated with the currently hyped AI and obtained alarming results. Criminals also already appear to be discussing in underground forums how to bring ChatGPT's capabilities to the dark web.

The role of ChatGPT

It is clear that ChatGPT is not the proverbial Pandora's box that has now been opened, but merely the first stage of a foreseeable development that will sooner or later lead to the use of AI in cybercrime. For OpenAI, the release may be the preliminary culmination of a long development, but for IT security, the journey is just beginning. ChatGPT demonstrates new possibilities and will inspire others to do the same. The underlying technology will become available to the general public and be used cost-effectively and sensibly as a component of larger solutions, a common and in itself positive development for IT. The other side of the coin is that criminals will also gain access and use the technology just as efficiently.

Phishing/attack emails

Phishing emails are part of the standard repertoire of most attack types, used to create an initial foothold for further criminal activity. And that is often not so easy. For this reason, a separate service sector has emerged in the digital underground that offers so-called "Access-as-a-Service" in good quality; it is simply a question of price. The problem for the criminals is that both technology and human intuition are capable of intercepting such attacks, which is why they are launched either en masse (in the billions) or, at great expense, with human interaction.

Using AI to increase effectiveness therefore sounds very tempting for this group of perpetrators. The current state of the art with ChatGPT can already help formulate emails more credibly. The awkward wording and spelling mistakes often pointed out in security awareness training disappear, and new content can be generated more quickly, making the emails appear "more attractive".

Error-free emails fool users more easily

But "better" phishing emails aren't the worries security researchers have when considering the use of AI in Access-as-a-Service. Rather, it is feared that it could be possible to create an AI in such a way that it is able to conduct interesting communication with a victim and thus reduce the currently enormous effort involved in targeted attacks. If this succeeds, future phishing mails can no longer be distinguished from real business communication.

Victims are then confronted with personalized attacks that incorporate knowledge available via social media, for example. The Emotet malware has already demonstrated the effect this can have by sending its attack email as a reply to an email the victim had actually sent earlier, fortunately without AI support and with generically generated content. This method, known as dynamite phishing, has already proven extremely effective.

Scams – Business Email Compromise (BEC)

The progress AI has made in recent years can be observed above all in language and interpersonal communication. What is special about ChatGPT is that the software not only reproduces what already exists but also generates interesting new content for people. This suggests that cybercriminals will use it in this area, since all scam variants revolve around interpersonal communication.

In the corporate sector this is, for example, the Business Email Compromise (BEC) method, in which an employee is fooled into believing that they are communicating with their boss or an important customer. In the private sphere there is likewise a wide variety of schemes, from romance scams to the notorious "Nigerian prince". In short, the victim is persuaded, in direct and often lengthy communication, to thoughtlessly transfer money to the perpetrator. With ChatGPT's current capabilities, such emails can already be culturally tailored, for example embellished with poetry.

Language barriers? No problem for the AI

Here too, however, it is the next expansion stage that causes concern. The first barrier likely to fall is language. The AI is already available in German today. Conventional translation tools can render words or sentences into German, but they do a poor job of masking a person's cultural background; an AI can. Related techniques already exist in the field of deepfakes, allowing faces to be swapped in real time and making identification via video more difficult.

In addition, there are many proven scam schemes. An AI can therefore easily be trained on successful cases and learn how best to convince people. Perpetrators can not only increase their efficiency but also the number of victims they attack in parallel. And an AI specialized in fraud could enable entirely new attack scenarios that we cannot yet assess at all.

Malware Creation by AI

Richard Werner, Business Consultant at Trend Micro (Image: Trend Micro)

ChatGPT can also develop and debug software and code, so there is at least the theoretical possibility of generating malware with it. This has already been demonstrated in proof-of-concept tests. Although the OpenAI developers regularly adjust their policies to prevent this, examples of how it can be done keep surfacing.

There are still hurdles: the generated code works, but only if the requester provides a precise description. It is also debated that AI-generated code is not yet convincing, especially in professional software development. ChatGPT is therefore likely to be of relatively little use to determined cybercriminals today. Here too, however, the next expansion stage is foreseeable.

It is clear that sooner or later attack software will be written by AI. This could, for example, shrink the window between the discovery of a vulnerability, its patch, and a cyber attack exploiting it to just a few minutes, even for criminals with little technical talent. But these are all well-known challenges for which the security field has solutions. The industry already faces more than 300,000 new malicious files per day; we are talking about zero-days and a matter of hours, and AI has long since been doing this work for the defenders too, primarily to identify unknown threats.
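The defensive division of labor mentioned above, signature matching for known files and an AI model for the unknowns, can be illustrated with a minimal triage sketch. Everything here is an assumption for illustration: the hash set, the threshold, and the entropy heuristic, which merely stands in for the kind of ML classifier a real security product would use.

```python
import hashlib
import math

# Known-bad SHA-256 hashes (hypothetical sample values, for illustration only).
KNOWN_BAD = {hashlib.sha256(b"malicious-sample").hexdigest()}

def suspicion_score(data: bytes) -> float:
    """Toy stand-in for an ML model: normalized byte entropy,
    a crude indicator of packed or encrypted payloads."""
    if not data:
        return 0.0
    counts = [0] * 256
    for b in data:
        counts[b] += 1
    n = len(data)
    entropy = -sum(c / n * math.log2(c / n) for c in counts if c)
    return entropy / 8.0  # 8 bits is the maximum per-byte entropy

def triage(data: bytes, threshold: float = 0.9) -> str:
    """Check the signature database first; hand unknowns to the model."""
    if hashlib.sha256(data).hexdigest() in KNOWN_BAD:
        return "known-bad"      # signature hit, no model needed
    if suspicion_score(data) >= threshold:
        return "suspicious"     # unknown and anomalous: escalate for analysis
    return "likely-benign"
```

The design point is the ordering: cheap, exact signature lookups handle the known mass of samples, and only files that fall through are scored by the (here, trivially simple) model that hunts for unknowns.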

Conclusion: Is ChatGPT changing the future of IT security?

ChatGPT did not initiate this development; it merely demonstrated to the public what had long been discussed in scientific circles. It can be assumed that the use of AI will play a role especially in the initial infection. Here Access-as-a-Service providers compete for customers, and AI can massively reduce their effort or improve the success rate of mass attacks. As a result, companies must assume that attackers will be able to circumvent their primary defenses even more often than before.

More at TrendMicro.com

 


About Trend Micro

As one of the world's leading providers of IT security, Trend Micro helps create a secure world for digital data exchange. With over 30 years of security expertise, global threat research, and constant innovation, Trend Micro offers protection for businesses, government agencies, and consumers. Thanks to our XGen™ security strategy, our solutions benefit from a cross-generational combination of defense techniques optimized for leading-edge environments. Networked threat information enables better and faster protection. Optimized for cloud workloads, endpoints, email, the IIoT and networks, our connected solutions provide centralized visibility across the entire enterprise for faster threat detection and response.
