Welcome Distrust: Cybersecurity and ChatGPT

Until now, organizations have largely kept their biggest weakness in the fight against cybercrime under control: employees have been trained and sensitized with some success. But AI-generated social engineering scams are bringing a new wave of attacks. Until defensive technology matures, humans will have to play watchdog, says Chester Wisniewski, Field CTO Applied Research at Sophos, who makes three predictions for the future.

Organizations have long grappled with one of their most critical cybersecurity components: their people. They counter this "human weakness" with continuous training and now often trust that users will recognize potential phishing attacks, for example by linguistic irregularities or errors in spelling and grammar.

AI no longer reveals itself through spelling mistakes

But AI-powered language and content generators like ChatGPT are well on their way to removing these telltale elements from scams, phishing attempts, and other social engineering attacks. Thanks to artificial intelligence, a fake email from "the supervisor" can sound more convincing than ever, and employees will undoubtedly have a harder time distinguishing fact from fiction. With these scams, the risks posed by AI language tools aren't technical. They're social, and that is what makes them frightening.

The specific risks of AI-generated phishing attacks

From writing blog posts and code to composing professional emails, AI language tools can do it all. They are adept at generating compelling content, and they are freakishly good at emulating human language patterns.

While we have not yet verified any abuse of these programs for social engineering content, we suspect it is imminent. ChatGPT has already been used to write malware, and we expect criminal actors to develop malicious applications for AI language tools soon. More to the point: AI-generated phishing content already poses unique social risks that undermine technical defenses.

AI phishing attacks: Can every email be a threat?

Take AI-generated malware, for example: existing security products can analyze the code in milliseconds and confidently assess it as safe or malicious. Crucially, technology can counter technology.

But the words and nuances of phishing messages created with artificial intelligence cannot simply be caught by machines; it is human recipients who must interpret these scam attempts. As AI tools produce ever more sophisticated and realistic content on demand, we can rely less and less on humans as part of the line of defense.

A new reality: technology vs. technology

This rapidly evolving situation requires a re-evaluation of the role of security training in combating social engineering attacks. Although there is no commercial defense against AI-generated fraud yet, technology will increasingly serve as the key tool for identifying machine-generated phishing attacks and protecting people from them. Humans will still play a role, but only a very small one.

Three predictions for the ChatGPT age

While we are still in the early stages of this new AI era, it is already clear that AI-generated phishing content will become a hugely important topic for corporate security strategy. The following three predictions of how ChatGPT could be used as a tool for cybercrime, and which security responses will emerge as a result, seem the most likely:

1. More complex user authentication will become necessary.

Machines are now very good at sounding like humans, so companies need to offer new options for authenticating users. Every communication that concerns access to company information, systems or financial transactions must require stronger forms of user authentication. Phone-call verification will probably become the most common method of verifying these kinds of emails or messages. Companies could also use a secret daily password to identify themselves to other organizations or individuals, as sketched below.

Some financial institutions already operate this way. Whatever form of verification is used, it is crucial that attackers who have compromised user credentials cannot easily abuse the method as well.
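The article does not prescribe how such a daily password would work in practice, but one simple scheme is to derive it from a pre-shared secret rather than distribute a list of codes. The following minimal Python sketch is purely illustrative (the function names and the HMAC-over-date construction are my own assumption, not a Sophos recommendation): both parties compute the same short code for the current UTC date and exchange it verbally, so the secret itself is never transmitted.

```python
import hashlib
import hmac
from datetime import datetime, timezone

def daily_password(shared_secret: bytes) -> str:
    """Derive today's short code from a pre-shared secret via HMAC-SHA256."""
    today = datetime.now(timezone.utc).date().isoformat()
    digest = hmac.new(shared_secret, today.encode(), hashlib.sha256)
    # Truncate to 8 hex characters so the code is easy to read aloud.
    return digest.hexdigest()[:8]

def verify_caller(shared_secret: bytes, claimed_code: str) -> bool:
    """Compare a caller's claimed code against today's expected value."""
    return hmac.compare_digest(daily_password(shared_secret), claimed_code)

if __name__ == "__main__":
    secret = b"distributed-out-of-band"  # illustrative placeholder only
    print("Today's code:", daily_password(secret))
    print("Caller verified:", verify_caller(secret, daily_password(secret)))
```

Because the code changes every day and is derived on demand rather than stored as a list, a leaked code expires quickly; the pre-shared secret, of course, still has to be handled carefully.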

As AI technologies evolve and spread at breakneck speed, authentication methods need to keep up. For example, in January 2023, Microsoft unveiled VALL-E, a new AI technology that can clone a human voice from just three seconds of audio. So in the near future, a phone call alone will probably no longer suffice as an authentication method either.

2. Legitimate use will water down security warnings: all or nothing

Many users already use ChatGPT to quickly produce professional or promotional content. This legitimate use of AI language tools complicates the security response by making it harder to identify the criminal cases.

An example: not all emails containing ChatGPT-generated text are malicious, so we can't block them all outright. This, to some degree, waters down our security response. As a countermeasure, security vendors could develop "trust points" or other indicators that assess the likelihood that a message or email, although AI-generated, is still trustworthy. They could also train AI models to recognize text created with artificial intelligence and place a "caution" banner on user-facing systems. In certain cases, this technology could filter such messages out of employees' inboxes entirely.
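To make the "trust points" idea concrete, here is a minimal, hypothetical sketch of such a triage pipeline in Python. The detector is stubbed out (a real system would call a trained classifier), and the thresholds, names and scoring formula are illustrative assumptions, not any existing product's API:

```python
from dataclasses import dataclass

# Illustrative thresholds; a real product would tune these empirically.
CAUTION_THRESHOLD = 0.7
QUARANTINE_THRESHOLD = 0.95

@dataclass
class Message:
    sender: str
    body: str

def ai_likelihood(body: str) -> float:
    """Stub for an AI-text detector; a real system would call a classifier."""
    return 0.8  # pretend the detector rates this text as likely AI-generated

def sender_trust(sender: str, known_senders: set[str]) -> float:
    """Naive reputation signal: known correspondents earn some trust."""
    return 0.5 if sender in known_senders else 0.0

def triage(msg: Message, known_senders: set[str]) -> str:
    """Combine detector score and sender reputation into a disposition."""
    score = ai_likelihood(msg.body) - sender_trust(msg.sender, known_senders)
    if score >= QUARANTINE_THRESHOLD:
        return "quarantine"  # filtered out of the inbox entirely
    if score >= CAUTION_THRESHOLD:
        return "banner"      # delivered with a "caution" banner attached
    return "deliver"

if __name__ == "__main__":
    msg = Message("unknown@example.com", "Urgent: please wire the funds today.")
    print(triage(msg, known_senders={"colleague@example.com"}))  # -> banner
```

The point of subtracting a reputation term is exactly the all-or-nothing problem described above: an AI-likelihood score alone would flag legitimate ChatGPT-assisted mail just as loudly as a scam.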

3. AI-generated fraud will become more interactive

I expect that criminals will use AI language programs ever more interactively in the future, going beyond producing phishing emails and one-off messages. Scammers could use this technology to manipulate individual victims via real-time chat.

Messaging services like WhatsApp, Signal or Telegram are end-to-end encrypted, so these platforms cannot filter fraudulent or AI-generated messages in private channels. This could make them very attractive to scammers who pursue their victims there.

This development could in turn lead organizations to reconfigure their security solutions: it may become necessary to run filtering technologies directly on individual employee devices.

Until new technologies take hold, one rule applies: distrust all communication

AI language tools raise essential questions for the future of cybersecurity. Figuring out what is real is getting harder, and future developments won't make it any easier. Technology will be the primary weapon against AI-driven cybercrime. But for now, employees have to step in and learn to distrust all communication. In the age of ChatGPT, that's not an overreaction; it's a critical response.

More at Sophos.com

 


About Sophos

More than 100 million users in 150 countries trust Sophos. We offer the best protection against complex IT threats and data loss. Our comprehensive security solutions are easy to deploy, use and manage, and offer the lowest total cost of ownership in the industry. Sophos provides award-winning encryption as well as security solutions for endpoints, networks, mobile devices, email and the web, backed by SophosLabs, our worldwide network of in-house analysis centers. Sophos is headquartered in Boston, USA, and Oxford, UK.


 
