Generative AI is changing attacks, making them significantly more sophisticated than in the past. Countering them requires a new defense strategy, ideally built on self-learning AI that recognizes entirely new behavioral patterns, learns from them, and responds immediately. A Darktrace survey of 6,700 employees examined how people in companies deal with email.
A recent case shows what generative AI can do: the collapse of Silicon Valley Bank (SVB) and the resulting banking crisis. Attackers immediately exploited the situation to forge highly sensitive communications, intercepting legitimate messages that instructed recipients to update their bank details for payroll. This specific incident fits the broader picture: 62% of employees in financial services companies have noticed an increase in fraudulent emails and text messages in the last six months. Generative AI refers to any kind of artificial intelligence (AI) that creates new text, images, video, audio, code or synthetic data.
Attacker AI vs Defender AI
Generative AI therefore requires a new defense strategy based on self-learning AI. Unlike other email security tools, Darktrace's email AI is not trained on what "attacks" look like; instead, it learns the normal behavior patterns within each individual company. With a deep understanding of the organization and of how each employee interacts with their inbox, the AI can determine for every email whether it is suspicious or legitimate. Emails from the CEO in particular are better protected this way.
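The idea of learning "normal" per-sender behavior instead of attack signatures can be illustrated with a simplified sketch. This is an illustrative toy model, not Darktrace's actual implementation; the class name, features and scoring rule are assumptions made up for the example. It remembers which feature values each sender has exhibited and scores new mail by how many features are novel for that sender.

```python
from collections import defaultdict

class SenderBaseline:
    """Toy behavioral model: remembers which feature values (sending
    domain, link usage, etc.) each sender has shown in the past, and
    scores new mail by the fraction of features that are novel."""

    def __init__(self):
        # sender -> feature name -> set of values seen so far
        self.history = defaultdict(lambda: defaultdict(set))

    def observe(self, sender, features):
        # Record the feature values seen for this sender as "normal".
        for name, value in features.items():
            self.history[sender][name].add(value)

    def anomaly_score(self, sender, features):
        # Fraction of features whose value this sender has never shown.
        # A completely unknown sender scores 1.0 (everything is novel).
        seen = self.history.get(sender)
        if not seen:
            return 1.0
        novel = sum(1 for name, value in features.items()
                    if value not in seen.get(name, set()))
        return novel / len(features)

baseline = SenderBaseline()
baseline.observe("ceo@example.com", {"domain": "example.com", "has_link": False})
baseline.observe("ceo@example.com", {"domain": "example.com", "has_link": True})

# A lookalike-domain mail asking to click a link deviates from the baseline.
score = baseline.anomaly_score("ceo@example.com",
                               {"domain": "examp1e.com", "has_link": True})
print(score)  # 0.5: the domain is novel, but link usage was seen before
```

In a real system the features would be far richer (writing style, sending times, recipient graphs) and the scoring statistical rather than set-based, but the principle is the same: deviation from a learned baseline, not a match against known attacks, triggers the alert.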
New data from Darktrace shows that email security solutions, including native, cloud-based and static AI tools, take an average of thirteen days to detect an attack on a victim, leaving companies unprotected for almost two weeks if they rely solely on these tools. Social engineering, particularly malicious email campaigns like the banking example above, remains the primary cause of an organization's vulnerability to attack. Widespread access to generative AI tools like ChatGPT, along with the increasing sophistication of state actors, means email scams are more convincing than ever.
More results of the survey
- 87% of workers worldwide are concerned that hackers can use generative AI to create fraudulent emails that are indistinguishable from real communications.
- The top three communication characteristics that lead employees to suspect an email is a phishing attack are: a request to click a link or open an attachment (72%), poor spelling and grammar (62%), and an unknown sender or unexpected content (61%).
- A quarter (25%) of employees have fallen for a fraudulent email or SMS in the past.
- 65% of employees have noticed an increase in the frequency of fraudulent emails and text messages in the last six months.
- 87% of employees are concerned about the amount of personal information available about them online that could be used for phishing and other email scams.
- In 87% of businesses, spam filters incorrectly block important legitimate emails from reaching inboxes.
- More than one in three respondents (36%) have tried ChatGPT or other generative AI chatbots.
These numbers come from a global survey of 6,711 employees in the UK, US, France, Germany, Australia and the Netherlands, conducted by Darktrace in March 2023 in partnership with Censuswide. The aim was to gain insight into human behavior around email and to better understand how employees worldwide react to potential security threats, how they perceive email security, and which modern technologies they use to counter threats.
More at Darktrace.com
About Darktrace
Darktrace, a global leader in artificial intelligence for cybersecurity, protects businesses and organizations from cyberattacks with AI technology. Darktrace's technology registers atypical traffic patterns that indicate possible threats, recognizing novel and previously unknown attack methods that other security systems overlook.