
Artificial intelligence will also dominate cybersecurity in 2025. One area where it has long been used is fake images created with Midjourney and similar tools. But such static deepfakes are already yesterday's news: today, real-time deepfakes are becoming the new phishing scam.
A true milestone in deepfake history was arguably the image of the Pope in a white puffer jacket, published in March 2023, which many people, even tech-savvy ones, believed to be genuine. But the technology has also been applied to moving images for years and has now reached a certain maturity. Anyone who uses social media has come across videos of TV presenters seemingly promoting investment schemes or politicians giving apparently astonishing speeches.
Deepfakes are now even more sophisticated
But all of this is just deepfake 1.0. Like most technologies, fake imagery has evolved, and cybercriminals are increasingly turning to real-time deepfakes. At the beginning of the year, attackers tricked a finance employee of a multinational company into transferring $25 million after he took part in a video conference with what appeared to be his CFO and other colleagues. In reality, everyone on the call except the victim was a deepfake. It was precisely this video conference that dispelled the Hong Kong employee's last doubts: everyone present looked and sounded like colleagues he knew.
A new dimension of insider threats
The Varonis Incident Response Team is increasingly investigating incidents in which artificial intelligence is at the root of the attack. Recently, state-sponsored attackers from the Asia-Pacific region were discovered attempting to plant a spy as an employee inside a company. The supposed applicant impersonated a different, real person throughout the interview process: thanks to a deepfake, the spy looked exactly like the (real) LinkedIn profile, and even a Google search of the applicant's name or a glance at their Instagram account turned up images matching the person on the Zoom call. The attack was ultimately uncovered through an alert and escalation from the Varonis Managed Data Detection and Response team: the individual had logged in from a location outside the company's usual workspace and had accessed source code in an attempt to exfiltrate it, which the security experts prevented.
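In spirit, this kind of detection boils down to comparing each login against a per-user baseline of historical behavior and escalating outliers. The following Python sketch is purely illustrative; the event fields, threshold, and function names are assumptions made for this example and do not reflect Varonis's actual detection logic.

```python
# Illustrative location-anomaly rule: flag logins from places a user
# has rarely or never been seen before. Hypothetical example only.
from collections import Counter
from dataclasses import dataclass

@dataclass
class LoginEvent:
    user: str
    country: str  # e.g. resolved from the source IP via a GeoIP lookup
    city: str

def build_baseline(history: list[LoginEvent]) -> dict[str, Counter]:
    """Count how often each user has logged in from each location."""
    baseline: dict[str, Counter] = {}
    for event in history:
        baseline.setdefault(event.user, Counter())[(event.country, event.city)] += 1
    return baseline

def is_anomalous(event: LoginEvent, baseline: dict[str, Counter],
                 min_seen: int = 3) -> bool:
    """Flag a login if this user has been seen at this location fewer than min_seen times."""
    seen = baseline.get(event.user, Counter())[(event.country, event.city)]
    return seen < min_seen

# Example: a long history of logins from one office, then an outlier.
history = [LoginEvent("jdoe", "US", "Austin")] * 40
baseline = build_baseline(history)

suspicious = LoginEvent("jdoe", "SG", "Singapore")
if is_anomalous(suspicious, baseline):
    print(f"ALERT: {suspicious.user} logged in from unusual location "
          f"{suspicious.city}, {suspicious.country}; escalate to IR team")
```

Real detection platforms correlate far more signals (device, time of day, data accessed, privilege changes), but the underlying principle of a per-user baseline is the same.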
So far, we have primarily focused on fake phishing emails and videos designed to sway public opinion. We now need to broaden our scope to include persistent attackers who sit through seven rounds of virtual interviews as fake people.
Who is particularly affected?
Highly visible employees make ideal templates for these fakes. A company's LinkedIn rock stars, who post frequently on social media and thus put themselves and their faces in the spotlight, are therefore at particular risk. What is great from a marketing perspective can unfortunately undermine security.
The groups most likely to be targeted are those who have long been in the attackers' crosshairs: employees with financial authority or the ability to transfer large sums of money at short notice. These individuals are still identified through social engineering, although AI can assist in tracking them down. And as with all attacks aimed at extracting large payments, the victim must be put into an emotionally charged situation. With ransomware, it is perceived time pressure; in the case of the Hong Kong finance employee, it was a Zoom conference in which six supposed colleagues pushed him toward a decision.
How can you protect yourself from real-time deepfakes?
Protecting yourself against deepfakes is relatively difficult. Our eyes can still spot many AI-generated images and videos, but the technology is advancing faster than we can keep up. Perhaps at some point there will be reliable AI solutions for detecting AI-generated content. Until then, we can fall back on proven practices from the offline world, such as a simple callback: as with banking transactions, we should maintain a healthy degree of suspicion and, if in doubt, verify the request over a known channel.
The lesson from this espionage attempt should be that job interviews must at some point take place in person, not only virtually. Finally, the FBI recommends agreeing on safe words, which provide an additional layer of verification.
Expert commentary by Volker Sommer, Regional Sales Director DACH & EE at Varonis Systems.
About Varonis
Since its founding in 2005, Varonis has taken a different approach than most IT security providers by placing company data, stored both on-premises and in the cloud, at the center of its security strategy: sensitive files and emails; confidential customer, patient, and employee data; financial data; strategy and product plans; and other intellectual property. The Varonis Data Security Platform (DSP) detects insider threats and cyberattacks by analyzing data, account activity, telemetry, and user behavior; prevents or limits data security breaches by locking down sensitive, regulated, and stale data; and maintains a secure state of systems through efficient automation.