
AI generates content of every kind for its users, including a wealth of fraudulent material such as deepfakes, cloned voices, and fake profiles. These methods can deceive users perfectly unless good defenses are in place. Only a combination of measures can combat AI-generated fraud: people, processes, and technology.
Since its emergence two years ago, generative AI has captured people's imaginations and permeated more and more areas of life, bringing numerous advantages, of course, but also new risks. Security researchers at Trend Micro continue to see AI as a key driver of criminal activity in 2025. AI-generated fraud is evolving rapidly and becoming increasingly difficult to detect. "The FBI also warns that fraudsters are increasingly using artificial intelligence to improve the quality and effectiveness of their online scams and to reduce the time and effort required to deceive their targets," notes Tobias Grabitz, PR & Communications Manager at Trend Micro.
The many faces of AI fraud
AI-based technologies such as generative AI are legitimate tools for content creation. In the hands of criminals, however, they facilitate crimes such as fraud and extortion. These malicious activities span the misuse of text, images, audio, and video.
Google Mandiant recently reported that North Korean IT professionals use AI to create personas and images, posing as non-North Koreans to gain employment with organizations around the world. Once successful, these individuals generate revenue for the North Korean regime, engage in cyberespionage, or even attempt to spread malware to steal information on corporate networks.
🔎 Protection at all levels: Interaction of different measures: people, processes and technology (Image: Trend Micro).
Among the most commonly used criminal methods are deepfakes. These AI-generated videos can depict people in a deceptively real way and manipulate their statements. They are used for disinformation campaigns, blackmail, identity theft, or even fraudulent financial transactions.
Voice clones are also popular: AI can be used to replicate voices. In vishing (voice phishing) attacks, fraudsters pose as family members, business partners, or government officials over the phone to obtain sensitive information or money.
AI-generated, automated fake social media profiles can be used for social engineering, spreading misinformation, or even financial fraud. Similarly, AI-powered chatbots can deceive users in fake online stores or on fraudulent websites to obtain personal or payment information.
Finally, AI can be used to create personalized and convincing phishing emails that are difficult to distinguish from genuine messages. These emails can now be tailored even more specifically to victims, making detection more difficult. In addition, AI tools are also useful for forging documents, for example for fake job applications or loan applications.
Protection at all levels
Combating AI-generated fraud requires a combination of measures: people, processes, and technology must all be taken into account.
People

Awareness and media literacy: Broad education about the potential for AI-based manipulation is crucial. Users must learn to critically handle information and question content. Traditional phishing awareness training, which focused on incorrect language, poor grammar and spelling, and suspicious URLs as indicators, is no longer sufficient. Modern attack simulations and training courses focusing on new threats and attacker behavior are recommended. This also includes promoting media literacy to better identify fake content.
The FBI also recommends:
- Be skeptical of unexpected requests and verify information directly with the source.
- Pay attention to subtle imperfections in images and videos (e.g., unnatural hand positions, facial irregularities, strange shadows, or unrealistic movements).
- When making calls, check for unnatural tone of voice or word choice to detect AI-generated voice cloning.
- Finally, it helps to limit the publicly accessible content of your own images/voice, set social media accounts to private and restrict followers to trusted people.
Zero-trust approach: The principle behind it is to trust no one by default and to consistently verify identities and content.
Processes

Verification needs to be rethought: Confirming financial transactions or contracts via callback calls to numbers taken from contact lists is obsolete in the face of AI-assisted vishing. Instead, it is recommended to maintain a predefined stakeholder list, require consent from multiple stakeholders, and use coded language known only within your own company. For individuals, the FBI recommends agreeing on a secret word or phrase with your family to verify a caller's authenticity.
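The recommended process of a predefined stakeholder list, multi-party consent, and an internal code word can be sketched in a few lines. This is an illustrative model only; the class, names, and code word below are hypothetical, not a Trend Micro product or FBI specification.

```python
from dataclasses import dataclass, field

@dataclass
class TransactionApproval:
    """Release a transaction only after enough distinct, pre-registered
    approvers confirm it, each using the company-internal code word."""
    approvers: set       # predefined stakeholder list
    code_word: str       # coded language agreed only inside the company
    required: int = 2    # multi-party consent threshold
    confirmed: set = field(default_factory=set)

    def confirm(self, who: str, word: str) -> bool:
        # Only known stakeholders with the correct code word count;
        # duplicate confirmations by the same person are ignored (set).
        if who in self.approvers and word == self.code_word:
            self.confirmed.add(who)
        return len(self.confirmed) >= self.required
```

An attacker who vishes one employee with a cloned voice still fails here: they are not on the approver list, do not know the code word, and cannot supply the second confirmation alone.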
A policy for GenAI: A company-wide policy on the use of generative AI can anchor and enforce these security measures.
Technology
Authentication methods: Stronger authentication methods, such as two-factor authentication, make unauthorized access to sensitive accounts more difficult.
Deepfake detection: Tools that can automatically detect deepfakes are still under development and do not always work reliably. Trend Micro is actively researching this area to develop effective detection methods.
AI-based fraud detection: AI can also be used to detect fraud by analyzing patterns in large amounts of data and identifying suspicious activity. For consumers, Trend Micro offers an AI-based solution of this kind, ScamCheck.
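The pattern-analysis idea can be illustrated with a deliberately simple statistical baseline: score each transaction by how far it deviates from the norm and flag strong outliers. Real fraud-detection systems use far richer features and models; this sketch (with hypothetical function names) only shows the principle.

```python
import statistics

def anomaly_scores(amounts: list[float]) -> list[float]:
    """Z-score each transaction: distance from the mean in standard deviations."""
    mean = statistics.fmean(amounts)
    stdev = statistics.pstdev(amounts)
    if stdev == 0:
        return [0.0] * len(amounts)  # all identical: nothing stands out
    return [(a - mean) / stdev for a in amounts]

def flag_suspicious(amounts: list[float], threshold: float = 3.0) -> list[int]:
    """Indices of transactions more than `threshold` standard deviations out."""
    return [i for i, z in enumerate(anomaly_scores(amounts)) if abs(z) > threshold]
```

In practice the same principle, applied by machine-learning models to many signals at once (amount, time, device, location), is what lets such systems surface suspicious activity in large data volumes.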
Advanced email security: Email security solutions should go beyond traditional gateways. Typography analysis and computer vision, which can "see" down to the level of individual pixels, can play an important role in detecting fake login pages, for example.
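One concrete heuristic that gateways apply alongside such visual analysis is lookalike-domain detection: catching sender or link domains that imitate a trusted brand through typos or visually similar characters. The sketch below is a minimal illustration with an assumed, far-from-complete homoglyph table; production tools use much larger confusable-character sets.

```python
def edit_distance(a: str, b: str) -> int:
    """Classic dynamic-programming Levenshtein distance."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,            # deletion
                           cur[j - 1] + 1,          # insertion
                           prev[j - 1] + (ca != cb)))  # substitution
        prev = cur
    return prev[-1]

# Tiny illustrative homoglyph table (assumption; real lists are far larger).
HOMOGLYPHS = {"0": "o", "1": "l", "rn": "m", "vv": "w"}

def normalize(domain: str) -> str:
    """Map visually confusable sequences to their lookalike target."""
    for fake, real in HOMOGLYPHS.items():
        domain = domain.replace(fake, real)
    return domain

def is_lookalike(domain: str, trusted: list[str], max_dist: int = 1) -> bool:
    d = normalize(domain.lower())
    for t in trusted:
        if domain.lower() == t:
            continue  # the genuine trusted domain itself is not a lookalike
        if d == t or edit_distance(d, t) <= max_dist:
            return True
    return False
```

A domain like "paypa1.com" normalizes to the trusted name and is flagged, while the genuine domain passes; combined with pixel-level page analysis, this helps expose fake login pages before credentials are entered.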
Conclusion
AI-generated fraud is a serious threat that affects us all. By combining technological solutions, clear processes, education, and vigilance, both consumers and businesses can better protect themselves against this growing threat. It's important to be aware of the risks and handle information critically. The constant evolution of AI requires us to continually adapt our defenses to stay one step ahead of fraudsters.
More at TrendMicro.com
About Trend Micro
As one of the world's leading providers of IT security, Trend Micro helps create a secure world for digital data exchange. With over 30 years of security expertise, global threat research, and constant innovation, Trend Micro offers protection for businesses, government agencies, and consumers. Thanks to our XGen™ security strategy, our solutions benefit from a cross-generational combination of defense techniques optimized for leading-edge environments. Networked threat information enables better and faster protection. Optimized for cloud workloads, endpoints, email, the IIoT, and networks, our connected solutions provide centralized visibility across the entire enterprise for faster threat detection and response.