The OpenAI chatbot ChatGPT demonstrates how directly artificial intelligence and machine learning can shape everyday life. Advanced IT users will put such tools to work for their own purposes. And so, unfortunately, will cybercriminals.
OpenAI's ChatGPT model is based on unsupervised learning: with this ML approach, the model is fed a large corpus of unlabeled data. Its vast body of material from books, articles and websites draws on sources from before 2021 and has no connection to the live internet yet. That is already enough to learn the structures of natural language and to deliver answers to questions that sound deceptively human.
Hackers also benefit from AI
Hackers can benefit from such help as well, even if concrete predictions about its use still call for caution. Five example cases already outline what may be possible in the future:
Automated phishing
Hackers can use ChatGPT to take automated phishing attacks to a new level. Spelling and grammar errors, previously an obvious tell for such attacks, can now be eliminated. Given that the authors of many phishing e-mails speak neither English nor German as their mother tongue, this should raise the quality of such e-mails across the board. ChatGPT writes messages in flawless grammar, and it can even produce code to automate the writing process. The circle of potential victims who can be fooled is expanding by leaps and bounds.
Impersonating identities
ChatGPT can imitate real people or organizations in a deceptively convincing way, which will encourage identity theft and other forms of fraud. Hackers can use the chatbot to send messages that appear to come from a friend or colleague, asking for sensitive information and thereby gaining access to someone else's user account.
Social Engineering
ChatGPT and other AI chatbots can make social engineering attacks more targeted. Hackers can manipulate their targets with even more personalized and seemingly legitimate conversations. Until now, this degree of familiarity was not realistic without personal contact; that changes at the latest once the tool draws on information from the internet. The bot's responses are so human-like that they are hard to tell apart from a real person's.
Fake support
Companies will use advanced AI chatbots for their customer contact. Hackers will free-ride on this trend, for example by imitating a bank's online presence in a deceptively convincing way, complete with a seemingly human customer service chat. Here, too, the goal is to capture sensitive information. Banking malware has pioneered new attack methods in the past.
Accelerated attack development – right down to code help
Forbes has already reported on the first cases: hackers apparently use the bot to write malicious code that encrypts and exfiltrates data. ChatGPT is familiar with the most popular programming languages and could shorten the time needed to develop attacks. A hacker with the necessary expertise in network vulnerabilities can use the bot to fill the gaps in his own coding skills more quickly. Reverse engineering a past cyberattack can also become easier. AI can help modify and improve code and tailor attacks more precisely to a chosen target.
ChatGPT cannot yet write error-free code, but it can help. Worryingly, its security measures are not necessarily effective: the built-in safeguard can be bypassed. This safeguard is meant to prevent the model from producing violent, discriminatory or sexually explicit text, and to flag questions that clearly seek answers for nefarious purposes. The chatbot still declines direct requests to write Python code exploiting a Log4j vulnerability. A knowledgeable user, however, can circumvent the protection mechanism, for example by asking the same question in a different language, and will find clues on how to phrase the right questions.
A new level in the future
AI-supported attacks will reach a new level in the future, and victims will need to be more suspicious. Classic phishing e-mails, the inconspicuous prelude to many if not most serious attacks, are becoming ever more deceptive. Only caution and greater data minimization on the internet and in social media can help against this. Cyber fraud built on a hijacked identity becomes more convincing the more information the hackers have about the supposed sender of a malicious message. Chatbots that act like new search engines give existing attack mechanisms a new quality.
More at Bitdefender.com
About Bitdefender
Bitdefender is a leading global provider of cybersecurity solutions and antivirus software, protecting over 500 million systems in more than 150 countries. Since its founding in 2001, the company's innovations have consistently delivered excellent security products and intelligent protection for devices, networks and cloud services for consumers and companies. As the supplier of choice, Bitdefender technology is found in 38 percent of the security solutions deployed worldwide and is trusted and recognized by industry experts, manufacturers and customers alike. www.bitdefender.de