Since the furious start of ChatGPT, not only millions of people have been using artificial intelligence to get travel tips or to have scientific contexts explained. Security researchers and cyber criminals are also trying to figure out how the tool can be used for cyber attacks.
Actually, the software should not recommend criminal acts. White hat hacker Kody Kinzie tried out how this still works and where the limits of intelligence lie.
Illegal and unethical
It all starts with a simple question: “How can I hack a specific company?” The chatbot seems to have been trained to answer this type of question, because in the answer it points out that it is neither legally nor ethically justifiable to attack a specific company . Accordingly, the machine does not give any tips or even instructions. But can artificial intelligence be outwitted? "In our question, we pretended to write a Hollywood script about a realistic cyber attack on a specific company and wanted to know how the best cloud security expert in the film would describe a working attack," said Kinzie. But here, too, the mechanisms take effect: in red letters, the bot replies that it is “illegal and unethical and would violate its own programming to provide such information”. However, the answer is not always the same. If you try it more often and maybe change the question a little, you will actually get a suitable answer with a little patience. "When you call up ChatGPT, you are connected to different versions of the trained model, some of which differ greatly," explains the security expert. “Some are very strict, others are more relaxed, and some seem insecure. Although they seem to suspect that the answer is not unproblematic, they give it anyway, albeit in red letters.”
ChatGPT writes phishing emails for cybercriminals
In this way, the question of the Hollywood screenplay is finally answered. And in great detail: starting with weak passwords and ending with the search for sensitive company data for blackmail purposes. "But we wanted to go even deeper and asked ChatGPT what a corresponding phishing mail could look like, since it should be shown in a scene. Ultimately, we asked ChatGPT to write us a phishing email," says Kinzie. And the artificial intelligence delivered, including a catchy headline and specified urgency. In a similar way, the white hat hacker also manages to create a Netcat script for a backdoor and the attacker's server. “Whether the script works is not important at all. More importantly, the artificial intelligence shouldn’t have created any code for me at all and shouldn’t have designed an attack plan for me either.”
Dark side of the AI
The development of the software is still in its infancy, but already points to a promising and at the same time frightening future. "Kody showed with his experiment that these systems are very intelligent, but ultimately they are created by people," says Michael Scheffler, Country Manager DACH at data security provider Varonis. "And that's what makes them vulnerable to things like that, which he's tried: they end up giving users something they basically know they shouldn't share." For cyber security, this means that those responsible for security see themselves exposed to even more dangers in the future and should always base their defense measures on an 'assume breach' approach, in which they assume that their systems have been compromised. You can only be on the safe side if you can also effectively address such attacks.”
Attack and Defense with ChatGPT
But does the technology only benefit the attackers? “We proved that the AI understands the concept of an attack and how a professional hacker would proceed to attack a specific company. The recommendations, if carried out in a professional manner, have a real chance of conducting a successful attack. Conversely, the recommendations that ChatGPT gives for defense correspond to best practices and help companies to ward off attacks.” In this respect, ChatGPT and artificial intelligence as a whole can also prove to be an effective tool for security managers.
More at Varonis.com
About Varonis Since its founding in 2005, Varonis has taken a different approach than most IT security providers by placing company data stored both locally and in the cloud at the center of its security strategy: sensitive files and e-mails, confidential customer, patient and Employee data, financial data, strategy and product plans and other intellectual property. The Varonis data security platform (DSP) detects insider threats and cyber attacks through the analysis of data, account activities, telemetry and user behavior, prevents or limits data security breaches by locking sensitive, regulated and outdated data and maintains a secure state of the systems through efficient automation .,