According to security researchers in Israel, hacker gangs are already using artificial intelligence (AI) to train their new members. AI also helps them improve malware, automate attacks, and generally run their criminal operations more smoothly.
The management consultants at McKinsey, meanwhile, found in a survey for the year 2022 that 50 percent of the companies surveyed use AI to support work in at least one business area, up from just 20 percent in 2017. According to Forbes magazine, as many as 60 percent of the entrepreneurs surveyed believe that AI will increase their company's productivity in general; 64 percent expect a rise in operational productivity, while 42 percent expect an improvement in work processes.
This juxtaposition neatly shows how one and the same technology can be used for both good and bad purposes. The question currently worrying the public most: can AI learn to distinguish good applications from bad ones and thereby prevent its own misuse?
ChatGPT and Google Bard help hackers
- Automated cyber attacks: Since both tools came on the market, our security researchers have observed a proliferation of bots that remotely control infected computers and other automated systems. These are ideal for DDoS attacks, which can paralyze a server or an entire system with an enormous number of access requests. Partly for this reason, IT attacks increased by 38 percent worldwide last year.
- Help creating malware, phishing e-mails, deepfakes and designing cyber attacks: Hackers recognized early on that ChatGPT could generate command lines for malware or write phishing e-mails, the latter often in better English or German than the criminals could manage themselves. Because ChatGPT and Google Bard learn from every input, they can also produce increasingly sophisticated content, such as images, videos, or even audio recordings. This is where the danger of deepfakes comes in: fabricated videos that show a person looking and speaking exactly as we are used to, even though the footage is fake. Modern technology makes it possible to falsify facial expressions, gestures and voice. Barack Obama, Joe Biden, Volodymyr Zelenskyy and Vladimir Putin have already fallen victim to such deepfakes.
- Petty criminals without much programming skill can become hackers: With the help of such AI-driven chatbots, hacker gangs can, as mentioned above, train their new members, and people with criminal intent can quickly cobble together and launch a small IT attack. The result is a flood of mini-attacks, so to speak, and of new hacker groups.
To prevent this abuse, ChatGPT's maker, the company OpenAI, installed a number of safeguards and excluded certain countries from use, but none of it lasted long: first the request blocks were circumvented through cleverly worded prompts, then the country restrictions were bypassed, then premium accounts were stolen and sold on a large scale, and now even clones of ChatGPT are being offered on the darknet in the form of API interfaces.
One way to make the AI itself less vulnerable to abuse is through its training. Since engineers agree that knowledge, once learned, is practically impossible to remove from an AI, a catalog of ethics should be taught to the AI from the start so that it follows certain rules on its own. Laws that simply prohibit certain actions could complement this approach.
More at Checkpoint.com
About Check Point: Check Point Software Technologies GmbH (www.checkpoint.com/de) is a leading provider of cybersecurity solutions for public administrations and companies worldwide. The solutions protect customers from cyberattacks with an industry-leading detection rate for malware, ransomware and other types of attacks. Check Point offers a multi-level security architecture that protects company information in cloud environments, networks and on mobile devices, as well as the most comprehensive and intuitive "one point of control" security management system. Check Point protects over 100,000 businesses of all sizes.