Generative AI systems such as ChatGPT attract enormous attention, and thousands of users feed them data every day with little regard for data protection.
More and more companies are adopting these technologies for a wide variety of projects and processes. The tools are used primarily for gathering information, writing texts and translating.
Unfortunately, many users handle sensitive company data carelessly when they let the AI work for them. This practice can cause serious consequential damage: once entered, such data may be retrieved by any other user who simply asks the right questions. It can then be sold to other companies or to cyber criminals and misused for a variety of nefarious purposes.
An example of how this might work: a doctor enters a patient's name and details of their condition into ChatGPT so the tool can compose a letter to the patient's insurance company. If a third party later asks ChatGPT, "What health problem does [patient's name] have?", the chatbot could respond based on the information the doctor provided. These risks are just as great a threat as phishing attacks, because conclusions about entire companies and their business practices can be drawn from data about individuals.
Personal data is taboo
If employees are allowed to use AI tools, they must ensure that they do not enter any personal data or internal company information into their queries. They must likewise ensure that the answers they receive are free of personal data and company internals. All information should be independently verified to protect against legal claims and to avoid misuse.
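One practical safeguard, beyond the rules above, is to scrub obvious personal identifiers from text before it is pasted into an external AI tool. The following is a minimal illustrative sketch in Python (not a KnowBe4 product or a complete solution); the regex patterns and placeholder labels are assumptions for demonstration, and real PII detection, especially of names, requires dedicated tooling such as named-entity recognition.

```python
import re

# Illustrative patterns only: they catch common email and phone formats,
# but NOT names, addresses, or IDs. Real PII detection needs proper tooling.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE": re.compile(r"\+?\d[\d /-]{7,}\d"),
}

def redact(text: str) -> str:
    """Replace obvious personal identifiers with placeholders
    before the text is sent to an external AI tool."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

# Note: the name "Jane Roe" passes through untouched, which is exactly
# why regex-only redaction is not sufficient on its own.
print(redact("Contact Jane Roe at jane.roe@example.com or +49 170 1234567."))
```

A filter like this can run as a pre-submission step in an internal tool, but it complements rather than replaces employee awareness: structured identifiers are easy to catch, free-text details about a person are not.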
Security awareness training can help employees learn how to use ChatGPT and other generative AI tools responsibly and safely at work. They learn what information they may and may not disclose, so that neither they nor their companies run the risk of sensitive data being misused by unwanted third parties. Otherwise the consequences can be fines under the GDPR and the associated reputational damage, as well as cyber attacks through social engineering. In the latter case, attackers use the information shared with the tools for reconnaissance, to exploit weaknesses in IT systems, or to persuade employees via spear phishing to click on links embedded in emails.
More at KnowBe4.com
About KnowBe4
KnowBe4, provider of the world's largest platform for security awareness training and simulated phishing, is used by more than 60,000 companies around the world. Founded by IT and data security specialist Stu Sjouwerman, KnowBe4 helps organizations address the human element of security by raising awareness of ransomware, CEO fraud and other social engineering tactics through a new approach to security education. Kevin Mitnick, an internationally recognized cybersecurity specialist and KnowBe4's Chief Hacking Officer, helped develop the KnowBe4 training based on his well-documented social engineering tactics. Tens of thousands of organizations rely on KnowBe4 to mobilize their end users as the last line of defense.