Compliance with AI in the company

The almost limitless possibilities of LLMs (Large Language Models) are incredibly exciting. Each month there are new uses for these tools that are not always compatible with corporate compliance policies.

The possibilities range from quickly and inexpensively creating thumbnails for blog posts, to imitating the voice of someone who fundamentally opposes and criticizes a blog post, to coherently translating blog posts into other languages with full understanding of the content. Even beyond the world of blog posts, this nascent technology holds tremendous potential. But potential also means nobody knows what the technology will end up doing, and that creates serious, tangible business risks.

Recognizing Risks

When it comes to LLMs, many people first think of OpenAI's ChatGPT, the most accessible and easiest-to-use tool available today. ChatGPT lets anyone ask questions through a web browser and generates quick responses. For the average user that is great, but for a business this ease of use raises three serious problems: all data sent to ChatGPT is stored by OpenAI for quality and tracking purposes, the source of each response is unknown, and the response may be entirely fabricated.

OpenAI tracks user input to improve its product and to prevent possible abuse of the system. This is perfectly reasonable behavior on the part of OpenAI, but there is no way for users to verify how carefully OpenAI handles this data. Additionally, OpenAI cannot really control what data is sent to ChatGPT, so it ends up storing a lot of information that should never be stored at all. For example, it was recently revealed that Samsung employees used ChatGPT to troubleshoot proprietary software and summarize internal meeting notes.

Uncertain Sources

The second problem is that ChatGPT answers requests without citing sources. ChatGPT makes claims based on its training data, and users have to take them at face value. That material could be copyrighted, and users would not know until they received a letter from attorneys alleging copyright infringement. Getty Images has sued the maker of a generative AI image tool for copyright infringement after the tool was found to produce images bearing a Getty Images watermark. It is not unreasonable to argue that end users of such images could also be held liable. Code generation is another area of concern; there have been no concrete examples yet, but they seem inevitable.

Invented Answers

The biggest problem with ChatGPT, however, is that LLMs can "hallucinate," the industry's jargon for confidently stating blatant falsehoods. Since ChatGPT offers no transparency into how its answers come about, users must critically analyze each answer to decide whether it is true, because ChatGPT and its ilk always answer with absolute certainty. The consequences range from Google's embarrassing launch demo, to a professor flunking all of his students after ChatGPT falsely claimed to have written their papers, to the most egregious example: a lawyer who used ChatGPT to draft a court brief, for which ChatGPT invented six completely bogus citations. Google's misstep wiped roughly $100 billion off its market valuation, the professor damaged his credibility, and the lawyer's multimillion-dollar lawsuit was dismissed in court. These are extremely serious consequences that could have been avoided.

Ensuring Compliance

Companies are responding to these risks with policies that prohibit the use of OpenAI tools without permission. However, compliance with these policies is difficult to track, because the traffic is encrypted in the browser and no installable software is required.

Vectra AI can detect this activity on the network using network sensors. Vectra NDR is designed to monitor all network traffic across the enterprise, with carefully placed sensors watching traffic entering and leaving the network as well as traffic within it. This depth of analysis is the foundation for the AI-driven detection framework and also yields valuable compliance insights, summarized in the ChatGPT Usage Dashboard. The dashboard, which Vectra makes available to all platform customers free of charge, shows the hosts in the environment that are actively interacting with OpenAI by tracking each host's DNS queries to OpenAI servers. Compliance officers can not only quickly see a list of people who have OpenAI accounts or have shown an interest, but also actively monitor exactly who is using the system and how often.
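The core idea behind DNS-based usage tracking can be sketched in a few lines. The following is a minimal illustration, not Vectra's actual implementation: the domain list, the log format, and the host names are all hypothetical assumptions made for the example.

```python
from collections import Counter

# Illustrative list of zones to watch; a real deployment would maintain
# a curated, up-to-date set of provider domains.
OPENAI_DOMAINS = ("openai.com", "chatgpt.com", "oaistatic.com")

def is_openai_query(domain: str) -> bool:
    """True if the queried domain belongs to one of the monitored zones."""
    domain = domain.lower().rstrip(".")
    return any(domain == z or domain.endswith("." + z) for z in OPENAI_DOMAINS)

def usage_by_host(dns_log):
    """Count OpenAI-related DNS queries per source host.

    dns_log is an iterable of (source_host, queried_domain) pairs,
    e.g. parsed from DNS sensor output.
    """
    counts = Counter()
    for host, domain in dns_log:
        if is_openai_query(domain):
            counts[host] += 1
    return counts

# Hypothetical log entries for demonstration.
log = [
    ("laptop-42", "chat.openai.com"),
    ("laptop-42", "example.org"),
    ("desk-07", "api.openai.com"),
    ("laptop-42", "cdn.oaistatic.com"),
]
print(usage_by_host(log))  # laptop-42 seen twice, desk-07 once
```

Counting per source host rather than per user account is exactly why this works without any software on the endpoint: the sensor only needs to see the DNS traffic.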

Visualizing Access

The dashboard uses Vectra's patented Host ID mapping technology to track these users in depth. Even if a machine's IP address changes or a notebook joins a new network, Vectra tracks it as the same machine, so the dashboard shows exactly how often a device accesses ChatGPT and also indicates who Vectra believes is the likely owner of that machine. With this dashboard, security officers not only see which hosts are using ChatGPT, but also know whom to contact about it, and they have that information in minutes, not hours.
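The general idea of identity that survives IP changes can be illustrated with a toy registry keyed on a durable attribute instead of the IP address. This is emphatically not Vectra's patented Host ID algorithm; the machine names, owner labels, and counters below are invented for the sketch.

```python
class HostRegistry:
    """Toy registry: one identity per machine, regardless of its current IP."""

    def __init__(self):
        self._by_name = {}  # durable machine name -> host record

    def observe(self, machine_name, ip, owner=None):
        """Record a sighting; the same machine keeps one identity as IPs change."""
        rec = self._by_name.setdefault(
            machine_name, {"ips": set(), "owner": None, "chatgpt_hits": 0}
        )
        rec["ips"].add(ip)
        if owner:
            rec["owner"] = owner
        return rec

    def record_chatgpt_access(self, machine_name):
        """Attribute a ChatGPT access to the stable identity, not the IP."""
        self._by_name[machine_name]["chatgpt_hits"] += 1

reg = HostRegistry()
reg.observe("LAPTOP-ANNA", "10.0.0.5", owner="anna")
reg.observe("LAPTOP-ANNA", "192.168.1.20")  # new network, same machine
reg.record_chatgpt_access("LAPTOP-ANNA")
reg.record_chatgpt_access("LAPTOP-ANNA")

rec = reg.observe("LAPTOP-ANNA", "192.168.1.20")
print(rec["owner"], rec["chatgpt_hits"], sorted(rec["ips"]))
```

Keying on a durable attribute is what lets both accesses above land on one record even though they arrived from two different networks; an IP-keyed table would have split them into two apparent hosts.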

This is just one example of how the Vectra platform provides deep insight into what is happening in the organization and how it can support compliance by tracking active compliance issues. The same applies to the Certificate Expiry Dashboard, which monitors expiring certificates actively in use in the company, and the Azure AD Chaos Dashboard, which monitors MFA bypasses. Vectra can give compliance officers valuable insight not only into what is misconfigured in the organization, but into which security risks are actively present.

More at Vectra.com

About Vectra

Vectra is a leading provider of threat detection and response for hybrid and multi-cloud enterprises. The Vectra platform uses AI to quickly detect threats in the public cloud, identity and SaaS applications, and data centers. Only Vectra optimizes its AI to recognize attacker methods, the TTPs (Tactics, Techniques and Procedures) that underlie all attacks, rather than simply alerting on what is merely "different".
