The AI Act and its consequences for data protection


The AI Act is the first law regulating AI and gives manufacturers of AI applications between six months and three years to adapt to the new rules. Anyone who wants to use AI in sensitive areas will have to strictly control AI data and its quality and create transparency - classic core disciplines of data management.

The EU has done pioneering work and, with the AI Act, regulated what is currently the most dynamic and important branch of the data industry, just as it did with the GDPR in April 2016 and the Digital Operational Resilience Act (DORA) in January of this year. Many of the new tasks from the AI Act will be familiar to data protection officers and compliance officers from the GDPR. The law defines AI and establishes three risk levels: minimal, high and unacceptable. AI applications that companies want to use in healthcare, education and critical infrastructure fall under the highest permitted category, "high risk". Those in the "unacceptable" category are banned because, for example, they could threaten people's safety, livelihoods and rights.

Assess risk

These AI systems must, by definition, be trustworthy, transparent and accountable. Operators must conduct risk assessments, use high-quality data, and document their technical and ethical decisions. They must also record how their systems are performing and inform users about the nature and purpose of their systems. In addition, AI systems should be supervised by humans and allow human intervention. They must be highly robust and achieve a high level of cybersecurity.

Companies now need clear orientation. They want to exploit the great potential of this technology and at the same time be prepared for the upcoming details of the regulation. Five clear recommendations show how companies can approach this without creating legal risks or getting in the way of users - and how they can position themselves to implement the AI Act fully without turning their IT upside down:

  • Make AI trustworthy: To achieve this, the AI must be kept fully under control. The only way to get there is to closely govern the data and the data flows into and out of the AI. This close control is similar to the GDPR's requirements for personal data. Companies should keep this compliance in mind whenever they use or develop AI themselves. Anyone who wants to use AI in a GDPR- and AI Act-compliant manner should seek the advice of a data protection expert before introducing it.
  • Know the data exactly: Much of the law focuses on reporting on the content used to train the AI - the data sets that gave it the knowledge to perform. Companies and their employees need to know exactly what data they are feeding the AI and what value this data has for the company. Some AI providers deliberately delegate this decision to the data owners because they know the data best, and these owners must train the AI responsibly. In addition, data access should only be granted to authorized people.
  • The question of copyright: Earlier AI models were trained on crawled internet and book content - content that contained protected elements, one of the areas the AI Act aims to clean up. If companies have used such records without labeling them accurately, they may have to start over.
  • Understand the content of the data: This is an essential task. For data owners to make correct decisions, the value and content of the data must be clear. In everyday practice this task is gigantic: most companies have accumulated mountains of information they know nothing about. AI and machine learning can help massively here and alleviate one of the most complex problems by automatically identifying and classifying a company's data according to its own records strategy. Predefined filters immediately fish compliance-relevant data such as credit card numbers, mortgage data or building plans out of the data pond and mark them. This analysis also clarifies some security parameters and can, for example, detect unsecured data. As this AI examines the company data, it develops a company-specific language, a company dialect. The longer it works and the more company data it examines, the more accurate its results become. The charm of this AI-driven classification becomes particularly evident when new requirements have to be met: whatever the AI Act brings up in the long term, ML- and AI-driven classification will be able to search for these additional attributes and give the company a measure of future-proofing.
  • Control data flows: Once the data is ranked and classified with the correct characteristics, the underlying data management platform can automatically enforce rules without the data owner having to intervene. This reduces the chances of human error and risk. A company could enforce, for example, that certain data such as intellectual property or financial records may never be passed on to other storage locations or external AI modules. Modern data management platforms control access to this data by automatically encrypting it and requiring users to authenticate themselves via access controls and multi-factor authentication.
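The filter-based classification described above can be sketched in a few lines. This is a simplified, rule-based stand-in for the ML-driven classification the article describes; the filter names, tags and the mortgage keyword are illustrative and not taken from any specific product. The credit card filter combines a digit-run pattern with a Luhn checksum to reduce false positives.

```python
import re

# Matches runs of 13-16 digits, optionally separated by spaces or hyphens.
CARD_RE = re.compile(r"\b(?:\d[ -]?){13,16}\b")

def luhn_valid(number: str) -> bool:
    """Luhn checksum, used to weed out random digit runs that only look like card numbers."""
    digits = [int(d) for d in number if d.isdigit()]
    checksum = 0
    for i, d in enumerate(reversed(digits)):
        if i % 2 == 1:
            d *= 2
            if d > 9:
                d -= 9
        checksum += d
    return checksum % 10 == 0

def classify(text: str) -> set[str]:
    """Return the set of compliance tags whose predefined filters match this data."""
    tags = set()
    for match in CARD_RE.finditer(text):
        if luhn_valid(match.group()):
            tags.add("payment-card")
    if re.search(r"\bmortgage\b", text, re.IGNORECASE):
        tags.add("mortgage")
    return tags
```

In a real platform such filters would run alongside trained classifiers and be tuned to the company's own records strategy; the point is only that each piece of data ends up with machine-readable tags.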
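The automatic rule enforcement from the last bullet can likewise be sketched as a small policy check. This assumes tags were assigned upstream by classification; the tag names, destination names and policy sets are hypothetical, not a vendor API.

```python
# Destinations the policy treats as outside the company's control.
BLOCKED_DESTINATIONS = {"external-ai", "public-share"}

# Tags whose data may never leave controlled storage.
RESTRICTED_TAGS = {"intellectual-property", "financial"}

def transfer_allowed(tags: set, destination: str) -> bool:
    """Deny moving restricted records to external targets; allow everything else."""
    if destination in BLOCKED_DESTINATIONS and tags & RESTRICTED_TAGS:
        return False
    return True
```

Because the check runs on every transfer, the data owner never has to intervene manually, which is exactly the human-error reduction the article describes.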

The AI Act is getting teeth

The AI Act shares another trait with the GDPR and DORA: once enacted, sanctions for non-compliance will be enforced. Anyone who violates important requirements of the AI Act must expect penalties of up to 35 million euros or 7 percent of global annual turnover. For comparison: supervisory authorities have imposed fines amounting to 4.5 billion euros since the GDPR came into force through February 2024. The AI Act is likely to be published this summer and will come into force 20 days after publication in the EU's Official Journal. Most of its provisions apply after 24 months. The rules for banned AI systems apply after six months, the rules for GPAI after twelve months and the rules for high-risk AI systems after 36 months.

More at Cohesity.com

 


About Cohesity

Cohesity greatly simplifies data management. The solution makes it easier to secure, manage and create value from data - across the data center, edge and cloud. We offer a full suite of services consolidated on a multi-cloud data platform: data backup and recovery, disaster recovery, file and object services, dev/test, and data compliance, security and analytics. This reduces complexity and avoids mass-data fragmentation. Cohesity can be provided as a service, as a self-managed solution, and through Cohesity partners.


 
