What is the EU AI Act and how does it impact cloud GPU processing?

If you’re building an AI-powered product with customers in the European Union, it’s important to understand the regulations governing artificial intelligence and data protection within the EU. 

Two key pieces of legislation, the EU AI Act and the General Data Protection Regulation (GDPR), have significant implications for the development and deployment of machine learning (ML) systems within the European Union. 

Using a cloud provider headquartered outside the EU (e.g., AWS or Azure) doesn't make things easier. It is crucial to understand the potential conflicts between EU data protection regulations and the laws of the country where your cloud provider is based, so you can ensure compliance and safeguard data privacy.

The EU AI Act

The EU AI Act, which is set to become the first comprehensive legal framework for AI, adopts a risk-based approach to regulating AI systems. It classifies AI systems into four categories based on their level of risk and sets out specific requirements and obligations for each category. The EU AI Act will impact any AI tools or providers whose products are used in the European Union. 

The EU AI Act was approved by the European Parliament on March 13, 2024, and will become law shortly after the European Council completes its final review. However, the law will be implemented gradually, with various provisions taking effect in stages through 2026.

Penalties for noncompliance can reach €35 million or 7% of a company's annual worldwide turnover, whichever is higher.

GDPR 

The General Data Protection Regulation (GDPR), in effect since 2018, lays down rules for processing personal data and grants individuals certain rights over their data. While GDPR doesn't expressly address artificial intelligence use cases, its rules apply to any processing of personally identifiable data within the European Union.

Under the General Data Protection Regulation (GDPR), infringements can be fined up to €20 million or 4% of annual worldwide turnover, whichever is greater.
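To make the "whichever is higher" rule concrete, here is a minimal Python sketch comparing the two headline fine ceilings for a hypothetical company. The €2 billion turnover figure is invented for illustration; actual fines depend on the infringement tier and regulator discretion.

```python
# Illustrative only: the "whichever is higher" fine ceilings quoted above.

def max_fine(annual_worldwide_turnover_eur: float,
             fixed_cap_eur: float,
             turnover_pct: float) -> float:
    """Return the higher of the fixed cap and the turnover-based cap."""
    return max(fixed_cap_eur, turnover_pct * annual_worldwide_turnover_eur)

turnover = 2_000_000_000  # hypothetical company: EUR 2B annual turnover

ai_act_cap = max_fine(turnover, 35_000_000, 0.07)  # -> EUR 140,000,000
gdpr_cap = max_fine(turnover, 20_000_000, 0.04)    # -> EUR 80,000,000

print(f"AI Act ceiling: EUR {ai_act_cap:,.0f}")
print(f"GDPR ceiling:   EUR {gdpr_cap:,.0f}")
```

For a company of this size, the turnover-based percentage, not the fixed amount, sets the ceiling under both regulations.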

Who does the EU AI Act apply to?  

The EU AI Act applies to a wide range of entities involved in the development, deployment, and use of AI systems within the European Union. The Act establishes a comprehensive legal framework for AI, focusing on a risk-based approach to regulate AI systems according to their potential impact on individuals and society. 

The EU AI Act applies to: 

  1. Providers of AI systems: Those who develop AI systems or have them developed with the purpose of placing them on the market or putting them into service under their own name or trademark, regardless of whether they are established within the EU or in a third country. 

  2. Users of AI systems: Any natural or legal person, public authority, agency, or other body using an AI system under its authority. 

  3. Importers and distributors of AI systems: Those who place AI systems on the market or put them into service within the EU, as well as those who make substantial modifications to AI systems already placed on the market or put into service. 

  4. Institutions, agencies, and bodies of the EU: When they fall within the scope of the AI Act. 

The EU AI Act applies to both public and private organizations, from startups to established companies building foundation models. It covers a wide range of AI systems, from those posing unacceptable risks (which are prohibited) to high-risk, limited-risk, and minimal-risk systems, each with its own set of requirements and obligations.

Exceptions 

AI systems developed or used solely for military, defence, or national security purposes are excluded from the Act's application, as they are subject to separate legal frameworks. Additionally, AI systems that are components of products already covered by specific sectoral legislation, such as medical devices, aviation, or motor vehicles, are largely addressed through those existing frameworks. Finally, AI systems used in a personal, non-professional capacity, like AI-powered personal assistants or home automation systems, also fall outside the purview of the EU AI Act.

A risk-based approach to AI regulation 

The EU AI Act is all about managing the risks associated with AI systems. It does this by dividing AI systems into four categories based on how much of an impact they could have on people and society (summarized in the sketch after this list):

  1. Unacceptable-risk AI systems: prohibited outright.

  2. High-risk AI systems: allowed only if strict requirements are met.

  3. Limited-risk AI systems: subject to transparency obligations.

  4. Minimal-risk AI systems: no additional obligations.
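The following Python sketch maps the four tiers to hypothetical example systems. The examples are illustrative and loosely based on the Act's categories; real classification depends on the Act's annexes and legal analysis, not a lookup table.

```python
# A deliberately simplified overview of the Act's four risk tiers.
# Examples are hypothetical illustrations, not legal classifications.
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "prohibited"
    HIGH = "strict pre-market requirements"
    LIMITED = "transparency obligations"
    MINIMAL = "no additional obligations"

EXAMPLES = {
    RiskTier.UNACCEPTABLE: ["social scoring by public authorities"],
    RiskTier.HIGH: ["CV screening for hiring", "exam proctoring"],
    RiskTier.LIMITED: ["customer-service chatbot", "image generator"],
    RiskTier.MINIMAL: ["spam filter", "video-game AI"],
}

for tier, examples in EXAMPLES.items():
    print(f"{tier.name:>12}: {tier.value} (e.g., {', '.join(examples)})")
```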

High-risk AI systems 

AI systems that affect people's safety or fundamental rights are classified as high-risk. These include systems used as safety components of products, as well as those deployed in specific areas such as law enforcement, education, and employment. High-risk AI systems are subject to strict requirements before they can be placed on the EU market.

If you're working on a high-risk AI system, you'll need to make sure it ticks all these boxes before you can release it in the EU:

  1. A risk management system maintained across the system's lifecycle.

  2. Data governance ensuring that training, validation, and testing data are relevant and representative.

  3. Technical documentation and automatic record-keeping (logging).

  4. Transparency and the provision of clear information to users.

  5. Human oversight measures.

  6. Appropriate levels of accuracy, robustness, and cybersecurity.

  7. A conformity assessment before the system is placed on the market.

Limited risk AI systems 

The EU AI Act classifies certain AI systems as limited-risk due to their potential to deceive or manipulate users through a lack of transparency. These limited-risk AI systems include:

  1. Chatbots and other systems designed to interact directly with people.

  2. Emotion recognition and biometric categorization systems.

  3. Systems that generate or manipulate synthetic content, such as deepfakes.

For these limited-risk AI systems, the EU AI Act imposes specific transparency obligations to ensure that users are aware they are interacting with an AI system and can make informed decisions. Providers of limited-risk AI systems must:

  1. Inform users that they are interacting with an AI system, unless this is obvious from the context.

  2. Disclose when emotion recognition or biometric categorization systems are in use.

  3. Label AI-generated or AI-manipulated content, such as deepfakes, as artificially generated.
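As a minimal sketch of the first obligation, here is one hypothetical way a chatbot could prepend an AI disclosure to the opening turn of a conversation. The disclosure wording and the generate_reply function are placeholders of our own, not prescribed by the Act or drawn from any real API.

```python
# Hypothetical sketch: disclosing AI involvement on a chatbot's first turn.

AI_DISCLOSURE = "You are chatting with an AI assistant."  # placeholder wording

def generate_reply(prompt: str) -> str:
    """Placeholder for an actual model call."""
    return f"(model output for: {prompt!r})"

def reply_with_disclosure(prompt: str, first_turn: bool) -> str:
    """Prepend the AI disclosure on the first turn of a conversation."""
    reply = generate_reply(prompt)
    return f"{AI_DISCLOSURE}\n\n{reply}" if first_turn else reply

print(reply_with_disclosure("What GPUs do you offer?", first_turn=True))
```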

These transparency requirements increase user awareness and mitigate the potential for deception or manipulation. By ensuring that users know they are interacting with an AI system or consuming AI-generated content, the EU AI Act aims to promote informed decision-making and protect users from being misled.

However, compared to high-risk AI systems, limited-risk systems are subject to fewer requirements and obligations under the Act. Providers of limited-risk AI systems do not need to comply with the more stringent requirements related to risk management, testing, technical robustness, data governance, human oversight, and cybersecurity that apply to high-risk systems. 

Minimal risk AI systems 

The vast majority of AI applications, such as spam filters, recommender systems, and AI in video games, fall into the minimal-risk category. For these minimal-risk AI systems, the EU AI Act does not impose any additional mandatory requirements or obligations on providers beyond those already applicable under existing legislation, such as consumer protection laws or product liability rules.

However, the Act encourages the development of voluntary codes of conduct for providers of minimal-risk AI systems. These codes of conduct are intended to promote best practices, ethical standards, and transparency in the development and deployment of minimal-risk AI systems.

GPAI models and systemic risks 

General Purpose AI (GPAI) models, also known as foundation models, are versatile AI systems that can be adapted to perform a wide range of tasks across various domains. While these models play a key role in AI’s advancement, the EU AI Act considers them to pose unique challenges and systemic risks. 

Concentration of power 

The concern the EU raises is that the development of GPAI models is concentrated among a few large tech companies, which could lead to a concentration of power and influence over a wide range of applications and sectors. 

The EU AI Act presumes that GPAI models trained using a total computing power exceeding 10^25 FLOPs pose systemic risks. Providers of such models are required to notify the European Commission when this threshold is met.
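To get a feel for the scale of that threshold, the sketch below uses the widely cited approximation that training a dense transformer consumes roughly 6 × parameters × tokens FLOPs. The model sizes are hypothetical, and real accounting would follow the Commission's guidance rather than this back-of-the-envelope estimate.

```python
# Back-of-the-envelope check against the Act's 1e25 FLOP threshold,
# using the common "compute ~= 6 * N parameters * D tokens" heuristic
# for dense transformers. Model sizes below are hypothetical.

THRESHOLD_FLOPS = 1e25

def estimated_training_flops(n_params: float, n_tokens: float) -> float:
    """Approximate total training compute for a dense transformer."""
    return 6 * n_params * n_tokens

for name, params, tokens in [
    ("7B params, 2T tokens", 7e9, 2e12),
    ("405B params, 15T tokens", 405e9, 15e12),
]:
    flops = estimated_training_flops(params, tokens)
    flag = "above" if flops > THRESHOLD_FLOPS else "below"
    print(f"{name}: ~{flops:.1e} FLOPs ({flag} the 1e25 threshold)")
```

By this estimate, only very large training runs cross the threshold; a 7-billion-parameter model trained on 2 trillion tokens lands around 8.4 × 10^22 FLOPs, orders of magnitude below it.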

Amplification of biases and discrimination 

Another concern the EU has is that GPAI models trained on vast amounts of data may inadvertently amplify societal biases and discrimination, leading to unfair outcomes across multiple domains. 

Opacity and lack of accountability 

According to some views, the complex and often opaque nature of GPAI models can make it difficult to understand how they arrive at specific outputs, leading to a lack of accountability for their decisions and actions. 

Potential for misuse and dual-use applications 

GPAI models could be misused or adapted for malicious purposes, such as generating disinformation, manipulating public opinion, or enabling surveillance. 

The EU AI Act's Approach to GPAI Models 

To address the systemic risks associated with GPAI models, the EU AI Act introduces specific rules for providers and users of these systems:

  1. All GPAI providers must maintain technical documentation, share information with downstream providers who build on the model, put in place a policy to respect EU copyright law, and publish a summary of the content used for training.

  2. Providers of GPAI models with systemic risk must additionally conduct model evaluations and adversarial testing, assess and mitigate systemic risks, report serious incidents to the European Commission, and ensure adequate cybersecurity protections.

By establishing a regulatory framework for GPAI models, the EU AI Act aims to promote greater transparency, accountability, and oversight, while ensuring that these powerful AI systems are developed and used in a responsible, trustworthy, and human-centric manner. 

Impact of EU AI Act on Cloud GPU Processing 

At the time of writing, the EU AI Act has not yet entered into force, so the full extent of its impact on cloud GPU processing (and data processors) in the European Union is unclear.

It is also unclear how the EU AI Act will interact with the United States CLOUD Act and with other international AI regulations still under review.

As with the introduction of GDPR, we expect the full scope and consequences of the EU AI Act to become better defined as more companies and legal experts weigh in.

If you are looking for a cloud GPU partner with datacenters within the European Union, look no further than DataCrunch. We are based in the European Union and 100% governed under EU legislation. We are fully GDPR compliant as well as ISO 27001 certified.

While you can’t predict the future, you can take steps to be compliant and ready for what is coming!