
Understanding Key Definitional Concepts Under the EU AI Act

Bloomberg Law

The EU Artificial Intelligence Act (EU AI Act) establishes a comprehensive, risk-based framework for the regulation of AI systems. First proposed in April 2021, the EU AI Act went through several years of drafting and negotiation before passage, reflecting both the rapid evolution of AI and the competing policy goals that surround such a transformative technology.

Like the General Data Protection Regulation (GDPR) before it, the EU AI Act takes a prescriptive approach toward regulation of digital technology. The law, like the GDPR, has extraterritorial effect, governing the use of AI technologies in the EU even by companies located wholly outside the Union. It also carries the potential for significant fines, reaching up to €35 million or 7% of worldwide annual turnover. Just as the GDPR became a model for worldwide data protection laws when it took effect in May 2018, the EU AI Act is very likely to influence the global development of AI law.

For US companies using or planning to use AI systems, compliance with the EU AI Act will turn on several key definitional concepts. The EU AI Act prescribes different levels of compliance, depending on whether an entity develops or deploys AI and the inherent risk associated with the AI system at issue. This article explores these key definitional questions and the legal obligations that flow from each definitional category.

EU Artificial Intelligence Act Structure

The EU AI Act takes a risk-based approach to regulating AI systems. AI systems are organized into four distinct tiers, aligning with the inherent risks posed by the systems at issue. An entity's compliance obligations are determined in large part by the category of risk associated with the AI tool. The risk categories are:

• Unacceptable Risk. AI that poses an actual threat to individuals and their freedoms, such as tools used for cognitive behavioral manipulation, social scoring, and large-scale real-time tracking. Use of AI systems that fall into this category is strictly prohibited.

• High-Risk AI. AI that has the capacity to negatively affect the safety or fundamental rights of consumers, such as systems used in the context of mass transportation, medical devices, children's toys, management of critical infrastructure, employment, and law enforcement. Use of High-Risk AI systems is subject to authorization by a judicial or other independent body, along with transparency, security, and risk assessment obligations.

• Limited Risk AI. AI systems where the primary risk to individuals comes from a lack of transparency regarding the use of the AI system, such as the use of chatbots. Use of Limited Risk AI systems is generally permitted when fully disclosed to consumers.

• Minimal or No Risk AI. AI systems that pose minimal risks to the rights and freedoms of individuals, such as AI-enabled video games or spam filters. It is expected that the vast majority of AI systems currently used in the EU fall into this category. Use of Minimal or No Risk AI systems is generally permitted without enhanced restrictions.

In addition to these categories, there are separate requirements for “general purpose” AI (GPAI) such as ChatGPT, whose use may fall under any of the above risk categories. These additional GPAI requirements fall primarily on the providers of GPAI and include mandatory technical disclosures and certain copyright protections.

The EU AI Act's risk-based approach reduces the barrier to entry for entities seeking to implement low impact AI tools. At the same time, it ensures that AI tools used in more impactful contexts are subject to heightened oversight and regulatory authorization. Regardless of the AI implemented, the first step for any entity should be to conduct and document an assessment to identify the potential risks and categorize the tool at issue.

Extraterritorial Application

Like the GDPR and other EU laws focused on digital products and services, the EU AI Act applies to many entities based outside the EU. Specifically, the Act applies to:

• Providers and deployers of AI systems established in a country outside of the EU if the output produced by the AI system is used in the EU; and

• Providers of AI systems and GPAI models that place such systems or models on the EU market or put them into service in the EU.

Critically, entities based outside the EU must determine whether their AI systems generate outputs that may reach the EU market. This output-based prong is likely to have a significant impact on contracting and terms of use for AI tools, regardless of where those tools are used.

Penalties for Non-Compliance

Again, much like the GDPR, the EU AI Act carries significant penalties for non-compliance. Penalties vary based on the type and significance of the breach, and can reach up to €35 million or 7% of global turnover.

Key Questions for Self-Classification Under the EU AI Act

There are a number of key questions that entities must consider in determining the scope of their EU AI Act compliance obligations. This section explores these key issues.

Provider or Deployer?

The first question companies should address is whether they qualify as a “provider” or “deployer” of AI. The EU AI Act creates separate obligations for “providers” and “deployers” of AI:

• A provider is an entity that develops an AI system or a GPAI model, or that has an AI system or model made on its behalf and places such a system on the market or into service under its own name or trademark.

• A deployer is an entity that uses an AI system under its own authority in a business or other non-personal context. The term would include any for-profit or non-profit business that licenses AI and uses that AI technology internally or externally.

Generally, providers’ obligations under the EU AI Act are much more demanding, and involve implementation of technical standards as well as disclosures regarding the development and training of an AI tool. Deployers, on the other hand, are tasked with more use-based transparency and control requirements that are generally less onerous than the testing and disclosure obligations that providers face.

Most companies using or considering the use of AI likely qualify as deployers. These companies typically license AI tools like ChatGPT or Microsoft Copilot for internal or external use, but aren't otherwise developing their own AI. A more complicated characterization question can arise where a company licenses an off-the-shelf AI tool and then customizes it for its own use. Doing so can shift an entity from deployer to provider, triggering materially different compliance obligations.

Operations in EU?

As noted, the EU AI Act has extraterritorial application that in some ways is more onerous than even the GDPR's. For example, while the GDPR applies to US companies that lack physical operations in the EU, it does so only where those companies target individuals in the EU with goods and services. The EU AI Act, by contrast, has no analogous targeting criterion and will apply to US companies that deploy AI whose outputs are used in the EU.

It is not yet clear what test will govern whether an AI output is deemed to be "in use" in the EU. However, for many US-based companies, some common uses of AI likely fall within the scope of the EU AI Act. For example, chatbots with which EU website visitors interact, or AI used to screen EU-based applicants for employment, will likely trigger EU AI Act compliance obligations.

Prohibited AI?

A key question for US companies to consider is whether they are developing or deploying AI systems that the EU AI Act characterizes as presenting an unacceptable risk to the fundamental rights of individuals. These AI systems are prohibited under the Act. Prohibited AI systems include those that deploy subliminal techniques beyond a person's consciousness or use manipulative or deceptive techniques that materially distort a person's behavior. Examples include:

• Cognitive behavioral manipulation of people or vulnerable groups;

• Social scoring (i.e., classifying people based on behavior, socio-economic status or personal characteristics);

• Real-time and remote biometric identification systems, such as facial recognition.

Entities that supply or use prohibited AI systems may be subject to the highest tier of administrative fines. The ban on prohibited AI becomes effective six months after the EU AI Act enters into force, which occurs 20 days after the Act's publication in the Official Journal.

High-Risk AI?

Companies should evaluate whether the AI systems they develop or deploy are considered "high risk" under the Act. High-risk AI systems are those that create a significant risk of harm to the health, safety, or fundamental rights of individuals. High-risk AI systems include: (1) AI systems used in products falling under the EU's product safety legislation (e.g., toys, aviation, cars, medical devices); and (2) AI systems required to undergo a conformity assessment before commercial use.

The EU AI Act subjects high-risk AI systems to significant compliance obligations. Among other requirements, companies that use high-risk AI systems must establish, implement, document, and maintain a risk management system throughout the high-risk AI system's lifecycle and maintain a data governance program to ensure that training, validation, and testing datasets are relevant, sufficiently representative and, to the extent possible, free of errors and complete according to the intended purpose. High-risk AI systems must also be designed to allow deployers to implement human oversight and achieve appropriate levels of accuracy, robustness, and cybersecurity. EU AI Act requirements for high-risk AI systems also vary based on whether an entity is a provider or deployer of such system.

The EU AI Act requires that providers of high-risk AI systems meet a number of pre-market and post-market requirements. These requirements include establishing a quality management system to ensure compliance with the EU AI Act, registering the high-risk AI system, and retaining specified documentation for at least ten years from when the AI system was placed on the EU market or put into service. Post-market obligations include retaining logs generated by the AI system for at least six months, to the extent the logs are within the provider's control; establishing a post-market monitoring system; and taking immediate corrective action where the AI system does not conform with the EU AI Act, including withdrawing, disabling, or recalling it.

Deployers of high-risk AI systems must take appropriate technical and organizational measures to ensure that: (1) such systems are used in accordance with the providers' instructions for use; (2) those assigned to ensure human oversight of the system have the necessary competence, training, and authority, as well as the necessary support; and (3) the input data is relevant and sufficiently representative in view of the intended purpose of the high-risk AI system, to the extent the deployer has control over the input data. Deployers must also inform providers when they have reason to believe that use of the system may present certain adverse risks, suspend use of the system in such cases, and maintain system logs for at least six months where the logs are under the deployer's control. The obligations relating to high-risk AI systems will become effective around July 2027.

General Purpose AI (GPAI) Models?

GPAI models are those that display significant generality, are capable of serving a wide range of distinct tasks, and can be integrated into other AI systems. There are two categories of GPAI models: GPAI models with systemic risk, and GPAI models without such risk. A GPAI model presents systemic risk if it has a significant impact due to its reach in the EU market, or actual or reasonably foreseeable negative effects on public health, safety, public security, fundamental rights, or society as a whole. GPAI models trained using computing power above a defined threshold (more than 10^25 floating-point operations of cumulative training compute) are presumed to present systemic risk.
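For a rough sense of what the 10^25 FLOP presumption means in practice, training compute is often approximated with the heuristic of roughly six floating-point operations per model parameter per training token. The following is a minimal, illustrative sketch only; the heuristic and the example figures are assumptions for demonstration and are not drawn from the text of the Act.

```python
# Illustrative sketch: estimate whether a GPAI model's cumulative training
# compute likely exceeds the EU AI Act's 10^25 FLOP systemic-risk presumption.
# The 6 * parameters * tokens heuristic and the sample figures are assumptions,
# not values specified by the Act.

SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25  # cumulative training compute threshold


def estimated_training_flops(parameters: float, training_tokens: float) -> float:
    """Rough estimate of cumulative training compute in floating-point operations."""
    return 6 * parameters * training_tokens


def presumed_systemic_risk(parameters: float, training_tokens: float) -> bool:
    """True if the estimated training compute exceeds the presumption threshold."""
    return estimated_training_flops(parameters, training_tokens) > SYSTEMIC_RISK_THRESHOLD_FLOPS


if __name__ == "__main__":
    # Hypothetical example: a 70-billion-parameter model trained on 15 trillion tokens.
    flops = estimated_training_flops(70e9, 15e12)
    print(f"Estimated training compute: {flops:.2e} FLOPs")
    print("Presumed systemic risk:", presumed_systemic_risk(70e9, 15e12))
```

A provider relying on any such estimate would still need to assess the other systemic-risk criteria described above, since the compute threshold creates only a presumption.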

All providers of GPAI models must:

• Create and maintain technical documentation, including documentation of the model's training and testing process, and provide it to competent national authorities upon request;

• Make certain technical documentation available to AI system providers that intend to integrate the GPAI model into their systems, including instructions for use and acceptable use policies;

• Maintain policies regarding compliance with copyright laws; and

• Publicly release a sufficiently detailed summary of the data used to train the GPAI model, using a template prescribed by the EU AI Office.

Providers of GPAI models with systemic risk have additional obligations to control and document risks. These obligations include evaluating the GPAI model to identify and mitigate systemic risk, documenting and reporting any serious incidents and related corrective measures to the EU AI Office, and ensuring adequate cybersecurity protection. Providers of GPAI models with systemic risk must also notify the EU Commission that the model presents such a risk. The obligations relating to GPAI models will become effective around July 2025.

Minimal Risk AI?

Minimal risk AI systems are those that do not fall within any of the other categories of AI systems and are not regulated by the EU AI Act.

AI Governance

Interplay Between EU AI Act & US AI Law

In the same way that the GDPR heavily influenced data protection laws throughout the world, it is likely that the EU AI Act will become the standard for many countries’ AI legislation. The effects in the US are already apparent. Colorado recently passed the US's first comprehensive AI legislation, the Colorado AI Act. Like the EU AI Act, the Colorado AI Act imposes obligations on entities for high-risk AI systems based on the entity's role (i.e., developer or deployer).

Over a quarter of US state legislatures have considered some form of AI regulation in the last year. In the absence of federal legislation, the EU AI Act and the Colorado AI Act will likely spur other states to push through AI laws over the next few years.

Using the EU AI Act to Develop Best Practices

A key question for US companies developing or deploying AI is how to manage risk in an evolving regulatory environment. With so many different laws being proposed to address AI—as well as numerous existing laws that already govern AI—companies need to develop a framework for the implementation of AI in a manner that is likely to comply with emerging legal standards. Centering AI governance around the EU AI Act is an effective way to ensure overall compliance because of the scope and likely influence of the EU AI Act on the development of global standards.

The following is a set of best practices that companies can use to govern the development and usage of AI while aligning with the EU AI Act's requirements; an illustrative sketch of how these classification elements might be recorded follows the list.

• Identify the types of AI systems being developed or used by the entity and assign an EU AI Act risk category (e.g., prohibited, high-risk, GPAI models or minimal risk).

• Identify what role the entity plays regarding the AI system (e.g., provider or deployer).

• Develop or update existing AI governance policies and procedures against an internationally recognized AI risk management framework.

• Analyze existing policies and procedures against the EU AI Act requirements considering the risk category and the entity's role with respect to the AI system, including risk management, recordkeeping, human oversight, copyright, and data governance and protection. Develop a plan to address any identified gaps.

• Establish regular audits to ensure ongoing compliance with the EU AI Act.

• Provide training to employees on the EU AI Act based on their roles with respect to the AI system.

• Designate an individual to oversee compliance with the EU AI Act and to provide human oversight of the AI system.

• Monitor for additional rules issued by the EU Commission, which are likely to become models for global rules.
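For teams that maintain an AI system inventory in software rather than spreadsheets, a simple record capturing the classification inputs described above can support the first two steps of this checklist. The sketch below is illustrative only; the field names, categories, and example values are assumptions drawn from this article, not from the Act's text.

```python
# Minimal sketch of an AI-system inventory record supporting EU AI Act
# self-classification. Field names and categories are illustrative assumptions.
from dataclasses import dataclass, field
from enum import Enum


class RiskCategory(Enum):
    PROHIBITED = "unacceptable risk"
    HIGH_RISK = "high-risk"
    LIMITED_RISK = "limited risk"
    MINIMAL_RISK = "minimal or no risk"


class Role(Enum):
    PROVIDER = "provider"
    DEPLOYER = "deployer"


@dataclass
class AISystemRecord:
    name: str
    role: Role                        # provider or deployer
    risk_category: RiskCategory       # assigned after a documented assessment
    eu_output_use: bool               # are outputs used in the EU?
    gpai_model: bool = False          # built on a general purpose AI model?
    human_oversight_owner: str = ""   # individual designated for oversight
    identified_gaps: list[str] = field(default_factory=list)


# Hypothetical example: a customer-service chatbot licensed from a vendor.
chatbot = AISystemRecord(
    name="Website support chatbot",
    role=Role.DEPLOYER,
    risk_category=RiskCategory.LIMITED_RISK,
    eu_output_use=True,
    gpai_model=True,
    human_oversight_owner="Compliance lead",
)
```

However the inventory is kept, the key point is that each system's risk category, the entity's role, and any identified compliance gaps are documented and revisited as the system or its use changes.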

Conclusion

Compliance with the EU AI Act requires that covered entities undergo a series of self-classification exercises, focused initially on whether the entity develops or deploys AI and secondarily on the type of AI systems being implemented. Understanding this classification scheme is a key undertaking for US entities using or considering the use of AI in the EU marketplace. Given the likely impact of the EU AI Act on the development of global AI law, mastering these definitional concepts is not only a key to compliance with the Act, but a cornerstone of AI governance generally.

"Understanding Key Definitional Concepts Under the EU AI Act," by Philip N. Yannella, Sharon R. Klein, Alex C. Nisenbaum, and Karen H. Shin, with assistance from Timothy W. Dickens, was published in Bloomberg Law on August 1, 2024.

Copyright 2024 Bloomberg Industry Group, Inc. (800-372-1033) Understanding Key Definitional Concepts Under the EU AI Act. Reproduced with permission.