Do I Really Have To? A Two-Part Framework for Determining if the EU AI Act Applies to You - Part 1: What’s Your Role?
Monday, June 10, 2024

This is the second installment in our series of client advisory articles in which we unpack the European Union’s (EU’s) Artificial Intelligence Act (AI Act) to help you evaluate if, when, and how you need to comply with the AI Act’s requirements.

Our first installment provided an overview of the AI Act, its risk-based regulatory approach and anticipated compliance timeline. See “The European Union Has Assigned Your AI Some Homework.” With that basic understanding, the immediate next question becomes: “Do I really have to?”

Answering this question requires an understanding of both your place in the AI lifecycle and the risk category your AI use case falls within. This article focuses on the lifecycle analysis; our next article will discuss risk categories in more detail.

Let’s Start with the Basics: Is the AI Act Relevant to Your Business or Organization?

As discussed in our overview article, while the AI Act broadly defines AI and has a strong extraterritorial effect, it does not pull every AI use case into its regulatory orbit. Two preliminary questions can help you determine whether the Act is relevant at all:

1. Do we develop, market, deploy, operate, or otherwise use technology that fits within the EU’s definition of Artificial Intelligence (AI)?

If you are reading this, then the answer is most likely yes. The AI Act broadly defines AI as a “machine-based system” that “infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments” and that “may exhibit adaptiveness after deployment.” Article 3(1).

As a result, the AI Act’s definition covers a wide range of existing “smart” products, from the thermostat in your home to the algorithmic suggestions on your favorite shopping platform and, of course, the general-purpose AI you use instead of that search engine to find dad-jokes because the search engine kept serving you links to suspiciously free electronics.

Breaking the definition down to its basic parts, consider the technology and ask:

  • Is it machine-based? Remember, the AI Act contemplates human involvement in AI system operation; indeed, in some instances, the AI Act requires human intervention. A human in the mix does not negate a system’s possible classification as regulated AI.
  • Does it respond to inputs (i.e., data)? Odds are that it does. 
  • Does it generate “predictions, content, and recommendations” or make “decisions”? Really, is there anything software-related that does not fall into this category? 
  • Can those outputs influence physical or “virtual” environments? Emphasis on the word “can,” as this does not mean it must.
  • Finally, does it exhibit adaptiveness after deployment? It doesn’t have to! The use of the word “may” in the definition indicates it is not required, although it is a strong indicator that something may qualify as an AI system.
  • Bonus Question: Is our AI capable of performing a wide range of tasks?

If you conclude it is indeed AI, then you now have a bonus question: Is it general-purpose AI? Focus on its capabilities, not how you intend to use or market it. It can still qualify as general-purpose AI even if you intend to use or market it only for a specific purpose, as it does not matter how “the model is placed on the market.” Article 3(63). We will dig deeper into this when we cover the second part of our framework, which asks what risk classification your specific use case and AI system fall into, in our next article in this series.
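For organizations that like to operationalize such checklists, the questions above (including the bonus question) can be captured in an internal screening script. The sketch below is purely illustrative and is not a legal test; the field names and the simple conjunctive logic are our own assumptions about how a first-pass inventory screen might look.

```python
from dataclasses import dataclass

@dataclass
class SystemProfile:
    """Screening record for one candidate system (field names are our own)."""
    machine_based: bool               # runs as software/hardware, even with humans in the loop
    responds_to_inputs: bool          # consumes data it receives
    generates_outputs: bool           # predictions, content, recommendations, or decisions
    can_influence_environments: bool  # outputs *can* influence physical or virtual environments
    adaptive_after_deployment: bool   # "may" exhibit adaptiveness: an indicator, not a requirement
    wide_range_of_tasks: bool         # the bonus question: capability, not marketing

def looks_like_ai_system(p: SystemProfile) -> bool:
    """Rough screen mirroring the Article 3(1) elements discussed above.

    Adaptiveness is deliberately omitted from the test: the definition says
    a system "may" exhibit it, so its absence does not disqualify anything.
    """
    return (p.machine_based
            and p.responds_to_inputs
            and p.generates_outputs
            and p.can_influence_environments)

def looks_like_gpai(p: SystemProfile) -> bool:
    """The bonus screen: capability-based, regardless of how the model is marketed."""
    return looks_like_ai_system(p) and p.wide_range_of_tasks

# Example: a recommendation feature embedded in a shopping platform.
rec_engine = SystemProfile(True, True, True, True, False, False)
print(looks_like_ai_system(rec_engine))  # True -- likely within the definition
print(looks_like_gpai(rec_engine))       # False -- single-purpose capability
```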

2. Do our AI-related operations intersect with the EU in any manner?

This is an intentionally vague question, as there are many ways in which an AI system or general-purpose AI model can reach the EU and thereby come within the scope of the AI Act.

For example, the AI Act automatically applies if you are based in the EU. It also automatically applies if you place the AI system on the EU market or put it into service there. But the AI Act as currently drafted also applies whenever mere outputs of the AI system are used, or intended to be used, in the EU. This is even more expansive than the extraterritorial reach of the General Data Protection Regulation (GDPR), since it could capture AI systems that never formally entered the EU market (e.g., were not marketed, put into service, or otherwise placed on the EU market).

There are open questions around how this extraterritorial component will ultimately be interpreted by EU regulators and the degree to which intent will be required (i.e., whether one must intend for the AI output to be used in the EU for the Act to apply). The final language is expected any day now and will hopefully bring some clarity on this point.

In any event, this extraterritoriality raises critical questions that organizations should consider when evaluating if and how the AI Act applies to them. Do you intend to use any of the outputs within the EU? Do you merely forward the outputs to EU customers for potential downstream use? If so, then the AI Act arguably applies to you – even if you are a US-centric business that is not formally marketing your AI system or putting it on the EU market.

If you are marketing or offering AI-related services in some way in the EU (by offering the AI system directly to customers or offering AI-enhanced services or products, for example), then the AI Act more clearly and directly applies.

The AI Act also covers any activity in which you use AI to support your operations whenever there is a nexus with the EU market, such as AI-assisted monitoring and management of EU employees. AI can even be a feature embedded within a broader tool rather than a standalone system. To effectively gauge relevancy, remember to inventory all AI features, tools, and systems within your business operations.
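For illustration only, the territorial triggers discussed in this section could be screened the same way. The function below is our own simplification, not the Act's language, and it deliberately treats any single trigger as sufficient.

```python
def has_eu_nexus(based_in_eu: bool,
                 placed_on_eu_market: bool,
                 put_into_service_in_eu: bool,
                 outputs_used_or_intended_in_eu: bool) -> bool:
    """Illustrative screen for the territorial triggers discussed above.

    Any one trigger is enough, and note that the last one can reach systems
    that never formally entered the EU market.
    """
    return (based_in_eu
            or placed_on_eu_market
            or put_into_service_in_eu
            or outputs_used_or_intended_in_eu)

# A US-centric business that merely forwards AI outputs to EU customers:
print(has_eu_nexus(False, False, False, True))  # True -- the Act arguably applies
```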

Next: Let’s Assume the AI Act is Relevant. How Does it Apply to You, Specifically?

You have determined that you use AI and that your AI-related operations intersect with the EU market. It’s time to define your role-based responsibilities under the AI Act.

The AI Act creates regulatory obligations based on risk categories and relies on a shared responsibility model to ensure AI is comprehensively regulated throughout its lifecycle. See Recital 84. This enables the EU to impose role-specific obligations throughout the lifecycle, which vary depending on the type of AI system, organizational role, and risk level involved (the higher the risk, the greater the compliance obligations). While this analysis is multifaceted, it can be distilled to two key, context-specific questions:

1. The Role: Are We a Provider, Deployer, or Importer/Distributor?

The AI Act identifies three key sets of regulated entities: AI Providers, AI Deployers, and AI Distributors/Importers.

  • Provider: Individual or entity that develops an AI system or general-purpose AI model, or has one developed, and places it on the market or puts it into service under its own name or trademark (whether for payment or free of charge). Article 3(3).
  • Deployer: Individual or entity that uses an AI system under its authority (except where the AI system is used in the course of a personal, non-professional activity). Article 3(4).
  • Distributor/Importer: Any natural or legal person within the supply chain that:
    • Distributor: Makes an AI system available on the EU market (regardless of whether that distributor is located in the EU). Article 3(7).
    • Importer: Is located or established in the EU and places on the EU market an AI system that bears the name or trademark of a natural or legal person established outside the EU. Article 3(6).

To put it simply, AI ‘Providers’ are developers, ‘Deployers’ are users, and ‘Distributors/Importers’ are supply chain entities that help get an AI system on the EU market.
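As a purely illustrative aid, this taxonomy can be sketched as a cumulative classifier. The criteria below are our own shorthand for the Article 3 definitions, not a substitute for them, and the function names are hypothetical.

```python
from enum import Enum, auto

class Role(Enum):
    PROVIDER = auto()     # develops (or commissions) and supplies under its own name or trademark
    DEPLOYER = auto()     # uses the system under its own authority
    DISTRIBUTOR = auto()  # supply chain entity making the system available on the EU market
    IMPORTER = auto()     # EU-established entity placing a non-EU brand's system on the EU market

def classify_roles(develops_or_commissions: bool,
                   supplies_under_own_name: bool,
                   uses_under_own_authority: bool,
                   makes_available_on_eu_market: bool,
                   established_in_eu: bool,
                   brand_established_outside_eu: bool) -> set[Role]:
    """Returns every role that applies; roles can be cumulative (Recital 83)."""
    roles: set[Role] = set()
    if develops_or_commissions and supplies_under_own_name:
        roles.add(Role.PROVIDER)
    if uses_under_own_authority:
        roles.add(Role.DEPLOYER)
    if makes_available_on_eu_market:
        roles.add(Role.DISTRIBUTOR)
    if established_in_eu and makes_available_on_eu_market and brand_established_outside_eu:
        roles.add(Role.IMPORTER)
    return roles

# An EU company that resells (and internally uses) a US vendor's AI system:
print(classify_roles(False, False, True, True, True, True))
# Deployer, Distributor, and Importer all apply (set order may vary)
```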

1(a). Stress-Test Questions for Role Verification

But what happens if your scenario is not clear-cut, such as when multiple parties are involved? How can you stress-test whether you are a Provider, Deployer, or Importer/Distributor?

Consider each AI system (including GPAI) and ask:

  • Source: Was the AI system built in-house, developed to our specifications (e.g., by a third party that built or customized it per our design instructions and under our control), or sourced off-the-shelf from a third-party service provider?
  • Timing and Purpose: Where do we sit within the AI product lifecycle, and what is our goal? Are we involved in its design? Its marketing and distribution? Its deployment, integration, and use? Where do we fit into the overall supply chain for this AI system?
    • Supply Chain Role: Are we facilitating the AI lifecycle by helping bring an AI system or GPAI model to the EU, e.g., by placing it on the market, putting it into service, or conveying outputs for downstream use in the EU?
  • Control: To what degree are we exercising control over the AI system’s design, training, or deployment and use? If the AI is from a third-party service provider, is it being deployed and used per the third-party’s instructions for use? In a modified form? Outside of the original provider’s instructions for use?

The answers will differ for each AI system because of the numerous factors involved. But they are a critical component in identifying your role-based responsibilities and obligations under the AI Act, and they simultaneously define the parameters that your organization should not exceed without adjusting its compliance expectations and strategy.
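One way to capture those answers is a per-system record in your AI inventory. The record format below is hypothetical (our own design) and simply mirrors the Source, Timing and Purpose, and Control questions above.

```python
from dataclasses import dataclass

@dataclass
class RoleStressTest:
    """Hypothetical per-system record mirroring the stress-test questions above."""
    system_name: str
    source: str              # "in-house", "outsourced-to-spec", or "third-party"
    lifecycle_position: str  # design, marketing/distribution, or deployment/integration/use
    supply_chain_role: str   # e.g., placing on market, putting into service, conveying outputs
    control_notes: str       # control over design/training/deployment; any modifications

inventory = [
    RoleStressTest(
        system_name="resume-screening tool",
        source="third-party",
        lifecycle_position="deployment/integration/use",
        supply_chain_role="none -- internal use only",
        control_notes="used per vendor's instructions for use; no modifications",
    ),
]
for record in inventory:
    print(f"{record.system_name}: {record.source}, {record.lifecycle_position}")
```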

1(b). Beware of Scope Creep!

Operating within your AI role and establishing internal safeguards will be a critical component of your AI Act compliance strategy. This is especially true since the AI Act explicitly recognizes that roles can evolve over time and that it is necessary for “legal certainty” to clarify that there are certain conditions, particularly for high-risk systems, in which any “distributor, importer, deployer or other third-party” could become a “provider” and “assume all the relevant obligations.” Recital 84.

A Deployer that acts beyond its role as a user—for example, by deploying an AI system in circumstances outside those permitted by the Provider’s instructions for use (which can include use limitations meant to lower the system’s risk classification)—can become a Provider under the Act and unintentionally assume all the related responsibilities. Article 25(1).

Alternatively, parties may play more than one role at a time and be cumulatively responsible for all obligations associated with them (e.g., acting as both a distributor and an importer simultaneously). Recital 83. While contract language between the parties can offer additional protection by defining one’s role and responsibilities, it can be outweighed whenever a party exceeds those parameters in practice. Article 25, Recital 84.
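To make the scope-creep risk concrete, the guardrail below is a hypothetical check (our own construction, not the Act's test) that a compliance process might run against each Deployer-held system before its use expands.

```python
def scope_creep_check(contracted_role: str,
                      within_instructions_for_use: bool,
                      substantially_modified: bool) -> str:
    """Hypothetical guardrail: contract labels alone do not control; practice
    outside the agreed parameters can shift obligations (see Article 25)."""
    if contracted_role == "deployer" and (not within_instructions_for_use
                                          or substantially_modified):
        return "Re-assess: may now bear Provider obligations under the Act"
    return "Within current role parameters"

# A Deployer that strays outside the Provider's instructions for use:
print(scope_creep_check("deployer",
                        within_instructions_for_use=False,
                        substantially_modified=False))
```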

2. The Risk: How risky is our AI system and use case?

You have identified and defined your role(s); next is calculating the risk classification for your intended use case and AI system. Since the AI Act uses a risk-based framework to define role-based responsibilities, the intersection of that two-part analysis will reveal the scope of your compliance obligations under the AI Act. The greater your control over the AI system, the greater your obligations, particularly if the AI system falls higher on the risk spectrum.

Interested in learning more about how to calculate the risk classification for your AI system and use case? Keep an eye out for Part II of this AI Act application framework in our next client advisory article, coming soon!

 
