The ReAIlity of What an AI System Is – Unpacking the Commission’s New Guidelines
Thursday, February 20, 2025

The European Commission has recently released its Guidelines on the Definition of an Artificial Intelligence System under the AI Act (Regulation (EU) 2024/1689). The guidelines were adopted in parallel with the Commission's guidelines on prohibited AI practices (which also entered into application on February 2), with the goal of providing businesses, developers and regulators with further clarification on the AI Act's provisions.

Key Takeaways for Businesses and AI Developers

Not all AI systems are subject to strict regulatory scrutiny. Companies developing or using AI-driven solutions should assess their systems against the AI Act's definition. With these guidelines (and those on prohibited practices), the European Commission is delivering much-needed clarification on the core element of the act: what is an AI system?

The AI Act defines an AI system as a machine-based system designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment. The system, for explicit or implicit objectives, infers from input data how to generate outputs – such as predictions, content, recommendations or decisions – that can influence physical or virtual environments.

One of the most significant clarifications in the guidelines is the distinction between AI systems and “traditional software.”

  • AI systems go beyond rule-based automation and require inferencing capabilities.
  • Traditional statistical models and basic data processing software, such as spreadsheets, database systems and manually programmed scripts, do not qualify as AI systems.
  • Simple prediction models that use basic statistical techniques (e.g., forecasting based on historical averages) are also excluded from the definition.

This distinction ensures that compliance obligations under the AI Act apply only to AI-driven technologies, leaving other software solutions outside of its scope.
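The contrast the guidelines draw can be sketched in code. Below, a hand-written rule and a historical-average forecast stand in for the excluded categories, while a model whose parameters are derived from example data illustrates the kind of inference the definition targets. All function names and numbers are hypothetical, and whether any given system meets the legal definition remains a case-by-case assessment; this is a technical illustration only, not a legal classification.

```python
# 1) Manually programmed rule: the logic is fully specified by the
#    developer, so nothing is inferred from data.
def rule_based_credit_check(income: float) -> bool:
    return income >= 30_000  # hard-coded business rule

# 2) Basic statistical technique: a forecast based on historical
#    averages, the kind of simple prediction model the guidelines
#    also place outside the definition.
def average_forecast(history: list[float]) -> float:
    return sum(history) / len(history)

# 3) A system that infers: the mapping from input to output is
#    derived from example data (here, ordinary least squares fitted
#    by hand) rather than written out by a person.
def fit_linear_model(xs: list[float], ys: list[float]) -> tuple[float, float]:
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
            sum((x - mean_x) ** 2 for x in xs)
    intercept = mean_y - slope * mean_x
    return slope, intercept

slope, intercept = fit_linear_model([1, 2, 3, 4], [2, 4, 6, 8])

def predict(x: float) -> float:
    # Behaviour derived from the training data, not hand-coded.
    return slope * x + intercept
```

The technical dividing line is where the system's behaviour comes from: in the first two functions a developer wrote every step, while in the third the parameters were learned from examples.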

Below is a breakdown of what the guidelines bring for each of the definition's seven components:

  1. Machine-based systems – AI systems rely on computational processes involving hardware and software components. The term “machine-based” emphasizes that AI systems are developed with and operate on machines, encompassing physical elements such as processing units, memory, storage devices and networking units. These hardware components provide the necessary infrastructure for computation, while software components include computer code, operating systems and applications that direct how the hardware processes data and performs tasks. This combination enables functionalities like model training, data processing, predictive modeling, and large-scale automated decision-making. Even advanced quantum computing systems and biological or organic systems qualify as machine-based if they provide computational capacity.
  2. Varying levels of autonomy – AI systems can function with some degree of independence from human intervention. This autonomy is linked to the system’s capacity to generate outputs such as predictions, content, recommendations or decisions that can influence physical or virtual environments. The AI Act clarifies that autonomy involves some independence of action, excluding systems that require full manual human involvement. Autonomy also spans a spectrum – from systems needing occasional human input to those operating fully autonomously. This flexibility allows AI systems to interact dynamically with their environment without human intervention at every step. The degree of autonomy is a key consideration for determining if a system qualifies as an AI system, impacting requirements for human oversight and risk-mitigation measures.
  3. Potential adaptiveness – Some AI systems change their behavior after deployment through self-learning mechanisms, though this is not a mandatory criterion. This self-learning capability enables systems to automatically learn, discover new patterns or identify relationships in the data beyond what they were initially trained on.
  4. Explicit or implicit objectives – The system operates with specific goals, whether predefined or emerging from its interactions. Explicit objectives are those directly encoded by developers, such as optimizing a cost function or maximizing cumulative rewards. Implicit objectives, however, emerge from the system’s behavior or underlying assumptions. The AI Act distinguishes between the internal objectives of the AI system (what the system aims to achieve technically) and the intended purpose (the external context and use-case scenario defined by the provider). This differentiation is crucial for regulatory compliance, as the intended purpose influences how the system should be deployed and managed.
  5. Inferencing capability – AI systems must infer how to generate outputs rather than simply executing manually defined rules. Unlike traditional software systems that follow predefined rules, AI systems reason from inputs to produce outputs such as predictions, recommendations or decisions. This inferencing involves deriving models or algorithms from data, either during the building phase or in real-time usage. Techniques that enable inference include machine learning approaches (supervised, unsupervised, self-supervised and reinforcement learning) as well as logic- and knowledge-based approaches.
  6. Types of outputs – AI systems generate predictions, content, recommendations or decisions that shape both their physical and virtual environments. Predictions estimate unknown values based on input data; content generation creates new materials like text or images; recommendations suggest actions or products; and decisions automate processes traditionally managed by human judgement. These outputs differ in the level of human involvement required, ranging from fully autonomous decisions to human-evaluated recommendations. By handling complex relationships and patterns in data, AI systems produce more nuanced and sophisticated outputs compared to traditional software, enhancing their impact across diverse domains.
  7. Environmental influence – Outputs must have a tangible impact on the system’s physical or virtual surroundings, exposing the active role of AI systems in influencing the environment they operate within. This includes interactions with digital ecosystems, data flows and physical objects, such as autonomous robots or virtual assistants.
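The "potential adaptiveness" described in component 3 can be sketched technically as an online update: the system refines a learned value as new observations arrive after deployment, without being retrained from scratch. The class and scenario below are hypothetical and purely illustrative.

```python
# Minimal sketch of post-deployment adaptiveness: an incremental
# update that adjusts a learned estimate with each new observation.
class OnlineMeanEstimator:
    """Keeps a running estimate that adapts as new data arrives."""

    def __init__(self) -> None:
        self.count = 0
        self.estimate = 0.0

    def update(self, observation: float) -> float:
        self.count += 1
        # Incremental mean: shift the estimate toward the new value.
        self.estimate += (observation - self.estimate) / self.count
        return self.estimate

model = OnlineMeanEstimator()
for reading in [10.0, 12.0, 11.0]:
    model.update(reading)
# The estimate has changed in response to data seen "in the field".
```

As the guidelines note, such self-learning behaviour is possible but not required: a system that never updates itself after deployment can still meet the definition.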

Why These Guidelines Matter

The AI Act introduces a harmonized regulatory framework for AI developed or used across the EU. Core to its scope of application is the definition of “AI system” (which then spills over onto the scope of regulatory obligations, including restrictions on prohibited AI practices and requirements for high-risk AI systems).

The new guidelines serve as an interpretation tool, helping providers and stakeholders identify whether their systems qualify as AI under the act. Among the key takeaways is that the definition is not to be applied mechanically, but rather assessed in light of the specific characteristics of each system.

AI systems are a reA(I)lity; if you have not yet started assessing the nature of the systems you develop or procure, now is the time to do so. While many may consider that the EU AI Act has missed its objective (a human-centric approach to AI that fosters innovation and sets a level playing field), it is here to stay (and its phased application is on track).
