The EU Artificial Intelligence (AI) Act: A Pioneering Framework for AI Governance
Wednesday, April 17, 2024

On March 13, 2024, the European Parliament approved the EU Artificial Intelligence (AI) Act, marking a significant milestone in AI governance. This comprehensive regulation aims to ensure responsible AI development, protect fundamental rights, and foster innovation within the European Union, while giving stakeholders time to align with its requirements.

Of course, this aspiration is generally true of AI governance initiatives being rolled out in both the public and private sectors. The words are familiar. The values are routinely espoused by a wide range of stakeholders.

But the AI Act is truly a pivotal milestone. In adopting it, the EU has again taken a leadership role in technology regulation, staking out a reference point that, at least for the next few years, will frame how the United States, other governments, and companies building and using AI models approach AI governance and regulatory tools.

Like the GDPR, the AI Act has a broad scope, covering all AI systems that are sold, offered, put into service, or used within the EU. Providers and deployers of AI systems located outside the EU are captured by the AI Act if the output of their systems is used in the EU. Companies based in the EU that provide AI systems are captured even if they do not deploy those systems in the EU. There are limited exceptions for personal and research use.

Key aspects of the AI Act include:

1. Broad Definition of Regulated AI Systems

The AI Act broadly defines a regulated AI system as:

A machine-based system designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.

2. A Tiered, Risk-Based Approach to Regulation

The AI Act categorizes AI systems into four risk tiers: unacceptable, high, limited, and minimal or no risk. As the risk associated with a particular category of system rises, stricter rules apply.

Unacceptable AI Systems: Systems that violate fundamental EU rights are banned outright.

Examples of prohibited AI systems include:

AI Systems That Deploy Subliminal and Manipulative Techniques

  • Systems that subtly influence behavior or decision-making fall into this category. Such techniques can be harmful and undermine individual autonomy.