AI Coming To The EU: EU Artificial Intelligence Act's Recent Publication, Next Steps


Highlights

The European Union’s Artificial Intelligence Act establishes the first comprehensive horizontal legal framework for regulating AI systems across the EU

The EU AI Act will enter into force on Aug. 1, 2024, with the majority of its provisions becoming enforceable on Aug. 2, 2026

The Act has broad extraterritorial implications, extending its reach to providers who place AI systems on the market or put them into service within the EU, regardless of their location, potentially including U.S. businesses

The European Union’s Artificial Intelligence Act, Regulation (EU) 2024/1689, was published on July 12, 2024, in the Official Journal of the European Union. This marks the establishment of the first comprehensive horizontal legal framework for regulating AI systems across the EU. The EU AI Act will enter into force on Aug. 1, 2024, with the majority of its provisions becoming enforceable on Aug. 2, 2026.

While the compliance timeline may appear extended, the process of developing an AI compliance program is intricate and time-intensive. It is imperative for businesses to commence their compliance efforts promptly to ensure they are adequately prepared to meet the regulatory requirements.

This landmark legislation, which has been in negotiation and development since 2021, has undergone extensive revisions, aiming to create a harmonized legal environment for the creation, marketing, deployment, and use of AI systems throughout the EU. The Act has broad extraterritorial implications, extending its reach to providers who place AI systems on the market or put them into service within the EU, regardless of their location. Thus, a number of U.S. businesses will fall within its scope, depending on their exact role in the use of AI. It also applies to providers and deployers established outside the EU if the AI system’s output is used within the EU.

The Act covers deployers, importers, and affected individuals within the EU, though it lacks clarity regarding distributors. Certain exemptions are specified within the Act. It does not apply to AI systems developed and used solely for scientific research and development. Activities involving research, testing, and development of AI are exempt from the Act’s provisions until the AI is placed on the market or put into service, although real-world testing is not covered by this exemption. AI systems released under free and open-source licenses are also exempt unless they are classified as high-risk, prohibited, or generative AI.

The EU AI Act adopts a risk-based approach, assigning different regulatory requirements based on the level of risk associated with AI systems.

Medical Uses

AI intended for medical purposes is already regulated as a medical device in Europe and the United Kingdom. It must undergo a thorough assessment before being marketed, in accordance with the EU Medical Device Regulation (EU) 2017/745 (MDR) and the EU In Vitro Diagnostic Medical Devices Regulation (EU) 2017/746 (IVDR). Under the Act, any AI system that is itself a Class IIa or higher medical device, or that is used as a safety component of such a device, is classified as high risk.

High-risk AI systems will need to adhere to a comprehensive set of additional requirements, many of which align with the stringent conformity assessment standards currently mandated by the MDR and IVDR. The Act permits medical device notified bodies to conduct AI conformity assessments, provided their AI competence has been evaluated under the MDR and IVDR. This implies a unified declaration of conformity, although the exact implementation details remain unclear.

Recent Updates

The European Commission has established a new EU-level regulator, the European AI Office, which will operate within the Directorate-General for Communications Networks, Content and Technology.

The AI Office will be responsible for overseeing and enforcing compliance with the AI Act’s requirements for general-purpose AI (GPAI) models and systems across all 27 EU member states. Its duties will include monitoring emerging systemic risks associated with GPAI development and deployment, conducting evaluations of capabilities and models, and investigating potential cases of infringement and non-compliance. To assist GPAI model providers in achieving compliance, the AI Office will develop voluntary codes of practice, adherence to which will offer a presumption of conformity.

The AI Office will spearhead international cooperation on AI matters, strengthen connections between the European Commission and the scientific community – including the forthcoming scientific panel of independent experts – and support joint enforcement actions among member states. It will serve as the secretariat for the AI Board, which coordinates efforts among national regulators. It will also facilitate the establishment of regulatory sandboxes to allow companies to test AI systems in controlled environments and provide information and resources to small and medium-sized enterprises to aid their compliance efforts.

Timeline of Developments

With the publication in the Official Journal, the dates to comply with the regulations are now confirmed. Here is what to expect:

Aug. 1, 2024 – The Act enters into force

Feb. 2, 2025 – Prohibitions on unacceptable-risk AI practices begin to apply

Aug. 2, 2025 – Obligations for general-purpose AI models become applicable

Aug. 2, 2026 – The majority of the Act’s provisions become enforceable

Aug. 2, 2027 – Obligations apply to high-risk AI systems that are regulated products or safety components under EU harmonization legislation

Next Steps

Once the Act becomes operative on Aug. 1, these milestones will follow according to Article 113. The Codes of Practice must be finalized within nine months of the Act’s commencement under Article 56. The European Commission will then have an additional three months, for a total of 12 months, to approve or reject these Codes via an implementing act, based on the advice of the AI Office and the AI Board. The AI Act also mandates the AI Office to facilitate the “frequent review and adaptation of the Codes of Practice.” Given that the standardization process will exceed the timelines set by the AI Act, these Codes of Practice for general-purpose AI model providers will be instrumental in ensuring the effective implementation of the regulation.


© 2024 BARNES & THORNBURG LLP
National Law Review, Volume XIV, Number 214