The EU Artificial Intelligence Act: What’s The Impact?
Monday, August 7, 2023

The EU Artificial Intelligence Act (or “AI Act”) is the world’s first legislation to regulate the use of AI. It leaves room for “technical soft law,” but, being the first of its kind and broad in scope, it will inevitably set principles and standards for AI development and governance. The UK is concentrating more on soft law, working towards a decentralized, principle-based approach. The US and China are working on their own AI regulations, with the US focusing more on soft law, privacy, and ethics, and China on explainable AI algorithms, aiming for companies to be transparent about their purpose. The AI Act marks a crucial step in regulating AI in Europe, and a global code of conduct on AI could harmonize practices worldwide, ensuring safe and ethical AI use. This article gives an overview of the EU AI Act and its main aspects, as well as of other AI legislative initiatives in the European Union and how these are influencing other jurisdictions, such as the UK, the US, and China.

IN DEPTH


The AI Act: The First AI Legislation. Other Jurisdictions Are Catching Up.

On June 14, 2023, the European Parliament achieved a significant milestone by approving the Artificial Intelligence Act (or “AI Act”), making it the world’s first piece of legislation to regulate the use of artificial intelligence. This approval has initiated negotiations with the Council of the European Union, which will determine the final wording of the Act. The final version of the AI Act is expected to be published by the end of 2023, and the Regulation is then expected to become fully effective in 2026. A two-year grace period, similar to the one contemplated by the GDPR, is currently being considered; it would enable companies to adapt gradually and prepare for the changes until the rules come into force.

As the pioneers in regulating AI, the European institutions are actively engaged in discussions that are likely to establish both de facto (essential for the expansion and growth of AI businesses, just like any other industry) and de jure (creating healthy competition among jurisdictions) standards worldwide. These discussions aim to shape the development and governance of artificial intelligence, setting an influential precedent for the global AI community.

Both the United States and China are making efforts to catch up. In October 2022, the US government unveiled its “Blueprint for an AI Bill of Rights,” centered around privacy standards and rigorous testing before AI systems become publicly available. In April 2023, China followed a similar path by presenting draft rules mandating that chatbot-makers comply with state censorship laws.

The UK government has unveiled an AI white paper to provide guidance on utilizing artificial intelligence in the UK. The objective is to encourage responsible innovation while upholding public confidence in this transformative technology.

While the European Parliament’s passage of the Artificial Intelligence Act represents an important step forward in regulating AI in Europe (and, given the Act’s extraterritorial reach, indirectly beyond), a global code of conduct on AI is also under development at the United Nations. It is intended to play a crucial role in harmonizing global business practices concerning AI systems, ensuring their safe, ethical, and transparent use.

A Risk-Based Regulation

The European regulatory approach is based on assessing the risks associated with each use of artificial intelligence.

Complete bans are contemplated for intrusive and discriminatory uses that pose unacceptable risks to citizens’ fundamental rights, their health, safety, or other matters of public interest. Examples of artificial intelligence applications considered to carry unacceptable risks include cognitive behavioral manipulation targeting specific categories of vulnerable people or groups, such as talking toys for children, and social scoring, which involves ranking people based on their behavior or characteristics. The approved draft regulation significantly expands the list of prohibitions on intrusive and discriminatory uses of AI. These prohibitions now include:

  • biometric categorization systems that use sensitive characteristics such as gender, race, ethnicity, citizenship status, religion, and political orientation;

  • “real-time” and “a posteriori” remote biometric identification systems in publicly accessible spaces, with an exception for law enforcement using ex post biometric identification for the prosecution of serious crimes, subject to judicial authorization. However, a complete ban on biometric identification might encounter challenges in negotiations with the Council of the European Union, as certain Member State police forces advocate for its use in law enforcement activities where strictly necessary;

  • predictive policing systems based on profiling, location, or past criminal behavior;

  • emotion recognition systems used in the areas of law enforcement, border management, workplaces and educational institutions;

  • untargeted extraction of biometric data from the Internet or CCTV footage to create facial recognition databases.

In contrast, the uses that need to be “regulated” (as opposed to simply banned) through data governance, risk-management assessments, technical documentation, and transparency criteria are:

  • high-risk AI systems, such as those used in critical infrastructure (e.g., power grids, hospitals, etc.), those that help make decisions regarding people’s lives (e.g., employment or credit rating), or those that have a significant impact on the environment; and

  • foundation models, in the form of Generative AI systems (such as the highly celebrated ChatGPT) and Basic AI models.

High-risk AI systems are artificial intelligence systems that may adversely affect safety or fundamental rights. They are divided into two categories:

  • Artificial intelligence systems used in products subject to the European Union’s product safety legislation, such as toys, automobiles, medical devices, and elevators.

  • Artificial intelligence systems that fall into eight specific areas, which will have to be registered in an EU database:

    (i) biometric identification and categorization of natural persons;
    (ii) management and operation of critical infrastructure;
    (iii) education and vocational training;
    (iv) employment, worker management and access to self-employment;
    (v) access to and use of essential private and public services and benefits;
    (vi) law enforcement;
    (vii) migration management, asylum, and border control;
    (viii) assistance in legal interpretation and enforcement of the law.

All high-risk artificial intelligence systems will be evaluated before being put on the market and throughout their life cycle.

Generative AI systems and Basic AI models can both be considered general-purpose AI because they are capable of performing many different tasks rather than being limited to a single one. The distinction between the two lies in the final output.

Generative AI, like the now-popular ChatGPT, uses neural networks to generate new text, images, videos, or sounds that have never been seen or heard before, much as a human can. For this reason, the European Parliament has introduced higher transparency requirements:

  • companies developing generative AI will have to make it explicit in the output that the content was generated by AI, making it possible, for example, to distinguish deepfakes from real images;

  • companies will have to ensure safeguards against the generation of illegal content; and

  • companies will have to make public detailed summaries of the copyrighted data used to train the algorithm.

Basic AI models, in contrast, do not ‘create’; they learn from large amounts of data and use it to perform a wide range of tasks across a variety of domains. Providers of these models will need to assess and mitigate the possible risks associated with them (to health, safety, fundamental rights, the environment, democracy, and the rule of law) and register their models in the EU database before releasing them to the market.

Next come the limited- and minimal-risk AI applications, such as those used to date for translation, image recognition, or weather forecasting. Limited-risk artificial intelligence systems, including systems that generate or manipulate image, audio, or video content (e.g., deepfakes), should meet minimum transparency requirements that enable users to make informed decisions: users should be informed when they are interacting with AI and can then decide whether they wish to continue using the application.

Finally, exemptions are provided for research activities and AI components provided under open-source licenses.
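
To make the risk ladder described in this section concrete, the sketch below shows how an organization might encode the Act’s tiers in an internal first-pass triage helper. It is a minimal, purely illustrative sketch, not a legal determination: the tier names and obligation summaries track the draft Act as summarized above, but the keyword buckets, function names, and example use cases are our own assumptions.

    # Hypothetical first-pass triage of AI use cases against the draft Act's
    # risk tiers; the keyword buckets and matching logic are illustrative
    # assumptions only, not a compliance tool.
    from enum import Enum

    class RiskTier(Enum):
        UNACCEPTABLE = "prohibited outright"
        HIGH = "conformity assessment, risk management, EU database registration"
        LIMITED = "transparency duties (disclose AI interaction / AI-generated content)"
        MINIMAL = "no AI-specific obligations beyond existing law"

    # Simplified keyword buckets drawn from the prohibited uses and the
    # eight high-risk areas summarized above.
    PROHIBITED_USES = {"social scoring", "predictive policing",
                       "emotion recognition", "untargeted facial recognition"}
    HIGH_RISK_AREAS = {"biometric identification", "critical infrastructure",
                       "education", "employment", "essential services",
                       "law enforcement", "migration", "legal interpretation"}

    def triage(use_case: str, generates_content: bool = False) -> RiskTier:
        """Rough, non-authoritative classification of an AI use case."""
        case = use_case.lower()
        if any(term in case for term in PROHIBITED_USES):
            return RiskTier.UNACCEPTABLE
        if any(term in case for term in HIGH_RISK_AREAS):
            return RiskTier.HIGH
        if generates_content:  # generative systems carry transparency duties
            return RiskTier.LIMITED
        return RiskTier.MINIMAL

    for case, gen in [("CV screening for employment", False),
                      ("chatbot drafting marketing copy", True),
                      ("weather forecasting", False)]:
        tier = triage(case, gen)
        print(f"{case!r} -> {tier.name}: {tier.value}")

Run as written, this prints HIGH for the employment example (area (iv) above), LIMITED for the generative chatbot, and MINIMAL for weather forecasting, mirroring the ladder from complete bans down to minimal-risk uses.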

The European Union and the United States Aiming to Bridge the AI Legislative Gap

The United States is expected to follow Europe closely in developing its own legislation. In recent times, the focus has shifted from a “light touch” approach to AI regulation towards emphasizing ethics and accountability in AI systems, accompanied by increased investment in research and development to ensure the safe and ethical use of AI technology. The Algorithmic Accountability Act, which aims to enhance the transparency and accountability of providers, is still in the proposal stage.

During the recent US-EU ministerial meeting of the Trade and Technology Council, the participants expressed a mutual intention to bridge the potential legislative gap on AI between Europe and the United States, an objective that gains significance given the European Parliament’s approval of the AI Act. To achieve this goal, a voluntary code of conduct on AI is under development; once completed, it will be presented as a joint transatlantic proposal to G7 leaders, encouraging companies to adopt it.

The United Kingdom’s ‘Pro-Innovation’ Approach in Regulating AI

On March 29, 2023, the UK government released a white paper outlining its approach to regulating artificial intelligence. The proposal aims to strike a balance between fostering a “pro-innovation” business environment and ensuring the development of trustworthy AI that addresses risks to individuals and society.

The regulatory framework is based on five core principles:

  • Safety, security, and robustness: AI systems should function securely and safely throughout their lifecycle, with continuous identification, assessment, and management of risks.

  • Appropriate transparency and explainability: AI systems should be transparent and explainable to enable understanding and interpretation of their outputs.

  • Fairness: AI systems should not undermine legal rights, discriminate, or create unfair market outcomes.

  • Accountability and governance: Effective oversight and clear lines of accountability should be established across the AI lifecycle.

  • Contestability and redress: Users and affected parties should have the ability to contest harmful AI decisions or outcomes.

These principles are initially intended to be non-statutory, meaning no new legislation will be introduced in the United Kingdom for now. Instead, existing sector-specific regulators like the ICO, FCA, CMA, and MHRA will be required to create their own guidelines for implementing these principles within their domains.

The principles and sector-specific guidance will be supplemented by voluntary “AI assurance” standards and toolkits to aid in the responsible adoption of AI.

In contrast with the EU AI Act, the UK’s approach is more flexible and perhaps “more proportionate,” relying on sector-specific regulators to develop compliance approaches around central high-level objectives that can evolve as technology and risks change.

The UK government intends to adopt this framework quickly across relevant sectors and domains. UK sector-specific regulators have already received feedback on implementing the principles during a public consultation that ran until June 2023, and we anticipate further updates from each of them in the coming months.

The Difficult Balance between Regulation and Innovation

The ultimate goal of these legislative efforts is to find a delicate balance between the necessity to regulate the rapid development of technology, particularly regarding its impact on citizens’ lives, and the imperative not to stifle innovation or to burden smaller companies with overly strict laws.

Anticipating the level of success is challenging, if not impossible. Nevertheless, the scope for “soft law,” such as setting up an ad hoc committee at the European level, shows promise. “Ultratechnical” matters subject to rapid evolution require clear principles stemming from the value choices made by legislators, as well as the technical competence to understand what is being regulated at any given moment.

Organizations using AI across multiple jurisdictions will additionally face challenges in developing a consistent and sustainable global approach to AI governance and compliance due to the diverging regulatory standards. For instance, the UK approach may be seen as a baseline level of regulatory obligation with global relevance, while the EU approach may require higher compliance standards.

As exemplified by the recent Italian shutdown of ChatGPT (see ChatGPT: A GDPR-Ready Path Forward?), we have witnessed firsthand the complexities involved. The Italian data protection authority assumed a prominent role, and rather than contesting the suspension of the technology in court, the business chose to cooperate. As a result, the site was reopened to Italian users within approximately one month.

In line with Italy, various other data protection authorities are actively looking into ways to influence the development and design of AI systems. For instance, the Spanish AEPD has published audit guidance for data processing involving AI systems, while the French CNIL has created a department dedicated to AI with open self-evaluation resources for AI businesses. Additionally, the UK’s Information Commissioner’s Office (ICO) has developed an AI toolkit designed to provide practical support to organizations.

From Safety to Liability: The AI Act as a Precursor to an AI-Specific Liability Regime

The EU AI Act is part of a three-pillar package proposed by the EU Commission to support AI in Europe. The other pillars are an amendment to the EU Product Liability Directive (PLD) and a new AI Liability Directive (AILD). While the AI Act focuses on safety and the ex ante protection of fundamental rights, the PLD and AILD address damages caused by AI systems. Non-compliance with the AI Act’s requirements could also trigger, depending on the risk level of the AI system at issue, different forms and degrees of alleviation of the burden of proof under both the amended PLD, for no-fault product liability claims, and the AILD, for any other (fault-based) claim. The amended PLD and the AILD are less imminent than the AI Act: they have not yet been approved by the European Parliament and, as directives, will require implementation at the national level. Yet the fact that they are coming is of immediate importance, as it gives businesses even more reason to follow, and possibly cooperate and partake in, the standard-setting process currently in full swing.

Conclusion

Businesses using AI must navigate evolving regulatory frameworks and strike a balance between compliance and innovation. They should assess the potential impact of the regulatory framework on their operations and consider whether existing governance measures address the proposed principles. Prompt action is necessary, as regulators worldwide have already started publishing extensive guidance on AI regulation.

Monitoring these developments and assessing the use of AI is key for compliance and risk management. This approach is crucial not only for regulatory compliance but also to mitigate litigation risks with contractual parties and complaints from individuals. Collaboration with regulators, transparent communication, and global harmonization are vital for successful AI governance. Proactive adaptation is essential as regulations continue to develop.
