Recent discussions surrounding the EU Artificial Intelligence Act have accelerated as the European Parliament took the Act one step closer to becoming effective law.
On June 14, 2023, the Parliament voted to commence negotiations with the Council of the European Union on the final version of the Act. Since that vote, Members of the European Parliament (MEPs) have indicated that negotiations will likely focus on the categorization of “real-time” biometric identification and the regulation of generative artificial intelligence (AI) and foundation models. MEPs have also stated their intention to keep international standards in mind when negotiating and drafting the final version of the Act, as the European Parliament seeks to maintain regulatory alignment with the United States.
Overview of EU AI Act
As discussed in our previous blog post, the EU Artificial Intelligence Act advanced toward enactment when the European Parliament's Committee on Civil Liberties, Justice and Home Affairs and Committee on the Internal Market and Consumer Protection voted to approve the Act on May 11, 2023. The Act progressed further when the European Parliament voted to adopt its negotiating position with 499 votes in favor, 28 against, and 93 abstentions. With this vote, the European Parliament has begun negotiations with the Council of the European Union on the final version of the law, with hopes of reaching an agreement by the end of 2023.
As previously outlined, the main goals of the Act are to establish a human-centric legal framework for AI and to ensure transparent, safe, trustworthy, environmentally friendly, and non-discriminatory AI applications. The Act defines four risk levels for AI applications, each with corresponding regulatory requirements. Many are watching the “unacceptable risk” and “high risk” levels closely, as applications in those categories will face much stricter requirements. The risk-level classifications remain unchanged since the initial draft, but one new use has been added to the “high risk” level: the use of AI systems to influence election outcomes. Nonetheless, the Act remains subject to modification as Parliament negotiates with the Council.
Points of Negotiation Between the European Parliament and Council of the European Union
In a press conference following the vote, the president of the European Parliament and several MEPs discussed the current state of the Act and likely points of negotiation with the Council. Although there was a last-minute divergence within Parliament over the risk-level categorization of “real-time” biometric identification, the MEPs emphasized keeping it under the “unacceptable risk” level, meaning it would be banned by default. Discussions are still anticipated over exceptions to this ban for time-sensitive law enforcement scenarios, such as a suspected child abduction or a terrorist attack. The expected point of debate is whether judicial authorization should be required before real-time biometric identification may be used in these exceptional situations.
Further, as currently drafted, the Act regulates generative AI and foundation models separately from the risk-categorization scheme. The MEPs explained that the intrinsic risk of generative AI platforms and foundation models goes beyond the context in which they are used and instead lies in how the models are built. Because foundation models are trained on very large datasets, the European Parliament expressed concern about the illegal and harmful content such models could be used to create. This intrinsic risk was identified as a driving force behind the draft's current approach to regulating generative AI and foundation models. That approach focuses on consumer protection through greater transparency: requiring systems to disclose when content is AI-generated and when images are deepfakes, implementing safeguards against the generation of illegal content, and requiring the publication of detailed summaries of any copyrighted data used for model training.
The EU AI Act Beyond the EU
As the first comprehensive legal framework for AI, the EU AI Act could set an international standard, and European parliamentarians have kept this in mind throughout the drafting process. There has been considerable debate over the Act's definition of AI, but the Act has settled on a definition that aligns with those of both the Organisation for Economic Co-operation and Development (OECD) and the National Institute of Standards and Technology (NIST) in the United States. The aim of this approach is to align AI definitions internationally and give companies developing and using AI greater conformity across markets. While the Act prioritizes consumer rights and privacy, it equally seeks to promote innovation in the field of AI. To that end, MEPs discussed involving those developing AI, from start-ups to large companies, in setting technical standards. Throughout negotiation and continued drafting, EU lawmakers acknowledge the importance of collaborating with other like-minded democracies to achieve global convergence on AI regulation.
Beyond the EU, the human-centric principles underlying the EU AI Act are gaining traction in the United States, where the National AI Commission Act was introduced on June 20, 2023. This bipartisan, bicameral bill proposes a national commission to review the current approach to AI regulation, make recommendations, and develop a risk-based framework for AI regulation. Although the National AI Commission Act is at a very early stage, it enjoys considerable support amid the growing demand for AI legislation.
In further recent developments, U.S. Senator Michael Bennet introduced an updated bill to create a Federal Digital Platform Commission to regulate AI. This legislation seeks to expand the definition of a digital platform to include companies that offer “content primarily generated by algorithmic processes” and to expand the definition of algorithmic processes to include “the use of personal data to generate content or to make a decision”. The bill also calls for algorithmic audits and public risk assessments for significant platforms.
Meanwhile, U.S. Senate Majority Leader Chuck Schumer unveiled the SAFE Innovation Framework on June 21, 2023, outlining four pillars for future bipartisan collaboration on AI legislation: security, accountability, protecting our foundations, and explainability. Senator Schumer is encouraging Senate committees to work with both Republicans and AI experts to begin drafting regulatory proposals.
As AI continues to evolve rapidly, it is crucial to watch these developments and their potential impact on the technology industry and society at large. The EU AI Act and the U.S. initiatives are significant steps toward establishing a regulatory framework for AI, setting the stage for future discussions and actions in this space.