EU AI Act Proposal and Regulation of Financial Services
Monday, July 17, 2023

The EU is at the forefront of efforts to regulate Artificial Intelligence (AI) to ensure better conditions for the development and use of this innovative technology. In 2020, the European Commission published a white paper on AI, stressing the urgency of addressing the complexity, unpredictability, and autonomous behaviour of certain AI systems. Consequently, in April 2021, the Commission proposed the first EU regulatory framework for AI, the Artificial Intelligence Act (AI Act). On 14 June 2023, the European Parliament adopted its negotiating position on the proposed AI Act, with 499 votes in favour, 28 against and 93 abstentions.

The proposed AI Act will apply to providers marketing or operating AI systems on EU territory, irrespective of whether they are established in the EU or in a third country, to AI users located in the EU, and to providers and users located in a third country where the output produced by the system is used on EU territory. It introduces a definition of AI that aims to keep pace with rapid technological change, defining it as “software that is developed with one or more of the techniques and approaches listed in Annex I and can, for a given set of human-defined objectives, generate outputs such as content, predictions, recommendations, or decisions influencing the environments they interact with”.

The European Parliament has substantially amended the Commission’s proposal. Key changes include:

  • The definition of “AI system” is aligned with the Organisation for Economic Co-operation and Development’s (OECD) definition. An AI system is now defined as “a machine-based system that is designed to operate with varying levels of autonomy and that can, for explicit or implicit objectives, generate outputs such as predictions, recommendations, or decisions that influence physical or virtual environments”.

  • The Parliament’s text further contains several new definitions, including:

    • “Risk” means the combination of the probability of an occurrence of harm and the severity of that harm (an illustrative sketch follows these definitions).

    • “Significant risk” means a risk that is significant because of the combination of its severity, intensity, probability of occurrence and duration of its effects, and its ability to affect an individual, a plurality of persons or a particular group of persons.

    • “Foundation model” means an AI system model that is trained on broad data at scale, is designed for generality of output, and can be adapted to a wide range of distinctive tasks.

    • “General purpose AI system” means an AI system that can be used in and adapted to a wide range of applications for which it was not intentionally and specifically designed.
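
The Act does not prescribe how these factors are to be combined. Purely as an illustration of how the “risk” and “significant risk” definitions could be operationalised, the following sketch assumes the classic risk-management reading in which risk is scored as probability times severity; the scales and the significance threshold are invented for illustration, and the Parliament’s definition of significant risk also weighs intensity, duration and the breadth of persons affected, which this sketch omits.

```python
from dataclasses import dataclass

@dataclass
class HarmScenario:
    """One potential harm from an AI system, scored on illustrative 1-5 scales."""
    probability: int  # likelihood of the harm occurring (1 = rare, 5 = near-certain)
    severity: int     # seriousness of the harm if it occurs (1 = minor, 5 = critical)

def risk_score(scenario: HarmScenario) -> int:
    # "Risk": the combination of the probability of an occurrence of harm
    # and the severity of that harm, here combined by multiplication.
    return scenario.probability * scenario.severity

SIGNIFICANT_RISK_THRESHOLD = 12  # hypothetical cut-off; the Act sets no numeric threshold

def is_significant(scenario: HarmScenario) -> bool:
    return risk_score(scenario) >= SIGNIFICANT_RISK_THRESHOLD

# Example: a moderately likely but severe harm would be flagged as significant.
print(is_significant(HarmScenario(probability=3, severity=5)))  # True
```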

  • Further, the European Parliament’s position establishes a set of general core principles applicable to all AI systems regulated by the AI Act. These principles are:

Human agency and oversight – AI shall be a tool that serves people, respects human dignity and personal autonomy, and functions in a way that can be appropriately controlled and overseen by humans.

Technical robustness and safety – AI systems shall be developed and used in a way that minimises unintended and unexpected harm, is robust in the case of unintended problems, and is resilient against attempts to alter the use or performance of the AI system so as to allow unlawful use by malicious third parties.

Privacy and data governance – AI systems shall be developed and used in compliance with existing privacy and data protection rules, while processing data that meets high standards in terms of quality and integrity.

Transparency – AI systems shall be developed and used in a way that allows appropriate traceability and explainability, while making humans aware that they are communicating or interacting with an AI system, as well as duly informing users of the capabilities and limitations of that AI system and affected persons about their rights.

Diversity, non-discrimination and fairness – AI systems shall be developed and used in a way that includes diverse actors and promotes equal access, gender equality and cultural diversity, while avoiding discriminatory impacts and unfair biases that are prohibited by EU or national law.

Social and environmental wellbeing – AI systems shall be developed and used in a sustainable and environmentally friendly manner, and in a way that benefits all human beings, while monitoring and assessing the long-term impacts on the individual, society and democracy.

  • Prohibited AI systems – Under the Commission proposal, unacceptable-risk AI systems are those considered a threat to people, and they will be banned. They include:

    • Cognitive behavioural manipulation of people or specific vulnerable groups – for example, voice-activated toys that encourage dangerous behaviour in children.

    • Social scoring – classifying people based on behaviour, socioeconomic status or personal characteristics.

    • Real-time and remote biometric identification systems, such as facial recognition.

The European Parliament proposes significant amendments to the list of prohibited AI systems. New prohibitions include:

  • Biometric categorisation systems that categorise natural persons according to sensitive or protected attributes or characteristics or based on the inference of those attributes or characteristics.

  • AI systems that create or expand facial recognition databases through the untargeted scraping of facial images from the internet or CCTV footage.

  • High-risk AI systems – AI systems that negatively affect safety or fundamental rights will be considered high risk. Under the Commission proposal, they fall into two categories:

    • AI systems that are used in products falling under the EU’s product safety legislation. This includes toys, aviation, cars, medical devices, and lifts.

    • AI systems falling into specific areas listed in Annex III, including:

      • Critical infrastructures (e.g. transport) that could put the life and health of citizens at risk.

      • Educational or vocational training that may determine access to education and the professional course of someone’s life (e.g. scoring of exams).

      • Safety components of products (e.g. AI application in robot-assisted surgery).

      • Employment, management of workers and access to self-employment (e.g. CV-sorting software for recruitment procedures).

      • Essential private and public services (e.g. credit scoring denying citizens the opportunity to obtain a loan).

      • Law enforcement that may interfere with people’s fundamental rights (e.g. evaluation of the reliability of evidence).

      • Migration, asylum, and border control management (e.g., verification of authenticity of travel documents).

      • Administration of justice and democratic processes (e.g., applying the law to a concrete set of facts).

The European Parliament proposes to expand the list in Annex III, classifying the following AI systems (among others) as high risk:

  • Biometrics AI – “AI systems intended to be used to make inferences about personal characteristics of natural persons on the basis of biometric or biometrics-based data, including emotion recognition systems, with the exception of those mentioned in Article 5; Point 1 shall not include AI systems intended to be used for biometric verification whose sole purpose is to confirm that a specific natural person is the person he or she claims to be.”

  • Education AI – “AI systems intended to be used for monitoring and detecting prohibited behaviour of students during tests in the context of or within education and vocational training institutions”.

  • Law enforcement AI – “AI systems intended to be used by or on behalf of law enforcement authorities or by Union agencies, offices or bodies in support of law enforcement authorities for profiling of natural persons as referred to in Article 3(4) of Directive (EU) 2016/680 in the course of detection, investigation or prosecution of criminal offences or, in the case of Union agencies, offices or bodies, as referred to in Article 3(5) of Regulation (EU) 2018/1725”.

  • Subliminal AI – “AI systems intended to be used by social media platforms that have been designated as very large online platforms within the meaning of Article 33 of Regulation (EU) 2022/2065, in their recommender systems to recommend to the recipient of the service user-generated content available on the platform”.

Under the rules proposed by the European Parliament, providers of certain AI systems may rebut the presumption that the system should be considered a high-risk AI system:

“In addition to the high-risk AI systems referred to in paragraph 1, AI systems falling under one or more of the critical areas and use cases referred to in Annex III shall be considered high-risk if they pose a significant risk of harm to the health, safety, or fundamental rights of natural persons. Where an AI system falls under Annex III point 2, it shall be considered to be high-risk if it poses a significant risk of harm to the environment.”

Relying on this exemption would require submitting a reasoned notification to the national supervisory authority or, where the system is intended for use in two or more Member States, to the AI Office; the authority shall review the notification and reply within three months if it deems the AI system to be misclassified.

“Where providers falling under one or more of the critical areas and use cases referred to in Annex III consider that their AI system does not pose a significant risk as described in paragraph 2, they shall submit a reasoned notification to the national supervisory authority that they are not subject to the requirements of Title III Chapter 2 of this Regulation. Where the AI system is intended to be used in two or more Member States, that notification shall be addressed to the AI Office. Without prejudice to Article 65, the national supervisory authority shall review and reply to the notification, directly or via the AI Office, within three months if they deem the AI system to be misclassified.”

High-risk AI systems will be subject to strict obligations before they can be put on the market:

  • Adequate risk assessment and mitigation systems.

  • High quality of the datasets feeding the system to minimise risks and discriminatory outcomes.

  • Logging of activity to ensure traceability of results (an illustrative sketch follows this list).

  • Detailed documentation providing all information necessary on the system and its purpose for authorities to assess its compliance.

  • Clear and adequate information to the user.

  • Appropriate human oversight measures to minimise risk.

  • High level of robustness, security and accuracy.
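
The Act does not specify a logging format for the traceability obligation noted above. As a minimal sketch of what such record-keeping might look like in practice, the following assumes a provider records each automated decision with its inputs, output, model version and timestamp; every name and field in this example is hypothetical rather than taken from the Act.

```python
import json
import logging
from datetime import datetime, timezone

# Hypothetical audit logger; neither the file name nor the JSON layout is mandated.
audit_log = logging.getLogger("ai_audit")
audit_log.setLevel(logging.INFO)
audit_log.addHandler(logging.FileHandler("ai_decisions.log"))

def log_decision(system_id: str, model_version: str, inputs: dict, output: str) -> None:
    """Record one automated decision so its result can later be traced and reviewed."""
    audit_log.info(json.dumps({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "system_id": system_id,
        "model_version": model_version,
        "inputs": inputs,
        "output": output,
    }))

# Example: logging a hypothetical credit-scoring decision.
log_decision("credit-scoring", "2023.06", {"applicant_id": "A-123"}, "declined")
```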

All remote biometric identification systems are considered high risk and subject to strict requirements. The use of remote biometric identification in publicly accessible spaces for law enforcement purposes is, in principle, prohibited.

  • Generative AI systems – The European Parliament’s position further imposes specific requirements on generative AI systems, like ChatGPT, which would have to comply with additional transparency requirements:

    • Disclosing that the content was generated by AI.

    • Designing the model to prevent it from generating illegal content.

    • Publishing summaries of copyrighted data used for training.

  • Limited risk AI systems – Limited risk AI systems should comply with minimal transparency requirements that allow users to make informed decisions. After interacting with the application, the user can then decide whether to continue using it. Users should be made aware when they are interacting with AI. This includes AI systems that generate or manipulate image, audio, or video content – for example, deepfakes.

Adjustments to the Obligations in the Context of High-risk AI Systems

The European Parliament also introduces significant changes to the obligations on providers of high-risk AI systems by, for example, requiring them to:

  • Ensure that natural persons responsible for human oversight of high-risk AI systems are specifically made aware of the risk of automation bias or confirmation bias.

  • Provide specifications for the input data, or any other relevant information in terms of the datasets used, including their limitations and assumptions, taking into account the intended purpose and the reasonably foreseeable misuse of the AI system.

  • Ensure that the high-risk AI system complies with accessibility requirements.

In addition, the European Parliament amended the obligations for users/deployers of high-risk AI systems. Among other things, the European Parliament proposed the following amendments:

  • Requiring users, prior to putting into service or use a high-risk AI system in an employment setting, to “consult workers representatives, inform the affected employees that they will be subject to the system, and obtain their consent”.

  • Requiring users of high-risk AI systems that make decisions or assist in making decisions related to natural persons to “inform the natural persons that they are subject to the use of the high-risk AI system”.

  • Requiring users of high-risk AI systems to conduct a fundamental rights impact assessment:

“(58a) Whilst risks related to AI systems can result from the way such systems are designed, risks can as well stem from how such AI systems are used. Deployers of high-risk AI systems therefore play a critical role in ensuring that fundamental rights are protected, complementing the obligations of the provider when developing the AI system. Deployers are best placed to understand how the high-risk AI system will be used concretely and can therefore identify potential significant risks that were not foreseen in the development phase, due to a more precise knowledge of the context of use, the people or groups of people likely to be affected, including marginalised and vulnerable groups. Deployers should identify appropriate governance structures in that specific context of use, such as arrangements for human oversight, complaint-handling procedures and redress procedures, because choices in the governance structures can be instrumental in mitigating risks to fundamental rights in concrete use-cases. In order to efficiently ensure that fundamental rights are protected, the deployer of high-risk AI systems should therefore carry out a fundamental rights impact assessment prior to putting it into use. The impact assessment should be accompanied by a detailed plan describing the measures or tools that will help mitigate the risks to fundamental rights identified at the latest from the time of putting it into use. If such plan cannot be identified, the deployer should refrain from putting the system into use. When performing this impact assessment, the deployers should notify the national supervisory authority and, to the best extent possible, relevant stakeholders, as well as representatives of groups of persons likely to be affected by the AI system in order to collect relevant information which is deemed necessary to perform the impact assessment and are encouraged to make the summary of their fundamental rights impact assessment publicly available on their online website”.

Fines

The European Parliament’s position substantially amends the fines that can be imposed under the AI Act. The European Parliament proposes that: 

“The following infringements shall be subject to administrative fines of up to €30 million or, if the offender is a company, up to 6% of its total worldwide annual turnover for the preceding financial year, whichever is higher:

  • Non-compliance with the prohibition of the artificial intelligence practices referred to in Article 5.

  • Non-compliance of the AI system with the requirements laid down in Article 10.

The non-compliance of the AI system with any requirements or obligations under this Regulation, other than those laid down in Articles 5 and 10, shall be subject to administrative fines of up to €20 million or, if the offender is a company, up to 4% of its total worldwide annual turnover for the preceding financial year, whichever is higher.

The supply of incorrect, incomplete or misleading information to notified bodies and national competent authorities in reply to a request shall be subject to administrative fines of up to €10 million or, if the offender is a company, up to 2% of its total worldwide annual turnover for the preceding financial year, whichever is higher.”
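
By way of arithmetic illustration only, each quoted cap is the higher of a fixed amount and a percentage of worldwide annual turnover. The sketch below applies the quoted figures to an invented company; the turnover value is hypothetical.

```python
def fine_cap(fixed_cap_eur: int, turnover_share: float, annual_turnover_eur: int) -> float:
    """Maximum administrative fine: the higher of the fixed cap and a share of turnover."""
    return max(fixed_cap_eur, turnover_share * annual_turnover_eur)

# Infringements of Articles 5 or 10: up to EUR 30m or 6% of worldwide turnover.
# For a hypothetical company with EUR 2bn in annual turnover:
print(fine_cap(30_000_000, 0.06, 2_000_000_000))  # 120000000.0, i.e. a EUR 120m cap
```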

The AI Act and Financial Services

The financial services sector occupies a grey area in the AI Act’s list of risky industries. This should be clarified in further trialogue discussions between the Council, the Parliament and the Commission. In particular:

  • Finance is not included among the high-risk systems in Annexes II and III.

  • “Credit institutions,” or banks, are referenced in various sections.

    • “Recital (80) Union law on financial services includes internal governance and risk management rules and requirements which are applicable to regulated financial institutions in the course of provision of those services, including when they make use of AI systems. In order to ensure coherent application and enforcement of the obligations under this Regulation and relevant rules and requirements of the Union financial services law, the competent authorities responsible for the supervision and enforcement of the financial services law, including, where applicable, the European Central Bank, should be designated as competent authorities for the purpose of supervising the implementation of this Regulation, including for market surveillance activities, as regards AI systems provided or used by regulated and supervised financial institutions. To further enhance the consistency between this Regulation and the rules applicable to credit institutions regulated under Directive 2013/36/EU of the European Parliament and of the Council, it is also appropriate to integrate the conformity assessment procedure and some of the providers’ procedural obligations in relation to risk management, post-marketing monitoring and documentation into the existing obligations and procedures under Directive 2013/36/EU. In order to avoid overlaps, limited derogations should also be envisaged in relation to the quality management system of providers and the monitoring obligation placed on deployers of high-risk AI systems to the extent that these apply to credit institutions regulated by Directive 2013/36/EU.”

    • Further requirements can be found in Article 9 – “For providers and AI systems already covered by Union law that require them to establish a specific risk management, including credit institutions regulated by Directive 2013/36/EU, the aspects described in paragraphs 1 to 8 shall be part of or combined with the risk management procedures established by that Union law.”

Therefore, the AI Act sets out more general regulatory compliance requirements that are also applicable to financial services, among them:

  • Ongoing risk-management processes.

  • Data and data governance requirements.

  • Technical documentation and record-keeping.

  • Transparency and provision of information to users.

  • Knowledge and competence.

  • Accuracy, robustness, and cybersecurity.

Such requirements will need to be combined with the existing requirements under EU and national financial regulatory law.

Next Steps

Following the Parliament’s vote, trialogue discussions between the Council, the Parliament and the Commission have now commenced. It is difficult to predict how long the process will take before a political agreement on the AI Act can be reached, but, conceivably, it could be before the end of 2023 or early in 2024. The Parliament and the Commission both propose a transition period of two years following the entry into force of the AI Act before most of its provisions would apply (the Council has proposed to extend this to three years). That said, in discussions around the plenary vote, the co-rapporteurs mentioned the idea of an implementation timeframe shortened by six months for some systems, such as foundation models and generative AI.

Once enacted, the AI Act will be one piece of a broader regulatory landscape expected for AI, alongside laws such as the GDPR, the proposed AI Liability Directive and the proposed revision of the Product Liability Directive.

What Should Financial Institutions Be Doing?

  • Defining their AI and tech strategy and governance, building on and seeking to leverage existing control frameworks.

  • Mapping their existing and expected development and use of AI.

  • Monitoring the key AI regulatory, policy and market developments.

  • Assessing the impact of the AI Act and other key frameworks and laws around the world on their AI projects and plans.

Reactions

Some of the biggest companies in Europe have taken collective action against the AI Act as adopted by the European Parliament, claiming that it is ineffective and could negatively impact competition. In an open letter sent to the European Parliament, the Commission, and member states, over 150 executives criticised the AI Act for its potential to “jeopardise Europe’s competitiveness and technological sovereignty”.
