On August 1, 2024, the EU Artificial Intelligence Act (AI Act) entered into force. The AI Act introduces a risk-based legal framework that sorts AI into four main categories: (i) prohibited AI systems, (ii) high-risk AI systems, (iii) AI systems subject to specific transparency requirements, and (iv) general-purpose AI models.
The AI Act applies to companies located in the EU and also has extraterritorial reach. It applies to so-called AI “providers,” i.e., companies that develop AI systems (including general-purpose AI models) and place them on the EU market, or put them into service in the EU, under their own name or trademark, irrespective of the provider’s location. The AI Act further applies wherever the output of an AI system is used in the EU, regardless of where the provider or deployer of the AI system is located.
The AI Act’s obligations become applicable in phases. The provisions on prohibited AI systems and AI literacy (see below) will become applicable on February 2, 2025. Specific obligations for general-purpose AI models will become applicable on August 2, 2025. Most other obligations under the AI Act, including the rules for the high-risk AI systems listed in Annex III and for systems subject to specific transparency requirements, will become applicable on August 2, 2026. The remaining provisions, including the rules for high-risk AI systems that are regulated products or safety components of products under Annex I (such as AI-enabled medical devices), will become applicable on August 2, 2027.
How the AI Act Applies to Medical Devices
The AI Act introduces obligations for high-risk AI systems that require preparation, implementation and ongoing oversight. Under Article 6(1) and Annex I of the AI Act, high-risk AI systems include AI systems that are intended to be used as a safety component of a product, or that are themselves a product, covered by the EU Medical Devices Regulation (2017/745) or the EU In Vitro Diagnostic Medical Devices Regulation (2017/746), where that product is required to undergo a third-party conformity assessment under those regulations.
As a result, AI-enabled medical devices will generally qualify as high-risk AI systems under the AI Act. Companies that manufacture, place on the market or use AI-enabled medical devices in the EU must comply with the AI Act’s requirements for high-risk AI systems.
Key Obligations Regarding High-Risk AI Systems in the Medical Devices Context
Under the AI Act, providers of regulated medical devices with AI functionality will need to comply with certain obligations prior to placing such medical devices on the EU market or putting them into service under their own name or trademark, including:
- Technical Documentation: Providers must maintain comprehensive technical documentation of the AI-enabled functionality of the medical device (e.g., design specifications, performance testing results, system architecture).
- Transparency and Provision of Information to Deployers: Providers must ensure that deployers of their AI-enabled medical devices receive adequate information on how to operate the device safely, including detailed instructions for use, the system’s intended purpose, and any limitations or potential risks associated with the AI system.
- Quality Management System: Providers must implement a quality management system (QMS) that ensures compliance with the AI Act. The QMS should be documented in clear policies, procedures and instructions.
- Incident Reporting: Providers must report certain serious incidents to the relevant Market Surveillance Authorities (MSAs) within 15 days of becoming aware of the incident; shorter deadlines apply in certain cases, such as incidents involving critical infrastructure or the death of a person. A serious incident is an incident or malfunctioning of an AI system that directly or indirectly leads to: the death of a person or serious harm to a person’s health; a serious and irreversible disruption of the management or operation of critical infrastructure; an infringement of obligations under EU law intended to protect fundamental rights; or serious harm to property or the environment.
- Conformity Assessments: Providers must conduct comprehensive conformity assessments to ensure that their high-risk AI systems meet the technical, legal, and safety requirements of the AI Act before placing them on the market or putting them into service. For AI-enabled medical devices, this assessment is carried out as part of the existing conformity assessment procedure under the Medical Devices Regulation or the IVD Regulation.
- Post-Market Monitoring Procedures: Providers must establish post-market monitoring procedures to track the performance of their high-risk AI systems after they have been placed on the market. This includes collecting and reviewing data on system performance, identifying emerging risks, and updating the AI models as needed.
- AI Literacy: Effective February 2, 2025, providers and deployers of AI systems must ensure that staff members and other persons dealing with the operation and use of AI systems on their behalf have a sufficient level of AI literacy, tailored to their technical knowledge, experience, education and training, and the context in which the AI systems are used. Robust, up-to-date AI training programs should be implemented to meet this requirement. Note that AI literacy is a general requirement that applies to all AI systems, not only high-risk AI systems.
If an organization deploys AI-enabled medical devices procured from a provider, such as a health care facility that procures AI-enabled medical devices from a vendor, the deployer has its own obligations under the AI Act. Among other things, the deployer must:
- ensure human oversight of the use of the AI-enabled medical device;
- inform individuals, such as patients, that the medical device has AI functionality;
- to the extent it exercises control over the input data, ensure that the data is relevant and sufficiently representative in view of the device’s intended purpose;
- monitor the operation of the AI system in accordance with the provider’s instructions for use;
- inform the provider and the relevant authorities if it identifies any risks posed by the AI system; and
- conduct a data protection impact assessment where the AI-enabled medical device processes personal data, such as patient information.
Non-compliance with the AI Act can result in complaints, investigations, fines (of up to EUR 35 million or 7% of worldwide annual turnover for the most serious violations), litigation, operational restrictions, and reputational damage. In addition, the GDPR continues to apply in parallel where AI systems process personal data.
Conclusion
Organizations that develop or use AI-enabled medical devices should take proactive steps to review their AI practices against the new requirements of the AI Act and implement compliance measures tailored to their role as provider or deployer. Doing so is necessary both to comply with the new law and to build trust with healthcare providers and patients regarding the use of AI functionality in medical devices.