On 6 June 2025, the European Commission launched a targeted stakeholder consultation on the classification and regulation of high-risk artificial intelligence (AI) systems under the EU Artificial Intelligence Act (AI Act). The consultation is a critical opportunity for stakeholders in the life sciences and health care sectors to shape the forthcoming Commission guidelines on high-risk AI systems.
Regulatory Framework for High-Risk AI Systems
The AI Act, adopted in 2024, establishes a harmonized legal framework for the development, placing on the market, and use of AI in the EU. It adopts a risk-based approach, classifying AI systems into four categories: unacceptable risk (prohibited), high risk, limited risk, and minimal risk.
Of particular importance to the life sciences and health care sectors are high-risk AI systems, which are subject to an extensive compliance regime. That regime imposes legal requirements covering risk management; data and data governance; technical documentation; record-keeping; transparency; human oversight; and accuracy, robustness, and cybersecurity.
Article 6 of the AI Act defines two categories of high-risk AI systems:
- Embedded AI systems: AI systems that are safety components of products, or are themselves products, governed by EU harmonized legislation such as the Medical Device Regulation (MDR) or the In Vitro Diagnostic Medical Devices Regulation (IVDR). These systems are deemed high-risk by virtue of their integration into regulated products (e.g., medtech offerings such as medical imaging, surgical robots, wearables, and diagnostic software). Art. 6(1), Annex I.
- Standalone systems: AI systems that, based on their intended purpose, pose significant risks to health, safety, or fundamental rights in specific use cases listed in Annex III (e.g., AI used in health care for patient triage). Art. 6(2), Annex III.
The Role of the Commission’s Guidelines
While the AI Act provides the legal framework, the Commission's guidelines will be essential in clarifying how the high-risk regime is to be interpreted and applied in practice. This is particularly important for regulatory grey areas, such as AI used in medical devices, digital therapeutics, clinical research, or health care administration, where the risk profile may not be immediately evident. Although not binding on courts, these guidelines will be critical for life sciences and health care companies in determining whether AI systems fall within the high-risk category and what compliance obligations apply.
Key Aspects of the Consultation Relevant to Life Sciences and Health Care Businesses
The consultation addresses several issues that will be highly relevant for stakeholders in the life sciences and health care sectors:
- Clarification of High-Risk Classification Rules: The consultation seeks input on the classification mechanism set forth in Art. 6. Stakeholders are invited to provide practical examples of AI applications and their potential impact on health and safety, which will help refine the scope of high-risk classifications.
- Requirements and Obligations for High-Risk Systems: The consultation explores how the mandatory requirements for high-risk AI systems, such as risk management, data governance, transparency, human oversight, and robustness, should be interpreted in practice. For health care companies, this includes ensuring that AI systems are trained on representative and high-quality datasets, and that outputs are explainable and auditable.
- Value Chain Responsibilities: A key focus is the allocation of responsibilities among different actors in the AI value chain, including developers, deployers, importers, and distributors. The consultation seeks views on how these roles should be defined and what obligations each party should be subject to, particularly in complex ecosystems where AI components are developed and integrated by different entities.
- Practical Implementation Challenges: The Commission is also gathering feedback on practical challenges companies face in implementing the AI Act, including overlaps with existing sectoral regulations (e.g., the Medical Device Regulation), the burden of conformity assessments, and the availability of notified bodies.
Benefits of Policy Engagement
The Commission's consultation represents a critical opportunity for life sciences and health care companies to shape the implementation of the AI Act. By contributing practical insights and highlighting sector-specific challenges, stakeholders can help ensure that the forthcoming guidelines are both effective and proportionate. Companies developing or deploying AI systems in health care should assess their portfolios against the AI Act's high-risk classification criteria and consider submitting feedback before the consultation closes on 18 July 2025.
Stakeholders can access the consultation via the EU survey portal.