AI Drug Development: FDA Releases Draft Guidance
Wednesday, January 15, 2025

On January 6, 2025, the U.S. Food and Drug Administration (FDA) released draft guidance titled Considerations for the Use of Artificial Intelligence To Support Regulatory Decision-Making for Drug and Biological Products (“guidance”), explaining the types of information the agency may seek during drug evaluation. In particular, the guidance outlines a risk framework based on the “context of use” of an artificial intelligence (AI) technology and details the information that might be requested (or required) about AI technologies, the data used to train them, and the governance around them, in order to approve their use. At a high level, the guidance underscores the FDA’s goal of establishing AI model credibility within the context of use.

This article provides an overview of the guidance, including example contexts of use and the risk framework, and explains how these relate to establishing AI model credibility through the suggested data- and model-related disclosures. It also details legal strategy considerations, along with opportunities for innovation, that arise from the guidance. These considerations will be valuable to sponsors (i.e., of clinical investigations, such as Investigational New Drug (IND) applications), along with AI model developers and other firms in the drug development landscape.

Defining the Question of Interest

The first step in the guidance’s framework is defining the “question of interest”: the specific question, decision, or concern being addressed by the AI model. For example, questions of interest could involve the use of AI technology in human clinical trials, such as inclusion and exclusion criteria for the selection of participants, risk classification of participants, or determining procedures relating to clinical outcome measures of interest. Questions of interest could also relate to the use of AI technology in drug manufacturing processes, such as for quality control.

Contexts of Use

The guidance next establishes contexts of use – the specific scope and role of an AI model in addressing the question of interest – as a starting point for understanding the risks associated with the AI model and, in turn, how credibility might be established.

The guidance emphasizes that it is limited to AI models (including those used in drug discovery) that impact patient safety, drug quality, or the reliability of results from nonclinical or clinical studies. As such, firms that use AI models for discovering drugs but rely on more traditional processes to address the factors that the FDA considers in approving a drug, such as safety, quality, and stability, should be aware of the guidance’s underlying principles but might not need to modify their current AI governance. An important factor in defining the context of use is how large a role the AI model plays relative to other automated or human-supervised processes; for example, processes in which a person is given AI outputs for verification differ from those designed to be fully automated.

Several types of contexts of use are introduced in the guidance, including:

  1. Clinical trial design and management
  2. Evaluating patients
  3. Adjudicating endpoints
  4. Analyzing clinical trial data
  5. Digital health technologies for drug development
  6. Pharmacovigilance
  7. Pharmaceutical manufacturing
  8. Generating real-world evidence (RWE)
  9. Life cycle maintenance

Risk Framework for Determining Information Disclosure Degree

The guidance proposes that the risk level posed by the AI model dictates the extent and depth of information that must be disclosed about the AI model. The risk is determined based on two factors: 1) how much the AI model will influence decision-making (model influence risk), and 2) the consequences of the decision, such as patient safety risks (decision consequence risk).

For high-risk AI models—where outputs could impact patient safety or drug quality—comprehensive details regarding the AI model’s architecture, data sources, training methodologies, validation processes, and performance metrics may have to be submitted for FDA evaluation. Conversely, the required disclosure may be less detailed for AI models posing low risk. This tiered approach promotes credibility and avoids unnecessary disclosure burdens for lower-risk scenarios.
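
The guidance does not reduce this determination to a formula, but the interplay of the two risk factors can be illustrated with a minimal Python sketch (the tier names, the conservative max-combination rule, and the disclosure mapping below are all hypothetical illustrations, not FDA-prescribed rules):

```python
from enum import IntEnum

class Level(IntEnum):
    LOW = 1
    MEDIUM = 2
    HIGH = 3

def model_risk(model_influence: Level, decision_consequence: Level) -> Level:
    # Illustrative, conservative combination: the higher of the two
    # factors drives the overall model risk tier.
    return max(model_influence, decision_consequence)

# Hypothetical mapping from risk tier to expected disclosure depth.
DISCLOSURE_DEPTH = {
    Level.LOW: "summary-level description of the model and its role",
    Level.MEDIUM: "model description plus training data and evaluation summaries",
    Level.HIGH: "full architecture, data sources, training methodology, "
                "validation processes, and performance metrics",
}

risk = model_risk(Level.HIGH, Level.MEDIUM)
print(f"Model risk: {risk.name} -> disclose: {DISCLOSURE_DEPTH[risk]}")
```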

However, most AI models within the scope of this guidance will likely be considered high risk because they are used for clinical trial management or drug manufacturing, so stakeholders should be prepared to disclose extensive information about any AI model used to support decision-making. Sponsors that use traditional (non-AI) methods to develop their drug products must submit complete nonclinical, clinical, and chemistry, manufacturing, and controls (CMC) information to support FDA review and ultimate approval of a New Drug Application. Sponsors using AI models must submit the same information and, in addition, provide information on the AI model as outlined below.

High-Level Overview of Guidelines for Compliance Depending on Context of Use

The guidance further provides a detailed outline of steps to pursue in order to establish the credibility of an AI model, given its context of use. The steps include describing: (1) the model, (2) the data used to develop the model, (3) model training, and (4) model evaluation, including test data, performance metrics, and reliability concerns such as bias, quality assurance, and code error management. Sponsors may be expected to provide more detailed disclosures as the risks associated with these steps increase, particularly where the impact on trial participants and/or patients increases.

In addition, the FDA specifically emphasizes special consideration for life cycle maintenance of the credibility of AI model outputs. For example, as the inputs to or deployment of a given AI model changes, there may be a need to reevaluate the model’s performance (and thus provide corresponding disclosures to support continued credibility).
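
The guidance does not prescribe particular monitoring techniques; as one common illustration, the population stability index (PSI) can flag when a model input’s distribution in deployment has shifted away from its distribution at validation time, prompting reevaluation. The sketch below is a minimal example (the 0.2 alert threshold is a widely used rule of thumb, not an FDA figure):

```python
import numpy as np

def population_stability_index(baseline: np.ndarray,
                               current: np.ndarray,
                               n_bins: int = 10) -> float:
    """PSI between a feature's validation-time and deployment distributions."""
    # Bin edges come from the baseline (validation-time) distribution.
    edges = np.quantile(baseline, np.linspace(0, 1, n_bins + 1))
    # Clip so out-of-range deployment values land in the edge bins.
    baseline = np.clip(baseline, edges[0], edges[-1])
    current = np.clip(current, edges[0], edges[-1])
    base_frac = np.histogram(baseline, bins=edges)[0] / len(baseline)
    curr_frac = np.histogram(current, bins=edges)[0] / len(current)
    # Guard against empty bins before taking logs.
    base_frac = np.clip(base_frac, 1e-6, None)
    curr_frac = np.clip(curr_frac, 1e-6, None)
    return float(np.sum((curr_frac - base_frac) * np.log(curr_frac / base_frac)))

rng = np.random.default_rng(0)
baseline = rng.normal(0.0, 1.0, 5_000)  # feature values at validation
current = rng.normal(0.4, 1.2, 5_000)   # shifted values in deployment
psi = population_stability_index(baseline, current)
if psi > 0.2:  # common rule-of-thumb alert threshold
    print(f"PSI={psi:.3f}: drift detected; reevaluate the model and "
          f"update credibility disclosures")
```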

Intellectual Property Considerations

Patent vs. Trade Secret

Stakeholders should carefully consider patenting the innovations underlying AI models used for decision-making. The FDA’s extensive expectations for transparency and for submission of information about AI model architectures, training data, evaluation processes, and life cycle maintenance plans would pose a significant challenge to maintaining these innovations as trade secrets.

That said, trade secret protection for at least some aspects of AI models remains an option when the AI model does not have to be disclosed. If the AI model is used for drug discovery or for operations that do not impact patient safety or drug quality, it may be possible to keep the AI model or its training data secret. However, AI models used for decision-making will be subject to the FDA’s transparency and disclosure expectations, which will likely jeopardize trade secret protection. By securing patent protection on the AI models, stakeholders can safeguard their intellectual property while satisfying the FDA’s transparency requirements.

Opportunities for Innovation

The guidance calls for rigorous risk assessments, data fitness standards, and model validation processes, which will set the stage for the creation of tools and systems to meet these demands. As noted above, innovative approaches for managing and validating AI models used for decision-making are not good candidates for trade secret protection, so stakeholders should ensure early identification and patenting of these inventions.

We have identified specific opportunities for AI innovation that are likely to be driven by FDA demands reflected in the guidance:

  • Requirements for transparency
    1. Designing AI models with explainable AI capabilities that demonstrate how decisions or predictions are made
  • Bias and fitness of data
    1. Systems for detecting bias in training data
    2. Systems for correcting bias in training data
  • Systems for monitoring life cycle maintenance
    1. Systems to detect data drift or changes in the AI model during the life cycle of the drug
    2. Systems to retrain or revalidate the AI model as needed because of data drift
    3. Automated systems for tracking model performance
  • Testing methods
    1. Developing models that can be tested against independent data sets and conditions to demonstrate generalizability
  • Integration of AI models in a practical workflow
    1. Good Manufacturing Practices
    2. Clinical decision support systems
  • Documentation systems
    1. Automatic systems to generate reports of model development, evaluation, updates, and credibility assessments that can be submitted to the FDA to meet regulatory requirements (a minimal sketch follows this list)
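
As one illustration of the documentation-systems category above, a structured record can be rendered into a submission-ready summary automatically. This is a minimal sketch; the field names and all values are hypothetical stand-ins for the disclosure elements discussed earlier, not a prescribed FDA format:

```python
import json
from dataclasses import dataclass, field, asdict
from datetime import date

@dataclass
class CredibilityReport:
    """Hypothetical record mirroring the disclosure elements discussed above."""
    model_name: str
    question_of_interest: str
    context_of_use: str
    model_description: str
    training_data_sources: list[str]
    evaluation_metrics: dict[str, float]
    known_limitations: list[str] = field(default_factory=list)
    report_date: str = field(default_factory=lambda: date.today().isoformat())

# All values below are illustrative placeholders.
report = CredibilityReport(
    model_name="participant-risk-classifier-v2",
    question_of_interest="Which enrolled participants need enhanced monitoring?",
    context_of_use="Risk classification of trial participants, with human review",
    model_description="Gradient-boosted trees over baseline labs and vitals",
    training_data_sources=["De-identified baseline data from a prior Phase 2 trial"],
    evaluation_metrics={"auroc": 0.87, "sensitivity": 0.91},
    known_limitations=["Not validated for pediatric populations"],
)
print(json.dumps(asdict(report), indent=2))  # machine-readable report for review
```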

The guidance presents numerous opportunities for innovations that enhance AI credibility, transparency, and regulatory compliance across the drug product life cycle. As demonstrated above, the challenges that the FDA seeks to address in order to validate AI use in drug development map clearly to potential innovations. Such innovations are likely to be valuable, both because they will be needed to comply with FDA guidelines and because they offer significant opportunities for developing a competitive patent portfolio.

Conclusion

With this guidance, the FDA has proposed guidelines for establishing the credibility of AI models whose use poses risks for, and impacts on, clinical trial participants and patients. The guidance, while in draft, non-binding form, follows a step-by-step framework: defining the question of interest, establishing the context of use of the AI model, evaluating risks, and, in turn, determining the scope of disclosure that may be relevant. The guidance sets out the FDA’s most current thinking about the use of AI in drug development. Given such a framework and the corresponding level of disclosure that can be expected, sponsors may consider shifting strategy towards more patent protection for their innovations. Similarly, there may be more opportunities for identifying and protecting innovations associated with building governance around these models.

In addition to using IP protection as a backstop against greater disclosure, firms can consider introducing more operational controls to mitigate the risks associated with AI model use and thus reduce their disclosure burden. For example, firms may consider supporting AI model credibility with other evidence sources, as well as integrating greater human engagement and oversight into their processes.

In the meantime, sponsors that are uncertain about how their AI model usage might interact with future FDA requirements should consider the engagement options that the FDA has outlined for their specific context of use.

Comments on the draft guidance can be submitted online or mailed before April 7, 2025, and our team is available to assist interested stakeholders with drafting.
