OHRP Workshop Highlights Artificial Intelligence Uses, Concerns in Human Research
Wednesday, October 9, 2024

The Department of Health and Human Services (HHS) Office for Human Research Protections (OHRP) recently held its 2024 Exploratory Workshop titled “The Evolving Landscape of Human Research with AI – Putting Ethics to Practice” (the Workshop). The Workshop broadly covered current uses of artificial intelligence (AI) across human subject research, as well as the legal and ethical considerations those uses raise. Although the individual presentations and panel discussions covered a range of topics and raised a number of interesting questions and hypotheticals, the panelists did not draw specific conclusions or reach consensus on next steps to address the critical issues. Even so, the panelists offered crucial insights into issues that companies and regulators must grapple with as use cases for AI in human research expand and rules governing those uses take shape. We have put together some key takeaways from the Workshop below:

AI Use Cases in Research Are Evolving

One of the key themes of the panelists’ discussion was that applications for AI in human research are advancing rapidly, which means that ethical considerations and governance controls for new use cases often lag significantly behind implementation. Workshop attendees were likely most familiar with uses of AI that facilitate laborious research tasks, such as site feasibility studies, contracting and budgeting, or protocol development. However, one panelist, Craig Lipset, Co-Chair of the Decentralized Trials & Research Alliance, reminded the audience that current use cases go well beyond these well-known examples. They extend to uses like automated study building, in which a “digital” protocol can serve as a dynamic data source and the seed for downstream automation of research tasks, from formulating the methods for analyzing study data (i.e., statistical analysis plans) all the way through directing study conduct and generating study report submissions. Other examples discussed by the panelists included automated chart review, quality oversight and signal detection, automation of quality reviews, and the use of “digital twins” of patients. Each of these use cases involves overlapping but distinct legal and ethical issues.
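
To make the "digital protocol" idea concrete, here is a minimal Python sketch of how a machine-readable protocol might seed downstream automation, such as an outline for a statistical analysis plan. All class names, fields, and values are hypothetical illustrations, not anything presented at the Workshop:

```python
from dataclasses import dataclass, field

@dataclass
class Endpoint:
    name: str             # e.g., "change in HbA1c from baseline"
    measure_type: str     # "continuous" or "binary"
    timepoint_weeks: int  # when the endpoint is assessed

@dataclass
class DigitalProtocol:
    study_id: str
    arms: list[str]
    sample_size_per_arm: int
    endpoints: list[Endpoint] = field(default_factory=list)

    def seed_sap_skeleton(self) -> str:
        """Derive a statistical-analysis-plan outline from the protocol itself."""
        lines = [f"SAP skeleton for {self.study_id}"]
        for ep in self.endpoints:
            test = "two-sample t-test" if ep.measure_type == "continuous" else "chi-squared test"
            lines.append(f"- {ep.name} at week {ep.timepoint_weeks}: {test}, "
                         f"n={self.sample_size_per_arm} per arm ({len(self.arms)} arms)")
        return "\n".join(lines)

protocol = DigitalProtocol(
    study_id="HYPO-001",
    arms=["placebo", "investigational"],
    sample_size_per_arm=150,
    endpoints=[Endpoint("change in HbA1c from baseline", "continuous", 26)],
)
print(protocol.seed_sap_skeleton())
```

Because the protocol lives as structured data rather than a static document, each downstream artifact can be regenerated whenever the protocol changes, which is the sense in which it is "dynamic."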

In particular, the use of digital twins is a complex application of AI in the context of human research and its ethical considerations. In a discussion paper on the use of AI/ML in the development of drug and biological products, the Food & Drug Administration (FDA) defined a digital twin as “an integrated multi-physics, multiscale, probabilistic simulation of a complex system that uses the best available data, sensors, and models to mirror the behavior of its corresponding twin.” In creating digital twins of patients, organizations can develop representations or replicas of individuals that dynamically reflect molecular and physiological status over time, ultimately creating a record describing what might have happened to an individual if they had, for example, received a placebo instead of an investigational medicine. The use of digital twins also raises some interesting consent questions, which we address further below.
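
As a rough intuition for that counterfactual idea, the following toy Monte Carlo sketch simulates what a single biomarker might have done for a patient's "twin" under placebo. It is a deliberately simplistic, one-dimensional stand-in for the multi-physics, multiscale simulations the FDA describes; the baseline, drift, and noise values are all invented:

```python
import random

def simulate_placebo_twin(baseline: float, weeks: int, drift: float = -0.01,
                          noise_sd: float = 0.05, n_runs: int = 1000,
                          seed: int = 0) -> list[float]:
    """Average of many random biomarker trajectories for a 'twin' on placebo,
    given an assumed natural-history drift per week."""
    rng = random.Random(seed)
    totals = [0.0] * weeks
    for _ in range(n_runs):
        value = baseline
        for week in range(weeks):
            value += drift + rng.gauss(0, noise_sd)
            totals[week] += value
    return [t / n_runs for t in totals]

# Predicted counterfactual: what the biomarker might have done on placebo.
trajectory = simulate_placebo_twin(baseline=7.8, weeks=12)
print([round(v, 2) for v in trajectory])
```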

This presentation served as a reminder that organizations should consider AI legal and ethical issues on a use-case-by-use-case basis rather than through one-size-fits-all policies and regulations. While organizations have used AI to create administrative efficiencies in research studies for years, these uses now extend to more material aspects of study creation and conduct. The broader use of AI to facilitate and drive such aspects of research studies requires organizations to adjust how they assess the challenges of each use case and to implement appropriate design and governance controls.

Does Data De-Identification Matter Anymore?

Multiple presenters noted that the principles of autonomy and self-determination embedded within the Common Rule, the Belmont Report, and the Declaration of Helsinki make privacy and confidentiality a primary concern when using AI as part of human subject research. AI model development in research, in particular, presents serious privacy, confidentiality, and transparency challenges. Commenting on this, Benjamin C. Silverman, M.D., Senior IRB Chair in the Human Research Affairs department at Mass General Brigham, said that using AI for essentially any large-scale project requires massive datasets, often containing patient-specific data, to build, train, and validate AI models, as well as the sharing and combining of those datasets.

As a general matter, organizations, IRBs, and regulators have long leaned on the concept of data de-identification when allowing for the ethical use and sharing of individual data. As we discussed in our Health Care Privacy and Security in 2024: Six Critical Topics to Watch post, however, it is easier than ever to combine individually de-identified datasets with other datasets. The need for high volumes of data, combined with technological advancements in data collection, storage, access, and use, further complicates things from a privacy perspective. Neural networks are particularly susceptible to memorizing training data, and AI enables enhanced pattern recognition and faster, more accurate record matching. AI’s ability to make connections between separate databases also makes reliance on data de-identification more difficult as a true privacy protection mechanism. These issues are particularly relevant to human subject research because the line between identifiable and non-identifiable data becomes much harder to draw; as a result, an IRB may no longer be able to clearly determine what constitutes identifiable information and, therefore, whether an activity meets the definition of human subject research under the Common Rule.
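
A toy example illustrates why de-identification alone can be a weak protection: two datasets that each omit names can still be joined on quasi-identifiers such as ZIP code, birth year, and sex. All records below are invented:

```python
# "De-identified" study data: no names, but quasi-identifiers remain.
deidentified_study = [
    {"zip": "02139", "birth_year": 1984, "sex": "F", "diagnosis": "T2DM"},
    {"zip": "60614", "birth_year": 1971, "sex": "M", "diagnosis": "CHF"},
]
# A separate, public dataset that does contain names.
public_roster = [
    {"name": "J. Doe", "zip": "02139", "birth_year": 1984, "sex": "F"},
    {"name": "R. Roe", "zip": "60614", "birth_year": 1971, "sex": "M"},
]

def quasi_key(record: dict) -> tuple:
    """Linkage key built from quasi-identifiers shared by both datasets."""
    return (record["zip"], record["birth_year"], record["sex"])

roster_index = {quasi_key(r): r["name"] for r in public_roster}
for row in deidentified_study:
    name = roster_index.get(quasi_key(row))
    if name:
        print(f"{name} re-identified with diagnosis {row['diagnosis']}")
```

AI-driven matching only sharpens this: fuzzy, probabilistic linkage can succeed even when the quasi-identifiers do not align exactly.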

New Technologies Pose Novel Consent Questions

Regarding the use of digital twins, Lipset posed the question of what happens if an individual’s digital twin is invited to participate in a research study without that individual’s explicit consent. Would standard consent requirements under the Common Rule and HIPAA apply? A 2023 Government Accountability Office (GAO) report discussed how digital twins may help facilitate predictive and personalized medicine to improve patient outcomes and reduce some health care costs. The same report, however, referenced possible ethical and privacy concerns that could undermine public trust, citing a hypothetical example of a pharmaceutical company selling health-related data from digital twins without consent.

With the understanding that the Common Rule and HIPAA do not have informed consent requirements tailored to AI, Sara Gerke, Associate Professor of Law and Richard W. & Marie L. Corman Scholar at the University of Illinois Urbana-Champaign College of Law, discussed how consent to the use of AI could apply to clinical trials. She noted that it would be useful for participants to be informed, in understandable language, of what AI technology is being used and how it applies to their personal data, the type of AI model being used, and whether the algorithm was trained on representative data.
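
On the "representative data" point, here is a minimal sketch of a check that compares the demographic makeup of a model's training data against the intended patient population and flags underrepresented groups. The group labels, proportions, and tolerance threshold are all hypothetical:

```python
# Hypothetical proportions for the intended patient population and the
# data actually used to train a model.
target_population = {"groupA": 0.60, "groupB": 0.25, "groupC": 0.15}
training_data = {"groupA": 0.80, "groupB": 0.15, "groupC": 0.05}

def representativeness_report(training: dict, target: dict,
                              tolerance: float = 0.05) -> list[str]:
    """Flag groups whose training-data share falls short of their
    population share by more than the tolerance."""
    flags = []
    for group, expected in target.items():
        observed = training.get(group, 0.0)
        if expected - observed > tolerance:
            flags.append(f"{group}: {observed:.0%} of training data vs "
                         f"{expected:.0%} of target population")
    return flags

for flag in representativeness_report(training_data, target_population):
    print("Underrepresented ->", flag)
```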

Silverman noted that requiring consent, consistent with Common Rule principles of individual respect and autonomy, would ensure that organizations are transparent about the uses of identifiable data for AI purposes. However, beyond the reality that obtaining informed consent for all identifiable data in these large datasets is impractical, such a requirement may result in less diverse datasets. A recent Office of Inspector General (OIG) report on underrepresented groups in National Institutes of Health-funded clinical trials, for example, indicated that one barrier to broader inclusion of racial and ethnic groups was the cost of providing translated informed consent for patients with limited English proficiency. Another issue that often comes up regarding consent is whether its scope covers the use of AI and includes an ability to opt out. Under the General Data Protection Regulation in the European Union, for instance, there is a general prohibition on automated decision-making that produces legal or similarly significant effects on an individual, subject to an exception for explicit consent. AI makes this exception more complicated because once an individual has consented to their data being used for AI model training, truly opting that individual out (i.e., removing any trace of the individual’s data from the algorithm and training datasets) is quite challenging.
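
The opt-out difficulty shows up even in a trivially small example: once a model has been fitted, deleting an individual's raw record does not remove their influence from the fitted parameters; only refitting on the remaining data (or techniques from the still-maturing research area of "machine unlearning") does. The sketch below uses a sample mean as a stand-in for a trained model; the data are invented:

```python
def fit_mean(values: list[float]) -> float:
    """Stand-in 'model': the fitted parameter is just the sample mean."""
    return sum(values) / len(values)

training = [5.1, 6.3, 4.8, 9.9]  # 9.9 belongs to the opting-out individual
model = fit_mean(training)

# Deleting the raw record does NOT change the already-fitted parameter...
training.remove(9.9)
print(f"stale model still reflects the individual: {model:.2f}")

# ...only refitting on the remaining data actually removes their influence.
retrained = fit_mean(training)
print(f"retrained model after opt-out: {retrained:.2f}")
```

For a neural network trained over many epochs on millions of records, the analogous "refit from scratch" step is far more costly, which is why honoring a true opt-out is so challenging in practice.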

While consent for AI uses in research remains a much-debated topic, increasing patient literacy about the uses of AI in human research can, in the meantime, only build public trust and strengthen the informed consent process going forward.

Conclusion

The Workshop serves as a snapshot of the current tension among the often-competing priorities of innovation, practicality, and regulatory compliance when using AI in human research. It is possible that the continued development of the regulatory environment around AI in health care will help resolve some of the human research issues raised by the panelists. As we close in on the one-year anniversary of the Biden Administration’s Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, federal agencies continue to move forward with their respective tasks and recently reported that they completed all of the Executive Order’s 270-day actions on schedule. More recently, HHS created agency-wide roles of Chief Technology Officer, Chief Data Officer, and Chief AI Officer, while FDA created an Artificial Intelligence Council within the Center for Drug Evaluation and Research (CDER). And some states, such as Colorado, are moving forward with their own regulations to establish standards around the use of AI while federal regulations remain in process. Adding to the complexity are the aforementioned federal privacy laws that were not developed to account for the use of AI, as well as the current patchwork of state privacy laws. For now, many in the industry must grapple with these issues under existing privacy laws while regulations remain in flux, making judgment calls based on the current environment and staying informed of new requirements at the federal and state levels. We will continue to monitor and provide updates around the use of AI in health care.
