FDA Convenes Medical Device Workshop Focused on Artificial Intelligence and Machine Learning Transparency
Tuesday, October 26, 2021

On October 14, 2021, the U.S. Food and Drug Administration (“FDA” or the “Agency”) held a virtual workshop entitled Transparency of Artificial Intelligence (“AI”)/Machine Learning (“ML”)-Enabled Medical Devices. The workshop built upon previous Agency efforts in the AI/ML space.

In 2019, FDA issued a discussion paper and request for feedback entitled Proposed Regulatory Framework for Modifications to AI/ML-Based Software as a Medical Device (“SaMD”). To support continued framework development and to increase collaboration and innovation among key stakeholders and specialists, FDA created the Digital Health Center of Excellence in 2020. Then, in January 2021, FDA published an AI/ML Action Plan based, in part, on stakeholder feedback to the 2019 discussion paper.

The October 2021 workshop sought to expand upon FDA’s current Action Plan. The workshop’s objectives included exploring what transparency means to manufacturers, providers, and patients in relation to AI/ML-enabled medical devices; why such transparency matters; and how it can be achieved. Generally, the workshop emphasized the role transparency plays in creating safer, more effective AI/ML-enabled devices and in establishing trust among the patients and providers using or prescribing such products. Below, we outline other key workshop themes.

Device Data and Product Development

The workshop highlighted the importance of data and product development transparency in fostering trust in AI/ML-enabled medical devices. Some stakeholders, including patients, providers, and software developers, recommended transparency focused on the data sources used to train and create device software, as well as on the data used to validate that software. The workshop also included a discussion of the history of bias in the medical industry and the reasons to include diverse populations in the data used to train and validate software. Stakeholders recommended that FDA and device manufacturers provide information concerning underlying device data, including the demographics of the populations involved in testing and validation of the medical device.

FDA also raised issues regarding software and technological development, specifically asking how much transparency is necessary to create trust between the patient/provider and the device manufacturer. Regardless of whether the device software uses “static” training or “dynamic” training, some patients and providers indicated that they would like insight into the software source code and any modifications made to it. [1] While acknowledging that transparency is important in fostering patient and provider trust, device manufacturers in attendance explained that such transparency can raise important issues related to the protection of proprietary commercial information.

Product Education

At the workshop, certain providers and patients suggested that AI/ML-enabled medical devices include educational programs or materials to provide sufficient transparency. These stakeholders recommended that potential users of AI/ML-enabled medical devices have access to information regarding the risks of the device, instructions on its safe use, and continuing education on how to interpret device data. Further, patients and a data specialist suggested that medical device manufacturers maintain contact with both patients and providers in case of product recalls, device malfunctions, or necessary software updates.

Cost and Accessibility

Certain workshop attendees identified issues associated with device costs and accessibility. For example, some providers and patients shared concerns about insurance coverage of AI/ML-enabled medical devices. These stakeholders suggested that FDA and device manufacturers work closely with insurance companies and hospitals to encourage access to transparent cost and coverage information (e.g., whether an AI/ML-enabled device would involve any follow-up costs for updates or product changes). Other stakeholders expressed a desire for greater insight into product accessibility (e.g., what resources, such as Internet access or a smartphone, a patient would need to use the device properly, and whether the device would generally require close proximity to the patient’s provider or hospital).

Conclusion 

FDA encouraged all interested parties to submit comments concerning the workshop by November 15, 2021. Comments should be submitted to Docket Number FDA-2019-N-1185 at www.regulations.gov/. The Agency explained that public comments would help guide potential next steps in the AI/ML regulatory space. FDA also maintains an up-to-date list of AI/ML-enabled medical devices on its website.

FOOTNOTES

[1] A device with “static” training uses a “locked” algorithm in its software, which yields a consistent output each time an identical input is provided. A device with “dynamic” training uses an “adaptive” or “continuous learning” algorithm, meaning that if the same input is provided, the output may differ from one time to the next as the algorithm continues to learn.
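
The distinction in this footnote can be made concrete with a minimal, purely hypothetical sketch; the class names, parameters, and numbers below are illustrative assumptions and are not drawn from FDA materials or any actual device software. A locked model's parameters never change after deployment, while a continuously learning model updates its parameters as new data arrives, so the same input can produce different outputs over time.

# Hypothetical illustration only -- not FDA guidance or any manufacturer's actual software.
class LockedModel:
    """Static training: parameters are fixed once the device software ships."""

    def __init__(self, weight: float, bias: float):
        self.weight = weight
        self.bias = bias

    def predict(self, x: float) -> float:
        # No internal state changes, so identical inputs always yield identical outputs.
        return self.weight * x + self.bias


class ContinuousLearningModel:
    """Dynamic training: each observed example nudges the parameters (online learning)."""

    def __init__(self, weight: float, bias: float, learning_rate: float = 0.05):
        self.weight = weight
        self.bias = bias
        self.learning_rate = learning_rate

    def predict(self, x: float) -> float:
        return self.weight * x + self.bias

    def update(self, x: float, observed_y: float) -> None:
        # One step of online gradient descent on squared error: the model keeps
        # adapting after deployment, which is what makes its behavior non-static.
        error = self.predict(x) - observed_y
        self.weight -= self.learning_rate * error * x
        self.bias -= self.learning_rate * error


if __name__ == "__main__":
    locked = LockedModel(weight=2.0, bias=1.0)
    adaptive = ContinuousLearningModel(weight=2.0, bias=1.0)

    x = 3.0
    print("locked, call 1:", locked.predict(x))    # same value...
    print("locked, call 2:", locked.predict(x))    # ...every time

    print("adaptive, before update:", adaptive.predict(x))
    adaptive.update(x=3.0, observed_y=5.0)                   # new field data arrives
    print("adaptive, after update: ", adaptive.predict(x))   # same input, different output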
