Artificial Intelligence (AI) can be employed in a healthcare setting for a variety of tasks, from managing electronic health records at a hospital, to conducting market research at a benefits-management organization, to optimizing manufacturing operations at a pharmaceutical company. The level of regulatory scrutiny of such systems depends on their intended use and associated risks.
In the U.S., one of the key regulatory bodies for medical devices that use AI is the Food and Drug Administration (FDA), particularly its Center for Devices and Radiological Health (CDRH). CDRH has long followed a risk-based approach in its regulatory policies and has officially recognized ISO 14971, “Application of Risk Management to Medical Devices.” That standard is now more than 10 years old and is currently undergoing revision, in part to address challenges posed by AI and other digital tools that are flooding the medical-device arena.
One of the areas where AI can provide a tangible improvement in health care is pattern recognition, whether based on a predefined algorithm or, more powerfully, on machine learning. This capability means that in a medical setting, AI could be applied to interpreting images and thus serve as a diagnostic tool. According to the guidance “Software as a Medical Device (SaMD): Clinical Evaluation,” which FDA adopted at the end of last year, software that is intended to “treat or diagnose” is considered higher risk (and is consequently subject to more stringent regulatory oversight) than software that “drives” or “informs” clinical management.
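To make these risk tiers concrete, the short sketch below encodes, as a simple lookup table, the IMDRF SaMD risk categorization that this guidance builds on: risk rises with the significance of the information (inform < drive < treat/diagnose) and with the severity of the healthcare situation. The table reflects the IMDRF framework as commonly summarized; the function name and string keys are illustrative only, not part of any official scheme.

```python
# Sketch: IMDRF SaMD risk categorization as a lookup table.
# Keys are (state of healthcare situation, significance of information);
# values are SaMD categories, with IV the highest risk and I the lowest.
SAMD_CATEGORY = {
    ("critical",    "treat/diagnose"): "IV",
    ("critical",    "drive"):          "III",
    ("critical",    "inform"):         "II",
    ("serious",     "treat/diagnose"): "III",
    ("serious",     "drive"):          "II",
    ("serious",     "inform"):         "I",
    ("non-serious", "treat/diagnose"): "II",
    ("non-serious", "drive"):          "I",
    ("non-serious", "inform"):         "I",
}

def samd_category(situation: str, significance: str) -> str:
    """Return the SaMD category for a given intended use (illustrative)."""
    return SAMD_CATEGORY[(situation.lower(), significance.lower())]

# An AI tool that diagnoses a serious condition falls in category III,
# while one that merely informs clinical management falls in category I.
print(samd_category("serious", "treat/diagnose"))  # III
print(samd_category("serious", "inform"))          # I
```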
Considerations for AI training and testing
The key shifts in regulatory thinking in response to these technological developments were discussed at the recent International Conference on Medical Device Standards and Regulations, organized by the Association for the Advancement of Medical Instrumentation (AAMI) in collaboration with the U.S. FDA and the British Standards Institution (BSI). Our team attended the conference.
FDA representatives at the AAMI conference stressed the importance of ensuring that AI delivers accurate, reliable output under the realistically wide range of conditions that may be encountered in the real world. Speakers explained that the quality of decisions made by an AI system depends on the training dataset, the algorithm enabling the learning, and the validation dataset. Crucially, the training, validation, and test datasets should not overlap, although in practice they frequently do because relevant medical data are still limited (a patient-level split that avoids such overlap is sketched after the list below). For the data to be usable for AI training and testing, there are a number of considerations. In particular, the data must be:
- voluminous and available in machine-readable form
- accessible to the software developers, with appropriate permissions/release by the data owners
- accompanied by complete and well-structured metadata (including information about the methods and instruments used to obtain the data, such as medical images)
- representative of the breadth and variety of patient types and disease conditions for which the proposed AI system is intended
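To make the non-overlap requirement concrete, here is a minimal sketch of a patient-level three-way split in Python, using scikit-learn's GroupShuffleSplit. The feature matrix, labels, and patient_id grouping variable are illustrative stand-ins, not a prescribed workflow; the point is simply that splitting by patient, rather than by individual image, keeps the three datasets disjoint.

```python
# Sketch: disjoint train/validation/test split at the patient level,
# so that no patient's images leak across the three datasets.
import numpy as np
from sklearn.model_selection import GroupShuffleSplit

rng = np.random.default_rng(0)
n_images = 1000
X = rng.normal(size=(n_images, 64))               # stand-in image features
y = rng.integers(0, 2, size=n_images)             # stand-in diagnostic labels
patient_id = rng.integers(0, 200, size=n_images)  # several images per patient

# First split off a held-out test set, then carve validation out of the rest
# (roughly 60/20/20 by patient; group splits are approximate by design).
outer = GroupShuffleSplit(n_splits=1, test_size=0.2, random_state=0)
trainval_idx, test_idx = next(outer.split(X, y, groups=patient_id))

inner = GroupShuffleSplit(n_splits=1, test_size=0.25, random_state=0)
train_rel, val_rel = next(inner.split(
    X[trainval_idx], y[trainval_idx], groups=patient_id[trainval_idx]))
train_idx, val_idx = trainval_idx[train_rel], trainval_idx[val_rel]

# Verify that the three sets share no patients.
for a, b in [(train_idx, val_idx), (train_idx, test_idx), (val_idx, test_idx)]:
    assert not set(patient_id[a]) & set(patient_id[b])
```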
Such data are hard to come by, and awareness of these requirements is only now beginning to spread through the software community, just as calls for greater diligence in developing diagnostic AI tools grow more urgent. One of the challenges in accessing sufficient amounts of relevant medical data is the privacy of medical records, although blockchain or similar technologies may eventually help overcome that obstacle.
FDA guidance and Congressional action
FDA has been actively developing guidance documents on AI and other digital-health issues, has conducted a pilot pre-certification program with participation from software developers, and holds regular public workshops. Agency scientists also contribute to, and sometimes lead, the development of international standards on these and related topics, for example through the International Organization for Standardization (ISO), the International Medical Device Regulators Forum (IMDRF), and the International Council for Harmonisation of Technical Requirements for Pharmaceuticals for Human Use (ICH).
Finally, Congress is also taking a closer look at AI. Several bills have been introduced, one of which would require the Secretary of Commerce to establish a Federal Advisory Committee to study the implementation of AI. If the bill is enacted, the committee would consist of representatives of academia, private industry, civil-rights groups, and labor organizations. It would be tasked with investigating how AI will affect the workplace, individual privacy rights, and ethical considerations, and with recommending how to encourage AI innovation. Several federal agencies would have non-voting roles on the committee, including the National Institute of Standards and Technology, the National Science Foundation, and the Federal Trade Commission.
Coupled with these congressional efforts, the activities at FDA and other agencies make it likely that AI will continue to grow in regulatory importance over the next few years.