If there is one thing everyone can agree on, it is that health is an important factor in life. In 2021, more than 3 million people died in the United States, with heart disease and cancer topping the list of causes. Despite decades of scientific research and development, disease remains a major threat, and diagnostic and treatment standards still have room for improvement. Artificial intelligence (AI) and machine learning, which are built on data and information, can be trained with specialized programs to analyze patient data and provide quick, accurate diagnoses. In 2016, Congress enacted the 21st Century Cures Act, which provided for FDA oversight of certain types of AI software as a medical device.
According to the World Bank, there are approximately 2.6 physicians for every 1,000 people in the United States. The U.S. faces a projected shortage of between 37,800 and 124,000 physicians by 2034, according to The Complexities of Physician Supply and Demand: Projections From 2019 to 2034, a report released by the Association of American Medical Colleges (AAMC). Physicians are under pressure to provide appropriate and timely care to a growing patient population. They routinely review vast quantities of patient records documenting symptoms, patient histories, scan results, and lab test results. Such work is time-consuming and prone to error. AI would not replace physicians; rather, it would complement their work, supporting their decision-making by offering recommendations based on the analysis of healthcare data.
“Diagnostic errors contribute to approximately 10 percent of patient deaths,” according to the Institute of Medicine at the National Academies of Sciences, Engineering, and Medicine (NASEM), “and also account for 6 to 17 percent of hospital complications.” These errors are generally not due to blatant physician negligence but are caused by a number of factors, including communication failure, human error, and ineffective teamwork between organizations. AI can potentially help with all of these factors.
How effective are AI diagnostics?
Studies demonstrate tremendous promise in the ability of AI to accurately diagnose disease. A recent Journal of the National Cancer Institute study showed that AI could detect breast cancer with accuracy comparable to that of an average radiologist. In ophthalmology, AI was found to be “as good as the best human experts at detecting eye problems” and could identify more than 50 different eye diseases. And in a University of Florida study, researchers used AI to predict which patients would develop Alzheimer’s disease up to five years before the actual diagnosis.
Improved Treatment and Outcomes
AI can also help physicians improve treatment and achieve better patient outcomes. AI can analyze copious amounts of treatment and patient data (more than any human could possibly process efficiently) and recommend treatment plans and follow-up actions that have shown success for similar patient groups in the past.
AI Integration in Healthcare
In September 2022, the Government Accountability Office (GAO) published a report titled TECHNOLOGY ASSESSMENT: Artificial Intelligence in Health Care Benefits and Challenges of Machine Learning Technologies for Medical Diagnostics. Among the biggest challenges to the integration of AI identified in the report were medicolegal issues. As AI becomes incorporated into patient care, questions arise about how its use is regulated and who is liable when something goes wrong. The software used to run AI is a product, so product liability is one possible cause of action. And because physicians rely on the information, data, and diagnoses generated by AI to diagnose and treat patients, medical malpractice is another.
Some AI software is subject to FDA regulation as a medical device. Section 3060 of the Cures Act specified when software will be regulated as a medical device and subject to FDA review, and when it will not. On December 8, 2017, the FDA issued “Software as a Medical Device (SAMD): Clinical Evaluation” to explain its thinking on the regulation of software. Software as a medical device (SaMD) is
described as software that utilizes an algorithm (logic, set of rules, or model) that operates on data input (digitized content) to produce an output that is intended for medical purposes as defined by the SaMD manufacturer. The risks and benefits posed by SaMD outputs are largely related to the risk of inaccurate or incorrect output of the SaMD, which may impact the clinical management of a patient.
For medical devices, the Federal Food, Drug, and Cosmetic Act provides some liability protection. Specifically, 21 U.S.C. § 360k(a) preempts state requirements, including product liability claims, that are “different from, or in addition to” FDA regulation and “which relate[] to the safety or effectiveness of the device . . . .” In Riegel v. Medtronic, Inc., 552 U.S. 312, 330 (2008), the U.S. Supreme Court upheld this preemption, which might offer an incentive to doctors who would like to integrate emerging AI technologies into their patient care.
Nonetheless, doctors who rely on AI to help diagnose and treat their patients could still face medical malpractice liability, which is premised on a doctor’s breach of the standard of care. Even if a doctor relied on AI, she could still be held liable if a jury found that her actions fell below the standard of care. As noted in the GAO report, numerous AI technologies may be integrated into the standard of care in the not-too-distant future. If that were to occur, doctors would have more certainty that their reliance on AI would not subject them to liability.
Melissa Ratliff, RN, BSN, Legal Nurse Consultant, also contributed to this article.