Artificial Intelligence in Health Care Delivery: Where Might It Take Us, and What Happens if We Get There?
Tuesday, March 14, 2017

It is difficult to avoid the specter of “artificial intelligence” (AI) these days, and those working in the health care sector are no exception. Health care delivery has for many years been shaped by a variety of tools that use some form of AI. Further, recent advances in hardware and computer science have made an even more significant impact likely. Today, some health diagnostic tools using advanced AI systems in research settings perform as well as their human counterparts, and sometimes better; in the future, performance will continue to improve, and the scope of activities subject to this kind of automation will likely expand.

Currently, advanced AI systems are being used in health care delivery settings in very discrete ways, and are designed to provide practitioners with more and better information with which to make decisions. The function of the practitioner remains unchanged and familiar—the practitioner is the final authority with respect to the provision of diagnostic and treatment services. The practitioner’s judgment remains paramount.

It is, therefore, easy to be lulled into a false sense of security regarding the application of legal and regulatory standards to what seems to be nothing more than the addition of another tool in a practitioner’s bag. There are, however, reasons to take a more critical view and to consider what the future may hold—because the future, and the expanded potential of AI systems in health care delivery, is likely not as far away as we might think.

Whose Judgment?

Any professional is held to a standard of care that, at its most fundamental level, recognizes that the professional will exercise his or her judgment in the performance of the profession. That judgment is exercised in the context of past and ongoing learning, training, experience and the utilization of existing tools that assist the professional in exercising that judgment. The professional takes the facts and circumstances and weighs various conclusions regarding a course of action. In this context, AI tools can be extremely beneficial—particularly in health care—as they can speed analysis, expand the provider’s knowledge base and accelerate the review of vast amounts of data.

For liability and licensure purposes, however, the practitioner must never lose sight of his or her own responsibility and must always exercise independent judgment. The practitioner must not delegate to the AI system the essential function of being a licensed professional and making the final call. While this may be a simple concept to express, it may become a harder concept to implement as AI systems continue to improve and expand their clinical diagnostic and even treatment-planning capabilities.

Experience indicates that technology will continue to develop into a ubiquitous tool. We accept information technology into our professional and personal lives with ease. Studies indicate that younger generations adopt technology with ease and confidence, and demand that these technologies be made available to them in a variety of contexts. It appears that, unless there is a law against it—and even if there is—someone is going to build an app, and people will use it. The health care sector is not immune to this trend, even though its significant regulatory environment makes rapid and systemically valuable adoption difficult. This pressure for adoption will only increase as AI systems continue to develop, improve and demonstrate their effectiveness in the health care delivery setting.

Clearly, there is nothing wrong with relying on proven technology; but at what point do we, as a society, accept that proven technology can replace the judgment of a licensed professional? If an AI system proves to be more effective and reliable than a human physician at a certain function, then should we not rely on the AI system’s judgment?

Oversight and Standards of Care

Regulation is largely about allocating responsibility among actors, and ensuring that certain actors have the requisite skills, knowledge, assets, qualifications or other protections in place given the nature of what they are doing. We regulate health care practitioners, financial institutions, insurers, lawyers, automobile salesmen, private investigators and others because we believe, as a society, that these actors—human or corporate—should be held to heightened standards in exercising their judgment. Accordingly, not only are these actors subject to potentially more exacting standards of care, but they also frequently must be licensed, demonstrate a certain financial stability or otherwise prove a degree of trustworthiness.

Similarly, we hold products to a more exacting standard. In health care in particular, not only do we require medical devices to prove their efficacy and safety, but we also require that their manufacture adhere to certain quality standards. Further, certain products may be held to a standard of “strict liability” if they do not function properly. Accordingly, the developers or manufacturers of these products face significant liability for the failures of their products.

A health care practitioner’s standard of care is an evolving standard, and one that does not exclude the appropriate utilization of technology in the care setting—indeed, it may eventually require it if the technology establishes itself in common usage. It is possible to foresee advanced, judgment-rendering AI systems integrated into the care setting. The question we must ask is whether our existing legal and regulatory tools provide an appropriate and effective environment in which these systems are deployed.

Allocating Responsibility

At some point, under some circumstances, AI systems will start to look more like a practitioner than a device—they will be capable of, and we will expect them to, render judgments in a health care context. The AI system will analyze symptoms, data and medical histories and determine treatment plans. In such a circumstance, is it fair to burden the treating practitioner with the liability for the patient’s care that is determined by the AI system? Are existing “product liability” standards appropriate for such AI systems?

This latter question is relevant given the “black box” nature of advanced AI systems. The so-called AI “black box” refers to the difficulty or inability to inspect the workings of the AI system, as we may otherwise do with other software systems. The reason lies in the nature of some of these AI systems, which frequently utilize neural networks. In essence, neural networks are large layers of interconnected nodes. Each node applies a generally fairly simple function, but by inputting a great deal of information and “training” the network, these relatively unsophisticated interconnected layers of nodes can produce remarkably sophisticated results.

While these neural networks can produce excellent results, their “reasons” for reaching a conclusion are not easily discernible. In a sense, these new AI systems become functional and effective in much the same way a human does. We learn, for example, what a dog is by seeing dogs and being told that they are dogs. The same is true of neural network AI systems. While we may later learn that dogs share common features and are categorized in certain ways, we recognize dogs on the basis of experience; the same is true for AI systems. Unpacking why an AI system misidentifies a cat as a dog, accordingly, can be very difficult—in essence, it is an exercise in understanding why a neural network rendered a judgment.
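To make the neural network concept concrete, the short Python sketch below is purely illustrative: a toy network far simpler than any clinical AI system, with a task (the XOR function), network size and training details chosen only for demonstration and not drawn from any particular product. It shows how nodes that each apply nothing more than a weighted sum and a squashing function can, after training on examples, collectively learn a behavior that is not apparent from inspecting any individual weight.

import numpy as np

# A toy neural network (illustrative only, not any clinical AI system).
def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(0)

# Training examples: the XOR function, standing in for labeled data such as
# images or clinical records.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Two layers of interconnected nodes: 2 inputs -> 8 hidden nodes -> 1 output.
W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)

# "Training": repeatedly nudge the weights to reduce prediction error.
for _ in range(20000):
    hidden = sigmoid(X @ W1 + b1)       # each hidden node: weighted sum + sigmoid
    output = sigmoid(hidden @ W2 + b2)  # the output node applies the same simple function
    grad_out = (output - y) * output * (1 - output)       # backpropagated error signal
    grad_hid = (grad_out @ W2.T) * hidden * (1 - hidden)
    W2 -= 0.5 * hidden.T @ grad_out
    b2 -= 0.5 * grad_out.sum(axis=0)
    W1 -= 0.5 * X.T @ grad_hid
    b1 -= 0.5 * grad_hid.sum(axis=0)

# Typically prints values close to [0, 1, 1, 0]: the network has "learned" XOR.
print(np.round(sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2), 2).ravel())

# The learned "knowledge" is nothing but these grids of numbers. Printing them
# shows how the network computes, not why it reaches a given answer, which is
# the "black box" problem described above.
print(W1, W2, sep="\n")

Even in this tiny example, explaining why the trained weights produce the right answers requires reconstructing the behavior of the network as a whole; scaled up to millions of weights, that difficulty is the black box problem just described.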

In this context, it is fair to ask whether a judgment-rendering AI system in any sector should be held to the same standard as other products. We may want to consider any number of factors and actors when determining how to allocate responsibility. We may want to allocate responsibility among the practitioner, the developer of the AI system, the “trainer” of the AI system, the “maintainer” of the AI system and, perhaps, the AI system itself.

How to Regulate: Start Asking the Questions

Achieving a reasonable and effective approach to regulating new dynamics in health care delivery will require thinking carefully about how best to regulate AI systems and care delivery as this technology continues to advance and becomes capable of taking over certain “professional” functions. A number of factors must be taken into consideration.

1. The existing oversight regime: Right now, the US Food and Drug Administration (FDA) regulates the manufacture and distribution of medical devices, including software, intended to be utilized for the diagnosis, treatment, cure or prevention of disease or other conditions. FDA approval or clearance of a medical device does not necessarily limit physician utilization of a product for off-label purposes. State medical boards regulate the practice of medicine, ensuring that the physician community adheres to appropriate standards of care. Both of these regulatory bodies will need to review their approaches to regulation in order to effectively integrate AI systems into care delivery.

2. The ubiquity of AI in the IoT: While some AI systems may be discrete pieces of machinery, it would be a mistake to ignore AI penetration into the “internet of things” (IoT) and the larger digital landscape, and the related penetration of the IoT into health care through patient engagement, data tracking, wellness and preventative care tools and communication tools. In short, we need to recognize that increasingly sophisticated levels of “health care” are taking place away from hospitals, doctors’ offices and other traditional care settings, and not under the direct supervision of health care professionals.

Directly related to this, of course, is the utilization and maintenance of data. AI health care tools integrated within the IoT will likely be privy to massive amounts of data—but under what approval and subject to what restrictions? Frequently, even the most isolated AI tools in health care rely on massive data sets. Accordingly, data privacy and security issues will increase in importance, and consideration of how the existing privacy and security regulatory regime applies to advanced AI systems will necessitate a forward-thinking approach.

3. Liability and insurance: Given the black box nature of AI systems and the unique ways they are developed to perform their functions, determining liability becomes complicated. As noted above, different actors performing different functions come into play, and the role and function of the health care practitioner may begin to change given the nature of some of these AI systems. The practitioner may take on responsibility for AI system training or testing, for example. How liability should be allocated in such a complex environment is a difficult question, and one that will likely evolve over time.

Further, the standards for liability may need to be reconsidered, and the standards of care for the delivery of care may need to undergo radical transformation if AI systems prove themselves able to function at a higher level of quality than their human counterparts. As the interaction between human physicians and AI systems evolves, the standard of care can easily become quite complex and confusing.

4. The robot doctor: Political and legal theorists are already seriously contemplating imbuing AI systems with legal attributes of personhood. While fully sentient and sapient robots may be far off in the future, legal rights and responsibilities do not require such features. For example, we provide corporate bodies and animals with legal rights and responsibilities. We also discriminate among different groups of “people” (e.g., between citizens and non-citizens; between pets and wild animals). In fact, the notion of rights and responsibilities for an AI system may assist in designing an appropriate regulatory environment.

Similarly, we may borrow from human-based liability standards to evaluate whether an event caused by an AI system is actionable. Given the manner in which neural networks are trained and their black box nature, a “reasonable robot” standard of care may become an effective way of determining whether a wrong has occurred.

The Future of Health Care Professionals

We have not yet reached the age of the machine, and our health care is still best served by rich engagement between patients and well-educated, trained and equipped health care professionals. Health care professionals have the opportunity to shape the way AI systems are best used in care delivery today, and the way they will be used in the future as these systems continue to improve and evolve.

They should take advantage of this opportunity.

The author would like to thank Stephen W. Bernstein, Bernadette M. Broccolo, Shelby Buettner, Jennifer S. Geetter, Vernessa T. Pollard and Erin L. Thimmesch for their comments and suggestions for this article.
