Top Ten Legal Considerations for Use and/or Development of Artificial Intelligence in Health Care
Tuesday, February 16, 2021

The purpose of this article is to provide an overview of the top ten legal issues that health care providers and health care companies should consider when using and/or developing artificial intelligence (AI). In particular, this article summarizes, in no particular order:

  1. statutory, regulatory and common law requirements;

  2. ethical considerations;

  3. reimbursement issues;

  4. contractual exposure;

  5. torts and private causes of action;

  6. antitrust issues;

  7. employment and labor considerations;

  8. privacy and security risks;

  9. intellectual property considerations; and

  10. compliance program implications.

That’s a long list, but we break down these considerations and briefly summarize each of them below.

1. Statutory, Regulatory and Common Law Requirements

Regardless of whether you encounter AI as a health care provider or a developer (or both), statutory, regulatory and common law requirements may be implicated when considering AI in the health care space. Depending on the functions the AI performs, state and federal laws may require a health care provider or an AI developer to obtain licensure, permits and/or other registrations (for example, AI may be employed in a way that requires FDA approval if it provides a diagnosis without a health care professional’s review). Additionally, as AI functionality expands (and potentially replaces physicians in the provision of physician services), questions arise as to how those services are regulated and whether providing them would constitute the unlicensed practice of medicine or violate corporate practice of medicine prohibitions.

2. Ethical Considerations

Where health care decisions have historically been almost exclusively human, the use of AI in the provision of health care raises ethical questions relating to accountability, transparency and consent. When a complex, deep-learning algorithm is used to diagnose patients, a physician may not be able to fully understand or, even more importantly, explain to the patient the basis of the diagnosis. As a result, a patient may be left not understanding the diagnosis or dissatisfied with how it was delivered. It also may be difficult to establish accountability when diagnostic errors occur as a result of the use of AI. Additionally, AI is not immune from algorithmic bias, which could lead to diagnoses influenced by gender, race or other factors that have no causal link to the condition being diagnosed.

3. Reimbursement Issues

The use of AI in both patient care and administrative functions raises questions relating to reimbursement by payors for health care services. How will payors reimburse for health care services provided by AI, and will they reimburse for such services at all? Will federal and state health care programs (e.g., Medicare and Medicaid) recognize services provided by AI, and will AI affect provider enrollment? AI has the potential to affect every aspect of revenue cycle management, and there are concerns that errors could occur when reimbursement is requested through AI. For example, if AI is assisting providers with billing and coding, could the provider face False Claims Act exposure as a result of an AI error? When an error inevitably occurs, it may be unclear who is ultimately responsible unless responsibility is clearly allocated by contract.

4. Contractual Exposure

Whether as a developer of AI or a health care provider utilizing AI, it is important to have clearly articulated contracts governing the sale and use of AI technology. Important contractual terms include:

  • Expectations regarding services — what are the specific performance metrics that are expected to be satisfied?

  • Representations and warranties — a buyer will expect to have strong representations and warranties, and developers will need to determine what level of representations and warranties are appropriate.

  • Indemnification — both a buyer and developer will need to negotiate how risk is allocated.

  • Insurance — because services performed by AI will have the same or similar risks as if a human counterpart were performing the services, a buyer will want to insure its business to cover those same risks.

  • Changes in law — because AI is rapidly developing, parties should be prepared for changes in law affecting their contractual arrangements, and provide for flexibility or contingencies.

5. Torts and Private Causes of Action

If AI is involved in the provision of health care (or other) services, both the developer and the provider of the services may face liability under a variety of tort law principles. Under theories of strict liability, a developer may be held liable for defects in its AI that render it unreasonably dangerous to users; in the case of design defects, a developer may be liable if the AI is inadequately designed or unreasonably hazardous to consumers. At least for the near term, the AI itself probably will not be liable for its acts or omissions (though as AI evolves, tort theories could also evolve to hold the AI itself liable). As a result, those involved in the process (the developer and the provider) will likely bear the liability exposure associated with the AI. Whether that liability sounds in professional liability or product liability will likely depend on the functions the AI is performing. Further, depending on how the AI is used, a provider may be required to disclose the use of AI to patients as part of the informed consent process.

6. Antitrust Issues

The Antitrust Division of the Department of Justice (the DOJ) has made remarks regarding algorithmic collusion that may affect the use of AI in the health care space. While recognizing that algorithmic pricing can be highly competitive, the DOJ has cautioned that concerted action to fix prices may occur when competitors have a common understanding to use the same software to achieve the same results. As a result, the efficiencies gained by using AI with pricing information and other competitive data may be offset by antitrust risks.

7. Employment and Labor Considerations

The use of AI in the workforce will likely affect the structure of employment arrangements as well as employment policies, training and liability. AI may change the structure of the workforce by increasing efficiency in job performance and competition for jobs (i.e., fewer workforce members are needed when tasks are performed more quickly and efficiently by AI). However, integrating AI into the workforce also may create new bases for litigation and causes of action based on discrimination in hiring practices. If AI is used to make hiring decisions, how can you ensure that discriminatory characteristics are excluded from the analysis? AI also may affect the terms of employment and independent contractor agreements with workforce members, particularly with respect to ownership of intellectual property, restrictive covenants and confidentiality.

8. Privacy and Security Risks

Speaking of confidentiality, the use and development of AI in health care poses unique challenges for companies with ongoing obligations to safeguard protected health information, personally identifiable information and other sensitive information. AI often requires enormous amounts of data, so its use will likely implicate the Health Insurance Portability and Accountability Act (HIPAA) and state privacy and security laws and regulations, and the data may need to be de-identified. Alternatively, an authorization from the patient may be required before the data is disclosed to or through the AI. Further, AI poses unique challenges and risks with respect to privacy breaches and cybersecurity threats, which can harm both patients and providers.

9. Intellectual Property Considerations

It is of particular importance for AI developers to preserve and protect the intellectual property rights that they may be able to assert over their developments (patent rights, trademark rights, etc.) and for users of AI to understand the rights they have to use the AI they have licensed. It also is important to consider carefully who owns the data that the AI uses to “learn” and the liability associated with such ownership.

10. Compliance Program Implications

As technology evolves, so should a provider’s compliance program. When new technology such as AI is introduced, compliance program policies and procedures should be updated to account for it. In addition, it is important that the workforce implementing and using the AI technology is trained appropriately. As with a traditional compliance plan, monitoring and evaluation should be continual, and programs and policies should be updated based on that monitoring and on changes to the AI.

We predict that as the use and development of AI grows in health care, so will this list of legal considerations.
