*This post was co-authored by Josh Yoo, legal intern at Robinson+Cole. Josh is not admitted to practice law.
Health care entities maintain compliance programs to comply with the myriad changing laws and regulations that apply to the health care industry. Although laws and regulations specific to the use of artificial intelligence (AI) are limited and still in the early stages of development, current law and pending legislation offer a forecast of the standards that may become applicable to AI. Health care entities may want to begin monitoring the evolving guidance applicable to AI and integrating AI standards into their compliance programs in order to manage and minimize this emerging area of legal risk.
Executive Branch: Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence
Following Executive Order 13960 and the Blueprint for an AI Bill of Rights, Executive Order No. 14110 (EO) amplifies the key principles and directives that will guide federal agency oversight of AI. While still largely aspirational, these principles have already begun to reshape regulatory obligations for health care entities. For example, the Department of Health and Human Services (HHS) has established an AI Task Force charged with regulating AI in accordance with the EO’s principles by 2025. Health care entities would be well served to monitor federal priorities and begin formally integrating AI standards into their corporate compliance plans.
- Confidentiality and Security: Federal scrutiny of the privacy and security of entrusted information now extends to AI’s interactions with data as a core obligation. This general principle also manifests in more specific directives throughout the EO; for example, the EO orders the HHS AI Task Force to incorporate “measures to address AI-enhanced cybersecurity threats in the health and human services sector.”
- Transparency: The principle of transparency refers to an AI user’s ability to understand the technology’s uses, processes, and risks. Health care entities will likely be expected to understand how their AI tools collect and process data and generate predictions. The EO also envisions labeling requirements that would flag AI-generated content for consumers.
- Governance: Governance refers to an organization’s control over its deployed AI tools. Internal controls, such as evaluations, policies, and governance bodies, can help ensure continuous oversight throughout the AI life cycle. The EO also emphasizes the importance of human oversight: responsibility for AI implementation, review, and maintenance should be clearly identified and assigned to appropriate employees and specialists.
- Non-Discrimination: AI must also abide by standards that protect against unlawful discrimination. For example, the HHS AI Task Force will be responsible for ensuring that health care entities continuously monitor and mitigate algorithmic processes that could contribute to discriminatory outcomes. It will also be important to give internal and external stakeholders equitable opportunities to participate in the development and use of AI.
National Institute of Standards and Technology: Risk Management Framework
The National Institute of Standards and Technology (NIST) published its Artificial Intelligence Risk Management Framework (RMF) in January 2023. Like the EO, the RMF outlines broad goals (Govern, Map, Measure, and Manage) to help organizations address and manage the risks of AI tools and systems. A supplementary NIST “Playbook” provides actionable recommendations, aligned with the EO’s principles, to help organizations proactively mitigate legal risk under future laws and regulations. For example, a health care organization might uphold AI governance and non-discrimination by deploying a diverse, AI-trained compliance team.
Privacy and Security Laws
The design, deployment, and use of AI will have implications for a health care entity’s specific obligations under key federal privacy and security laws. The Health Insurance Portability and Accountability Act (HIPAA) and Section 5 of the Federal Trade Commission Act, 15 U.S.C. § 45(a)(1) (Section 5), generally govern data practices by health care entities.
- HIPAA Privacy Rule: HIPAA generally limits a covered entity’s use and disclosure of protected health information (PHI) to certain discrete, permitted purposes, such as treatment, payment, or health care operations, or other purposes authorized by the patient. Effective HIPAA compliance requires an organization to map, monitor, and control the flow of PHI in accordance with permitted access, use, and disclosure. AI will likely strain existing privacy controls because AI tools often require access to and processing of large volumes of data, which may exceed existing HIPAA privacy standards, such as the minimum necessary standard. Organizations will need to consider methods to segment AI’s access to PHI, ensure lawful processing, and avoid inappropriate use by and disclosure to third-party developers (a minimal sketch of one segmentation approach follows this list).
- HIPAA Security Rule: HIPAA also requires health care entities to protect PHI with administrative, physical, and technical safeguards. AI has already begun to significantly disrupt existing security standards in a number of ways, including by introducing new software vulnerabilities, and bad actors are leveraging their own AI to bypass cybersecurity protections. Effective compliance programs will likely need to adapt to AI as an emerging cybersecurity risk.
- Section 5: The regulation of AI extends beyond PHI to personal information generally. Under Section 5, the Federal Trade Commission (FTC) may pursue as illegal “unfair or deceptive acts or practices in or affecting commerce” that cause or are likely to cause reasonably foreseeable injury. AI access to personally identifiable data, such as personal and health information maintained by health care applications, could trigger Section 5 liability under theories of deception and unfairness, as well as liability under the FTC’s data breach rules where personal information is impermissibly used to train a model; notably, such information cannot easily be removed once the model has been trained. AI’s potential to compromise existing cybersecurity could also result in liability. Finally, companies developing AI tools must comply with Section 5 with respect to the claims they make about their tools’ capabilities.
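To make the minimum necessary standard more concrete, the following Python sketch illustrates one way an organization might segment an AI tool’s access to PHI. It is illustrative only: the record fields, use cases, and allow-lists are hypothetical, and any real implementation would need to be designed with legal and security review.

```python
# Illustrative only: a minimal sketch of field-level data minimization for PHI.
# The fields, use cases, and allow-lists below are hypothetical examples, not
# regulatory requirements.

# Hypothetical allow-list: the minimum necessary fields for each AI use case.
MINIMUM_NECESSARY = {
    "scheduling_assistant": {"patient_id", "appointment_date", "department"},
    "clinical_summary": {"patient_id", "diagnosis_codes", "medications"},
}

def minimum_necessary_view(record: dict, use_case: str) -> dict:
    """Return only the PHI fields permitted for the given AI use case."""
    allowed = MINIMUM_NECESSARY.get(use_case)
    if allowed is None:
        raise PermissionError(f"No PHI access approved for use case: {use_case}")
    # Strip every field not on the allow-list before the data leaves the system.
    return {field: value for field, value in record.items() if field in allowed}

# Example: a full PHI record is reduced to the approved subset before it is
# passed to a third-party AI tool.
record = {
    "patient_id": "12345",
    "name": "Jane Doe",               # withheld: not on the allow-list
    "ssn": "000-00-0000",             # withheld
    "appointment_date": "2024-07-01",
    "department": "Cardiology",
}
print(minimum_necessary_view(record, "scheduling_assistant"))
# {'patient_id': '12345', 'appointment_date': '2024-07-01', 'department': 'Cardiology'}
```

Keeping the allow-list in a single, auditable location parallels the mapping and monitoring obligations described above: a compliance team can review one artifact to see exactly which data each AI use case may touch.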
Pending Legislation
Congressional activity has exploded with AI’s increasing prevalence. Legislative proposals are highly varied, ranging from the establishment of an AI legislative committee to required studies on AI’s environmental impact. Several bills, including the following, may substantively impact AI compliance efforts if signed into law.
- Transparency Report: The Artificial Intelligence Research, Innovation, and Accountability Act of 2023 would require providers to disclose comprehensive transparency reports about their AI if the tool has a significant effect on an individual’s access to health care.
- Mandatory RMF Compliance: The AI Foundation Model Transparency Act of 2023 would direct the FTC to engage in rulemaking to formalize compliance with the NIST RMF or another federally approved technical standard.
- AI Labels: The AI Labeling Act of 2023 and the AI Disclosure Act of 2023 would both require disclosures for AI-generated content and empower FTC enforcement under Section 5 for noncompliance.
Key Takeaways
As AI becomes further intertwined with health care, health care entities must strategize and plan for AI in their compliance infrastructure. Like any other tool, AI has its benefits and drawbacks that need to be considered in compliance efforts. As a practical matter, health care entities’ compliance efforts would reasonably include the following:
- Inventory Existing and Upcoming AI Use: AI functions will likely intersect with various internal and external systems that support a health care entity’s services. To effectively integrate AI and monitor its impact on existing systems, health care entities may wish to inventory the existing and upcoming uses of AI in their organizations and conduct data mapping, risk assessments, and audits to understand, and prepare to strengthen, organizational compliance before taking on AI risk (a simple inventory record sketch appears at the end of this post).
- Education: Although current guidance on AI is unsettled, wise health care entities will be on the lookout for key updates. Under the EO’s direction, HHS intends to implement health care-centric AI regulations and industry guidance through the AI Task Force and Office of the Chief Artificial Intelligence Officer (OCAIO). Industry segment-specific compliance program guidance from the Office of the Inspector General (OIG) may also focus on AI’s use in health care. Meanwhile, NIST’s National Artificial Intelligence Advisory Committee will provide general recommendations for AI built upon the RMF.
- Adaptation: Compliance plans that do not account for AI risk are likely to become outdated as the body of laws applicable to AI evolves. Health care entities should prepare for organizational change and consult with legal counsel before implementing new AI tools, navigating both the compliance requirements that exist today and those that may be implemented in the near future. The ability to adapt to emerging laws and regulations applicable to AI will enable health care entities to manage and mitigate the inevitable risks associated with AI tools and systems.
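As a closing illustration of the inventory step above, the sketch below shows a hypothetical record a compliance team might maintain for each AI tool. Every field name is an assumption chosen for illustration, not a requirement drawn from any law or regulation.

```python
# Illustrative only: a hypothetical record for an internal AI-use inventory.
from dataclasses import dataclass, field

@dataclass
class AIInventoryEntry:
    tool_name: str                  # vendor product or internal model name
    vendor: str                     # third-party developer, if any
    use_case: str                   # clinical, administrative, etc.
    phi_access: bool                # does the tool touch PHI?
    data_sources: list[str] = field(default_factory=list)  # systems it reads from
    risk_assessment_date: str = ""  # date of last formal risk assessment
    owner: str = ""                 # employee responsible for oversight

# Example entry a compliance team might record during data mapping.
entry = AIInventoryEntry(
    tool_name="note-summarizer",         # hypothetical tool
    vendor="ExampleVendor",              # hypothetical vendor
    use_case="clinical documentation",
    phi_access=True,
    data_sources=["EHR"],
    risk_assessment_date="2024-06-01",
    owner="Chief Compliance Officer",
)
print(entry)
```

A structured inventory along these lines makes the data mapping, risk assessment, and audit steps described above repeatable, because each entry identifies a tool’s data sources, PHI exposure, and responsible owner in one place.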