On October 30, 2023, President Biden issued the “Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence” (AI EO), which has specific impacts on the healthcare industry. We detailed general aspects of the AI EO in a previous blog post.
David Chou outlined some of the impacts on the healthcare industry in a Forbes article, synthesizing the AI EO into four areas of impact:
- HHS AI Task Force—once it is created, the task force, which will include representatives from the Department of Health and Human Services, will “develop a strategic plan with appropriate guidance,” including policies, frameworks, and regulatory requirements on “responsibly deploying and using AI and AI-enabled technologies in the health and human services sector, spanning research and discovery, drug and device safety, healthcare delivery and financing, and public health.”
- AI Equity—AI-enabled technologies will be required to incorporate equity principles, including “an active monitoring of the performance of algorithms to check for discrimination and bias in existing models” and an obligation to “identify and mitigate any discrimination and bias in current systems” (a minimal monitoring sketch follows this list).
- AI Security and Privacy—The AI EO requires “integrating safety, privacy, and security standards throughout the software development lifecycle, with a specific aim to protect personally identifiable information.”
- AI Oversight—The AI EO “directs the development, maintenance, and utilization of predictive and generative AI-enabled technologies in healthcare delivery and financing. This encompasses quality measurement, performance improvement, program integrity, benefits administration, and patient experience.” These are obvious use cases where AI-enabled technology can increase efficiency and decrease costs. That said, the AI EO requires that these activities include human oversight of any output.
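To make the equity item concrete, here is a minimal sketch of the kind of algorithmic monitoring the AI EO contemplates. The metric (a disparate impact ratio across demographic groups), the 0.8 screening threshold, and the sample data are illustrative assumptions on my part; the AI EO does not prescribe any particular metric or threshold.

```python
from collections import defaultdict

def selection_rates(predictions, groups):
    """Positive-prediction rate for each demographic group."""
    totals, positives = defaultdict(int), defaultdict(int)
    for pred, group in zip(predictions, groups):
        totals[group] += 1
        positives[group] += int(pred)
    return {g: positives[g] / totals[g] for g in totals}

def disparate_impact_ratio(predictions, groups):
    """Ratio of the lowest group selection rate to the highest (1.0 = parity)."""
    rates = selection_rates(predictions, groups)
    return min(rates.values()) / max(rates.values())

# Hypothetical binary model outputs paired with a demographic group label.
preds  = [1, 0, 1, 1, 0, 1, 0, 0, 0, 1]
groups = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]

ratio = disparate_impact_ratio(preds, groups)
if ratio < 0.8:  # illustrative screening threshold, not an AI EO requirement
    print(f"Flag for human review: disparate impact ratio = {ratio:.2f}")
```

In practice, a governance team would run a check like this on a recurring schedule and route flagged models to human review, rather than relying on a single pre-deployment pass.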
Although these four considerations are only a start, I would add that healthcare organizations (including companies supporting healthcare organizations) should look beyond these basic principles when developing an AI Governance Program and strategy. Numerous entities regulate different parts of the healthcare industry and have provided insight into the use of AI tools, including the World Health Organization, the American Medical Association, the Food & Drug Administration, the Office of the National Coordinator for Health Information Technology, the White House, and the National Institute of Standards and Technology. All of these entities have issued guidance or proposed regulations on the use of AI tools in the healthcare space. That guidance addresses the risks of AI in the healthcare industry, including bias, unauthorized disclosure of personal information or protected health information, unauthorized disclosure of intellectual property, unreliable or inaccurate output (also known as hallucinations), unauthorized practice of medicine, and medical malpractice.
Assessing and mitigating these risks to your organization starts with developing an AI Governance Program. The Program should encompass both the risks of your employees using AI tools and the risks of the AI tools you are using or developing in your environment, and it should provide guidance to anyone in the organization who is using or developing AI-enabled tools. Centralizing governance of AI will enhance your ability to follow the rapidly changing regulations and guidance issued by both state and federal regulators and to implement a compliance program that responds to the changing landscape.
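As a concrete starting point for centralizing governance, here is a minimal sketch of an AI use-case inventory. Every field, status value, and review interval below is a hypothetical illustration, not a structure drawn from the AI EO or any regulator's guidance.

```python
from dataclasses import dataclass
from datetime import date
from enum import Enum
from typing import Optional

class ReviewStatus(Enum):
    PROPOSED = "proposed"
    APPROVED = "approved"
    SUSPENDED = "suspended"

@dataclass
class AIUseCase:
    """One entry in a centralized AI inventory (illustrative fields only)."""
    name: str
    owner: str               # accountable business or clinical owner
    vendor_or_internal: str  # third-party tool vs. built in-house
    handles_phi: bool        # whether it touches protected health information
    human_oversight: str     # how outputs are reviewed before use
    status: ReviewStatus = ReviewStatus.PROPOSED
    last_bias_review: Optional[date] = None

def needs_attention(use_case: AIUseCase, max_age_days: int = 180) -> bool:
    """Flag entries that are missing or overdue for a bias/performance review."""
    if use_case.last_bias_review is None:
        return True
    return (date.today() - use_case.last_bias_review).days > max_age_days

# Hypothetical example entry.
inventory = [
    AIUseCase(
        name="claims-triage-model",
        owner="Revenue Cycle",
        vendor_or_internal="vendor",
        handles_phi=True,
        human_oversight="A coder reviews every denial recommendation",
        status=ReviewStatus.APPROVED,
        last_bias_review=date(2023, 6, 1),
    ),
]

for uc in inventory:
    if needs_attention(uc):
        print(f"Review overdue: {uc.name} (owner: {uc.owner})")
```

Even a simple registry like this gives a compliance team one place to answer the questions regulators are starting to ask: what AI tools are in use, who owns them, whether they touch protected health information, and how their outputs are overseen.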
The healthcare industry is heavily regulated; compliance is no stranger to it. Healthcare organizations must be prepared to include AI development and use in their enterprise-wide compliance programs.