This is the third article in a series: Part I and Part II.
Software developers are racing to develop health care products that leverage artificial intelligence (AI), including machine learning and deep learning. Examples include software that analyzes radiology images and pathology slides to help physicians diagnose disease, electronic health records software that automates routine tasks, and software that analyzes genetic information to support targeted treatment. The one thing all of these products have in common is a need to interact, in some way, with real-world medical data. However, this real-world data may be protected by the Health Insurance Portability and Accountability Act of 1996 (HIPAA) as well as a patchwork of other federal and state laws and regulations. Below we discuss the contexts in which developers may encounter these laws, as well as strategies for navigating the related legal issues.
Access to Data for AI Development Purposes
Much of the AI-related development in health care involves machine learning and deep learning algorithms that require large quantities of images or other real-world data in order to “train” the technology. Often, this real-world data satisfies HIPAA’s definition of “protected health information,” or “PHI,” and the health care providers and insurers that hold this data (referred to by HIPAA as “covered entities”) are limited in their ability to transfer the PHI to third parties, including developers. So how might a developer obtain the data it needs to build out its AI algorithms?
De-identification by Covered Entity
One approach for obtaining the necessary data is for the developer and the covered entity to enter into a data sharing agreement under which the developer compensates the covered entity for collecting and de-identifying the data. Once the covered entity de-identifies the data, the data is no longer considered PHI, and the covered entity can share it with the developer without violating HIPAA. While this may sound easy, de-identifying a massive trove of PHI is a complex undertaking, and missteps could lead to a reportable breach under HIPAA and other laws. Nevertheless, this type of arrangement is attractive to developers because it spares them from having to comply with HIPAA themselves.
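To give a sense of what de-identification entails, HIPAA’s Safe Harbor method (45 C.F.R. § 164.514(b)(2)) requires removing 18 categories of identifiers, including names, geographic subdivisions smaller than a state, and most date elements. Below is a minimal Python sketch of that idea; the field names are hypothetical, and a real pipeline would need to cover all 18 categories (or instead rely on the rule’s alternative “expert determination” method).

```python
# Minimal sketch of Safe Harbor-style de-identification. Field names are
# hypothetical; a production pipeline must handle all 18 identifier
# categories listed in 45 C.F.R. 164.514(b)(2).

DIRECT_IDENTIFIERS = {"name", "ssn", "mrn", "email", "phone", "street_address"}

def deidentify(record: dict) -> dict:
    """Return a copy of `record` with direct identifiers dropped and
    quasi-identifiers generalized."""
    out = {k: v for k, v in record.items() if k not in DIRECT_IDENTIFIERS}

    # Dates: Safe Harbor permits retaining the year alone (and ages 90
    # and over must be aggregated into a single "90+" category).
    if "birth_date" in out:
        out["birth_year"] = out.pop("birth_date")[:4]  # "1984-06-02" -> "1984"

    # ZIP codes: only the first three digits may be kept, and only where
    # the three-digit area contains more than 20,000 people; otherwise
    # the rule requires "000".
    if "zip" in out:
        out["zip3"] = out.pop("zip")[:3]

    return out

print(deidentify({
    "name": "Jane Doe",
    "ssn": "000-00-0000",
    "birth_date": "1984-06-02",
    "zip": "02139",
    "diagnosis": "I10",  # clinical data itself is retained
}))
# -> {'diagnosis': 'I10', 'birth_year': '1984', 'zip3': '021'}
```

Even this toy example hints at why the undertaking is complex: every field must be mapped to the rule’s categories, and free-text fields such as clinical notes can hide identifiers that simple key filtering will never catch.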
De-identification by the Developer
If a covered entity is unwilling or unable to de-identify the data, the developer could step into that role and de-identify the data itself as a service provider to the covered entity. This approach requires the developer to act as a “business associate” of the covered entity, and would require the parties to enter into a business associate agreement. A downside to this approach, at least from the developer’s point of view, is that HIPAA directly regulates business associates. The developer would need to comply with certain provisions of HIPAA, including, among other things, the HIPAA Security Rule and its attendant obligations to conduct risk assessments of the developer’s systems, train employees, and create policies and procedures to protect the data. Standing up and maintaining a HIPAA compliance program is no small undertaking in terms of time and resources.
In a twist on the above scenario, developers might be able to leverage an existing business associate relationship with a covered entity to obtain de-identified data. For example, a developer that provides a covered entity with a standalone device or service could negotiate the right to de-identify PHI that the developer comes into contact with in the course of the underlying relationship. Depending on the terms of the agreement, the de-identified data could be used to improve the product or to create new products.
Access to Data for Product Support
After successfully developing and launching a product, developers may need ongoing access to PHI in order to deliver the AI-supported product to patients, to service the product, or to continue training the machine learning system with new data. For example, a developer might need access to a covered entity’s computer network in order to troubleshoot software that analyzes patient data. Direct access to systems containing PHI will likely necessitate a business associate agreement and the compliance infrastructure described earlier.
Other Contexts
The discussion above focuses on the developer’s access to PHI for purposes of AI development and support. Developers should be aware, however, that other activities could trigger HIPAA and other laws as well. For example, if the developer releases a smartphone application that creates, receives, maintains, or transmits PHI on behalf of a covered entity or another business associate, then HIPAA might apply to the developer. We discussed these issues in our prior series examining HIPAA and other privacy considerations when building a health app.
Considerations
Developers should work to identify the types and sources of data that their AI-driven product will need throughout its lifecycle. This is crucial to determining the legal regimes that might apply and the contractual arrangements that must be put in place. While HIPAA is the most frequently discussed law in this space, other laws and regulations might also apply depending on the nature of the data at issue. For example, 42 C.F.R. Part 2 could be implicated by a product that requires access to substance use disorder treatment information. State and local privacy laws could also come into play, as could the General Data Protection Regulation (GDPR) if personal data from individuals in the European Union is involved. Developers should engage counsel early in the development and commercialization processes in order to understand how their data-related decisions will affect their legal and compliance responsibilities. Counsel can also ensure that contractual arrangements with data sources appropriately address the developer’s rights, obligations, and legal liability.