Digital Health Checkup (Part Three): Key Questions About AI, Data Privacy, and Cybersecurity
Tuesday, December 5, 2017

In the third installment of our series, we consider some additional key questions about Artificial Intelligence (AI), data privacy, and cybersecurity that companies across the life sciences and technology sectors should be asking to address the regulatory and commercial pieces of the complex digital health puzzle.

AI, Data Privacy, and Cybersecurity

1. Which data privacy and security rules apply?
There currently is no specific law or regulation governing the collection, use, or disclosure of data for AI or the cybersecurity of AI technologies. As a result, digital health companies must assess how existing privacy and security rules will be interpreted and applied in the context of AI.

The applicable data privacy and security laws and regulations depend on a variety of criteria, including where you are located and where you offer the AI technology.

Here are a few regional considerations for AI in the U.S. and data privacy and cybersecurity in the EU and China:

United States

Because large datasets of information typically are necessary to train and test AI technologies, digital health companies that are developing or utilizing AI should consider whether individuals receive adequate notice and are provided appropriate choices over how their information is collected, used, and shared for such purposes. For example, a person might have different expectations about how their information is being collected and used depending on whether they are communicating with a digital health AI assistant provided by a hospital, pharmaceutical company, or running shoe manufacturer. Consequently, providers of such technologies should consider clearly and prominently explaining who is operating the assistant and the operator’s information practices.

Depending on whether and to what extent you have a business relationship with, or obtain information from, a healthcare provider or other covered entity in order to develop or implement your AI, you may need to comply with the more specific privacy and data security requirements contained in HIPAA and state medical privacy laws in California and Texas.

Similarly, the collection and use of genetic information and of biometric identifiers and information (based, for example, on facial scans, fingerprints, or voiceprints) trigger a patchwork of other federal and state laws.

The United States also regulates the security of connected products and the networks and systems on which they rely. The FTC historically has been the primary enforcement agency responsible for ensuring the “reasonableness” of product, system, network, and data security under Section 5 of the FTC Act. The FDA also has published pre- and post-market guidance on cybersecurity expectations for connected medical devices. Both the FTC and the FDA recognize that responsibility for protecting consumers against cyber threats extends across the entire product lifecycle, from initial design through vulnerability assessments, patching, and end-of-life considerations.

European Union

If you have a presence in the EU, offer goods or services there, or monitor the behavior of individuals there, you may be subject to the new EU General Data Protection Regulation (“GDPR”; see our checklist on this topic), a complex law backed by fines of up to €20,000,000 or 4 percent of global annual turnover, whichever is higher, as well as obligations to appoint local representatives and data protection officers, among other requirements. It contains strict limits and conditions on the collection, use, and sharing of health data, genetic data, and biometric data; requires extensive internal policies and procedures; and even requires the building of “data portability” features allowing individuals to export their data to rival services.
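To make the portability requirement concrete, here is a minimal Python sketch of a data export feature, assuming a hypothetical user data model; the record type, field names, and sample values are invented for illustration, not a format prescribed by the GDPR. The key idea is simply exporting the individual's data in a structured, commonly used, machine-readable format such as JSON.

```python
import json
from dataclasses import dataclass, asdict
from typing import Dict, List

# Hypothetical data model; record types and field names are
# illustrative only, not a format prescribed by the GDPR.
@dataclass
class UserHealthRecord:
    user_id: str
    email: str
    heart_rate_samples: List[Dict]
    sleep_sessions: List[Dict]

def export_user_data(record: UserHealthRecord) -> str:
    """Serialize a user's data into a structured, commonly used,
    machine-readable format (here, JSON) for a portability request."""
    return json.dumps(asdict(record), indent=2)

if __name__ == "__main__":
    record = UserHealthRecord(
        user_id="u-123",
        email="user@example.com",
        heart_rate_samples=[{"ts": "2017-12-01T08:00:00Z", "bpm": 62}],
        sleep_sessions=[{"start": "2017-11-30T23:10:00Z", "hours": 7.5}],
    )
    print(export_user_data(record))
```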

The EU’s “cookie rule” also prohibits most storage of data to, or reading of data from, Internet-connected devices without prior informed consent. Finally, many EU countries also have confidentiality rules that further restrict the collection and use of patient data, plus detailed health cybersecurity rules, such as a French law that requires all services hosting patient data to have first obtained Ministry of Health accreditation.

China

Healthcare data is also considered sensitive in China, and will soon be subject to more stringent requirements under the Information Security Technology – Personal Information Security Specification, in addition to existing data protection and cybersecurity obligations imposed by China’s Cybersecurity Law (see our recent post on this topic).

China also has regulations governing medical records and population health information, such as the Medical Institution Medical Records Administrative Rules and the Administrative Measures for Population Health Information.

Best Practice: Identify the jurisdictions in which you operate or offer your services, as well as those that present the highest risk to your company. Then assess what data you collect and the purposes for which you use it to determine which specific laws and regulations apply.

2. How do you ensure that you have the necessary rights to collect, use, and disclose data in connection with your AI technologies?
When collecting information directly from users of the AI, you should be transparent about the types of information you collect, how you use it, whether and to whom you disclose it, and how you protect it. It is critical that these disclosures be accurate and include all material information.

When developing, training, and testing AI technologies, companies also look to existing data sources. If a company is using personal data that it previously collected, it should consider whether individuals had sufficient notice that the information would be used to develop and improve new kinds of digital health solutions. When obtaining this information from third-party sources, the company should consider obtaining contractual representations and warranties that all necessary notices, consents, rights, and permissions were provided and obtained to permit the company to use the data as contemplated in the agreement.

In some cases, it also might be appropriate to provide users with choices over how their information is collected, used, and shared. In the EU, for example, the GDPR outlaws consent statements that are buried in small print: for digital health purposes, consent must be clear, granular, and specifically opted into in order to be valid. EU regulators also are starting to hold data recipients liable for inadequate due diligence; merely obtaining contractual assurances from data sources may not be enough.

Best Practice: Notice typically is provided through a privacy policy, but the interactive nature of AI technologies means that innovative just-in-time disclosures and privacy choices might be possible.
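As a rough illustration of what a just-in-time approach could look like in a conversational interface, here is a minimal Python sketch. The operator name, purpose wording, and consent logic are hypothetical assumptions for illustration; real notices and consent flows should be drafted with counsel.

```python
# Illustrative just-in-time disclosure flow for a conversational health
# assistant. Operator name, purposes, and wording are hypothetical.

CONSENTS = {}  # user_id -> set of purposes the user has opted into

def request_consent(user_id: str, purpose: str, operator: str) -> bool:
    """Show a short, contextual notice identifying the operator and
    ask for a granular, per-purpose opt-in at the moment of collection."""
    prompt = (
        f"This assistant is operated by {operator}. "
        f"May we use your responses for {purpose}? (yes/no): "
    )
    answer = input(prompt).strip().lower()
    if answer in ("y", "yes"):
        CONSENTS.setdefault(user_id, set()).add(purpose)
        return True
    return False

def collect_symptom_data(user_id: str):
    # Ask just in time, right before the sensitive data is requested,
    # rather than relying solely on a general privacy policy.
    if request_consent(user_id, "improving our symptom-checking models",
                       "Example Hospital"):
        return input("Please describe your symptoms: ")
    return None
```

The design point is that the disclosure names the operator and the specific purpose at the moment of collection, which maps to the "clear, granular, specifically opted into" expectations described above.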

3. What are the fairness and ethical considerations for AI and data analytics in digital health solutions?
To maximize the potential of artificial intelligence and big data analytics, it is important to ensure that the data sets used to train AI algorithms and to perform big data analytics are reliable, representative, and fair.

Example: Some diseases disproportionately affect specific populations. If the data sets used to train the AI underlying your digital health offerings are not representative of those populations, the AI might not be effective; a simple check along these lines is sketched after the list below. It also is critical that the data sets underlying your AI and data analytics are secured against unauthorized access or misuse.

In its report on “big data,” the FTC cautions companies to consider:

  • whether data sets are representative;

  • whether data models account for biases;

  • whether predictions based on big data are accurate; and

  • whether reliance on big data raises other ethical or fairness concerns.
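
To make the first of these points concrete, here is a minimal Python sketch of a representativeness check. The group labels, reference population shares, and the 0.5 tolerance threshold are illustrative assumptions, not regulatory standards or any agency's prescribed methodology.

```python
# Compare each group's share of the training data with its share of a
# reference population and flag groups that are badly under-represented.
from collections import Counter

def representativeness_report(labels, population_shares, tolerance=0.5):
    """Flag groups whose share of the data set falls below
    `tolerance` times their share of the reference population."""
    counts = Counter(labels)
    total = len(labels)
    flagged = {}
    for group, expected in population_shares.items():
        observed = counts.get(group, 0) / total
        if observed < tolerance * expected:
            flagged[group] = {"observed": round(observed, 3),
                              "expected": expected}
    return flagged

# Hypothetical example: group "C" bears a high disease burden but is
# barely present in the training data.
training_labels = ["A"] * 900 + ["B"] * 80 + ["C"] * 20
population = {"A": 0.70, "B": 0.20, "C": 0.10}
print(representativeness_report(training_labels, population))
# {'B': {'observed': 0.08, 'expected': 0.2},
#  'C': {'observed': 0.02, 'expected': 0.1}}
```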

Best Practice: Some companies are forming internal committees to ensure that their use of AI and data analytics is ethical and fair.

The EU also has detailed privacy rules affecting big data and AI. For instance, it grants all individuals a right to human review of fully automated decisions based on analysis of their data, and in many cases prohibits basing such decisions on sensitive data, such as their health data, ethnicity, political opinions, or genetics. It also outlaws any disclosure or secondary use of data originally collected for a different purpose unless certain conditions are met, including that the new use is “compatible” with the original uses (e.g., the new use must be within the individuals’ reasonable expectations).

Of note, if the digital health solution is potentially regulated by the FDA or an equivalent regulatory body, there may be additional pre-market and post-market considerations (e.g., validation of clinical recommendations using AI, adverse event reporting; see our earlier checkup on this topic).

Digital Health Checkup: Key Questions Market Players Should Be Asking (Part One)

Digital Health Checkup (Part Two): Key Commercial Questions When Contracting for Digital Health Solutions
