The BR Privacy & Security Download: February 2024
Thursday, February 8, 2024

STATE & LOCAL LAWS & REGULATION

New Jersey Enacts Comprehensive Privacy Law 
New Jersey has become the fourteenth state to enact a comprehensive privacy law (S332). S332 will apply to entities doing business in New Jersey that either control or process the personal data of (i) at least 100,000 New Jersey consumers or (ii) at least 25,000 New Jersey consumers while deriving revenue, or receiving a discount on the price of any goods or services, from the sale of personal data. Like its predecessors, S332 provides consumers the rights to access, delete, and correct their personal data and to opt out of its processing for sale, targeted advertising, and profiling. S332 also requires opt-in consent to process sensitive data and data protection assessments for certain processing activities. As in Colorado and California, controllers must recognize a universal opt-out mechanism, and the New Jersey Attorney General will be able to promulgate regulations. S332 will come into effect on January 15, 2025, and a thirty-day cure period will be provided during the first eighteen months.
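In practice, the universal opt-out mechanisms recognized under the Colorado and California regimes include the Global Privacy Control (“GPC”) signal, which participating browsers transmit as the HTTP request header `Sec-GPC: 1`. As an illustrative sketch only (neither S332 nor the GPC specification prescribes a particular implementation, and the helper name below is hypothetical), a website could detect the signal as follows:

```python
def honors_universal_opt_out(headers: dict) -> bool:
    """Return True if a request carries the Global Privacy Control signal.

    GPC-enabled browsers send the request header `Sec-GPC: 1`.
    Hypothetical helper for illustration; not legal or compliance advice.
    """
    # HTTP header names are case-insensitive, so normalize before lookup.
    normalized = {k.lower(): v.strip() for k, v in headers.items()}
    return normalized.get("sec-gpc") == "1"

# A request from a GPC-enabled browser carries the opt-out signal;
# an ordinary request without the header does not.
honors_universal_opt_out({"Sec-GPC": "1"})
honors_universal_opt_out({"User-Agent": "example"})
```

A controller that detects the signal would then suppress the sale or sharing of that visitor's personal data, just as if the visitor had submitted an opt-out request directly.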

NYDFS Proposes Guidance for Insurer Use of AI 
The New York State Department of Financial Services (“NYDFS”) issued a Proposed Insurance Circular Letter (“Proposed ICL”) to all insurers authorized to write insurance in New York State, licensed fraternal benefit societies, and the New York State Insurance Fund, offering guidance regarding the use of artificial intelligence (“AI”) systems and external consumer data and information sources (“ECDIS”) in insurance underwriting and pricing. The Proposed ICL builds upon and clarifies the NYDFS’s interpretation of existing disclosure and transparency obligations previously outlined in the NYDFS’s 2019 Insurance Circular Letter No. 1 (“2019 ICL”), provides guidance on industry standards, best practices, and compliance obligations under current laws and regulations (e.g., vendor oversight and governance and risk management obligations), and outlines the NYDFS’s enforcement priorities.

California Attorney General Announces CCPA Investigative Sweep of Streaming Services 
California Attorney General Rob Bonta announced that his office is conducting a California Consumer Privacy Act (“CCPA”) investigative sweep of businesses with streaming services. Specifically, the California Attorney General’s sweep will focus on compliance with the CCPA’s opt-out requirements for selling and sharing consumer personal information. The Attorney General’s announcement reminds businesses that sell personal information, or share it for targeted advertising purposes, that they must provide consumers with a right to opt out, that exercising the right should “be easy and involve minimal steps,” and that the choice must be honored across multiple devices when consumers are logged into their accounts.

CPPA Launches New Website for Consumers 
The California Privacy Protection Agency (“CPPA”), an enforcement authority for the California Privacy Rights Act (“CPRA”), launched a new resource dedicated to informing Californians about their privacy rights. The website is designed to help Californians understand their rights under the CPRA and how to submit a complaint if they believe a business has violated these rights. The website also provides resources for businesses to better understand their obligations under the CPRA. The website further features guidance on a range of privacy concerns, including which other government agencies can help if an individual is the victim of a data breach, identity theft, or another privacy harm.

FEDERAL LAWS & REGULATION

FTC Publishes Technology Blog on AI Privacy and Confidentiality 
The Federal Trade Commission (“FTC”) published a blog post addressed to “model-as-a-service” artificial intelligence (“AI”) companies, reminding them to uphold the privacy commitments they make to customers. “Model-as-a-service” companies develop and host AI models to provide to third parties via an end-user interface or an application programming interface (“API”). The FTC notes that “model-as-a-service” companies have a business incentive to continuously ingest data to train their AI models. The FTC cautions that if such companies choose to retain consumer data for AI training purposes without notice or obtaining express, affirmative consent, they are at risk of violating consumer protection laws. The FTC reminds companies that misrepresentations made in privacy commitments and failure to disclose material facts to consumers are subject to FTC enforcement, and misrepresentations, material omissions, and misuse of data associated with the training and deployment of AI models can violate antitrust laws.

Department of Defense Publishes Proposed Cyber Rule 
The U.S. Department of Defense (“DOD”) published its proposed rule on the Cybersecurity Maturity Model Certification (“CMMC”) program and eight guidance documents for the CMMC. The CMMC provides three tiers of assessment against cyber requirements based on the sensitivity of information handled by the contractor. Level 1 requires an assessment for compliance with 15 cybersecurity requirements under the Federal Acquisition Regulation’s Basic Safeguarding of Covered Contractor Information Systems requirements. Contractors reporting against Level 1 requirements will be required to submit annual self-assessments to the DOD. Level 2 requires an assessment against the cybersecurity control requirements of the National Institute of Standards and Technology’s (“NIST”) Special Publication 800-171 on Protecting Controlled Unclassified Information. Level 2 cloud providers must meet at least the “moderate” level of the Federal Risk and Authorization Management Program. Most Level 2 contractors are expected to be required to have an independent third party conduct the assessment. Level 3 contractors will be required to submit to an assessment by the Defense Industrial Base Cybersecurity Assessment Center, a unit of the Defense Contract Management Agency, against the 35 enhanced security requirements of NIST Special Publication 800-172.

U.S. Department of Commerce Publishes Proposed “Know Your Customer” Rules for IaaS Providers 
The U.S. Department of Commerce Bureau of Industry and Security issued a proposed rule aimed at preventing foreign actors from utilizing U.S. Infrastructure as a Service (“IaaS”) products (i.e., cloud computing services) to engage in malicious cyber-enabled activity, specifically by imposing certain due diligence and reporting requirements on U.S. IaaS providers and their foreign resellers. The proposed rule would require U.S. IaaS providers to implement a customer identification program, empower the U.S. Department of Commerce to prohibit or restrict access to U.S. IaaS products by certain foreign persons or persons in certain foreign jurisdictions, and require reporting of known instances of foreign persons training large artificial intelligence models “with potential capabilities that could be used in malicious cyber-enabled activity” (e.g., social engineering attacks or denial-of-service attacks). Blank Rome’s International Trade team has published additional analysis of the proposed rule.

FTC Enters Multilateral International Cooperation Agreement 
The Federal Trade Commission (“FTC”) announced its participation in the Global Cooperation Arrangement for Privacy Enforcement (“Global CAPE”), a non-binding multilateral arrangement under the global Cross-Border Privacy Rules that enables the data privacy and security authorities of Global CAPE participants to cooperate in other participants’ investigations and cross-border data protection and privacy enforcement actions. The FTC’s participation in Global CAPE is intended to increase the FTC’s effectiveness in enforcing U.S. consumer privacy and security laws by allowing Global CAPE participants to support the FTC’s legal actions and efforts addressing privacy and data security-related law enforcement issues without first negotiating a separate memorandum of understanding. Unlike the APEC arrangement it builds on, Global CAPE participation is not limited to Asia-Pacific countries.

NIST Publishes Initial Draft Guidance on Information Security Measures 
The National Institute of Standards and Technology (“NIST”) published for public comment initial working drafts of Volume 1 – Identifying and Selecting Measures (“Volume 1”) and Volume 2 – Developing an Information Security Measurement Program (“Volume 2”) of NIST Special Publication (SP) 800-55, the Measurement Guide for Information Security (collectively, “Draft Guide”). The Draft Guide offers actionable guidance on how organizations can develop information security measures to evaluate in-place security policies, procedures, and controls. Volume 1 outlines a flexible approach to the development, selection, and prioritization of such measures, while Volume 2 offers a methodology to develop and implement a structure for an information security measurement program. Public comments on the Draft Guide must be submitted to cyber-measures@list.nist.gov by March 18, 2024, and reviewers are strongly encouraged to use comment templates for Volume 1 and Volume 2.

House Lawmakers Propose Bill to Protect Individuals from AI-Generated Fake Images 
House lawmakers have proposed a bill named the No Artificial Intelligence Fake Replicas And Unauthorized Duplications Act (“No AI FRAUD Act”), which is intended to address the current patchwork of state laws and set up baseline protections for Americans. Under the No AI FRAUD Act, AI replication would be permitted only if an individual is 18 or older and is represented by counsel in an agreement or covered by a collective bargaining agreement. The bill also addresses First Amendment concerns by including a provision that alleged harm should be weighed against whether the AI-generated replica is transformative or protected commentary on a matter of public concern. The bill would hold parties liable for $50,000 per violation, plus profits generated from the illegal use of an individual’s likeness. In addition, parties distributing unauthorized AI creations copying an individual’s voice or likeness would be liable for $5,000 per violation, plus profits generated from the illegal use of the individual’s likeness.

U.S. LITIGATION

Closely Watched Cyber Insurance Dispute Relating to War Exclusion Settles 
Perhaps the most watched insurance coverage case involving a cyberattack ended abruptly last month when it settled. In Merck & Co. v. ACE American Insurance Co., Merck sought coverage from its property insurers for more than $1.4 billion in business interruption losses caused by the 2017 NotPetya attack, which some governments attributed to Russia’s military. In denying coverage, Merck’s insurers asserted that the exclusion for warlike and hostile actions precluded coverage for Merck’s losses. In a significant victory for policyholders, a New Jersey trial court ruled in Merck’s favor, holding that the antiquated wording of the war exclusion did not extend to cyberattacks like NotPetya but was limited to traditional forms of military hostilities. Last year, the New Jersey Appellate Division affirmed Merck’s victory, and the insurers appealed to the New Jersey Supreme Court, where the parties settled on the eve of oral argument. Although the New Jersey Supreme Court will not have a chance to weigh in, the Merck case and the issues it raised are driving insurers to rethink the war exclusion. In 2023, Lloyd’s of London began requiring its members to include exclusions in cyber policies for losses arising from state-backed cyberattacks, and other cyber insurers are following suit. The insurers are concerned with protecting themselves from systemic losses arising from a single cyber event. However, these changes to the war exclusion raise questions for policyholders about how insurers will apply them and whether they will leverage them to overly restrict coverage. One important issue is how to prove attribution of a cyberattack to a sovereign when most cyberattacks are carried out under subterfuge.
Policyholders should pay close attention if their insurers are seeking to modify the war exclusion in their policies and should push back on modifications that overreach and scrutinize attempts by insurers to apply the exclusion to limit or deny coverage.

District Court Rejects Efforts to Temporarily Block FTC Revision of Meta Privacy Settlement 
A federal district court denied Meta’s motion seeking an injunction pending its appeal of the court’s ruling that it lacked jurisdiction to stop the Federal Trade Commission (“FTC”) from reopening its administrative proceedings relating to a 2020 FTC Order requiring Meta to pay a $5 billion penalty for a number of alleged instances of data misuse. The FTC moved to amend the Order in May, accusing Meta of breaching its terms by misleading users about Facebook Messenger Kids communications and the access that outside app developers have to platform data. The FTC is seeking to add new requirements to the Order, including prohibiting Meta from monetizing data collected from users under the age of 18. The court found that Meta failed to show, as required for injunctive relief, that it would be harmed if the FTC was not enjoined, that the balance of the equities tipped in its favor, or that injunctive relief would be in the public interest. Meta has also filed a separate lawsuit attacking the constitutionality of the FTC’s administrative enforcement powers.

Incognito Mode Suit Settled for Undisclosed Amount 
Google LLC and its parent company, Alphabet Inc. (collectively, “Google”), agreed to undisclosed settlement terms in Brown, et al. v. Google LLC, et al., a class action first filed in 2020 based on claims that Google intentionally misled “millions” of consumers into believing that they could “browse the Web privately” through Google Chrome’s Incognito Mode when, in reality, Google used advertising technologies (e.g., Google Analytics and Google Ad Manager) to identify, track, and collect those users’ personal information and other intimate details. The complaint sought $5 billion in damages, including at least $5,000 per user for violations of federal wiretapping laws and California’s state privacy laws beginning on June 1, 2016.

Hospitals Seek to Enjoin HHS Online Tracking Enforcement 
Seventeen hospital associations, collectively representing thousands of hospitals and health systems across twenty states, filed an amicus brief in support of Plaintiffs in American Hospital Association, et al., v. Becerra, et al., seeking to enjoin the U.S. Department of Health and Human Services (“HHS”) from enforcing rules outlined in a December 2022 Bulletin, including restrictions on the use of online tracking technologies by entities regulated under the Health Insurance Portability and Accountability Act of 1996 (“HIPAA”) and other broad restrictions on regulated entities’ use and disclosure of patients’ individually identifiable health information (“IIHI”). The hospital organizations argue, among other things, that the prohibited practices under the rules would result in greater harm to the public as healthcare providers will struggle to provide accurate information and vital resources to the public, including to combat misinformation, and make it harder to identify and address disparities in the population. Other health organizations are also expected to file proposed briefs in support of similar arguments.

District Court Dismisses Illinois Biometric Privacy Try-on Class Action 
Proposed class action claims against Estee Lauder and its various brands were dismissed by U.S. District Judge Lindsay C. Jenkins. The plaintiffs alleged that technology permitting customers to “try on” Estee Lauder makeup products unlawfully captured and used biometric data in photos uploaded by the plaintiffs, in violation of the Illinois Biometric Information Privacy Act. The try-on tool allows consumers who visit various Estee Lauder brands’ websites to “try on” eyeshadow, lipstick, blush, and other products by uploading an image of themselves. The tool then uses facial mapping to detect facial characteristics and superimposes the product on the user’s face. In her order, Judge Jenkins found that the plaintiffs fell short of stating specific factual allegations that Estee Lauder is capable of determining identities, “whether alone or in conjunction with other methods or sources of information available.”

U.S. ENFORCEMENT

FCC Chairwoman Sends Letters to Automakers and Telecommunications Companies on Safe Connected Cars 
Federal Communications Commission (“FCC”) Chairwoman Jessica Rosenworcel sent letters to the largest U.S. automakers and certain telecommunications companies alerting them that connected cars have been used by abusers to stalk and harass domestic abuse survivors. Last year, the FCC was charged with implementing the Safe Connections Act, which provides the FCC with the authority to assist domestic abuse survivors with secure access to communications. The letters note that automakers have been reluctant or unwilling to assist victims of this abuse or to restrict an abusive partner’s access to the car’s connectivity and data, particularly when a victim co-owns the vehicle or is not named on its title. The Chairwoman asks automakers for details on the connectivity options that they pre-install, or plan to pre-install, in any vehicles they sell in the U.S. and how they plan to help domestic abuse survivors disconnect from their abusers.

FTC Enforcement Action Bans Use and Sale of Sensitive Location Data by Data Broker 
The FTC entered a Proposed Order against X-Mode Social, Inc., the second largest U.S. location data broker, and its successor, Outlogic, LLC (collectively, “X-Mode”), in connection with X-Mode’s illegal collection, compilation, and sale of consumers’ precise geolocation data without disclosing the purposes for which consumers’ data would be used and without obtaining such consumers’ informed consent to do so, as outlined in the FTC’s Complaint. X-Mode also failed to implement policies to remove sensitive location data (e.g., places of worship or reproductive care facilities) from the raw data it sold and failed to employ necessary technical safeguards and oversight to ensure compliance with consumer privacy laws. In addition to other provisions, the Proposed Order secures the FTC’s first-ever ban on the use and sale of sensitive location data.

FTC Bans Data Aggregator from Selling or Licensing Precise Location Data 
The FTC announced an enforcement action barring data aggregator InMarket Media (“InMarket”) from selling or licensing precise location data used for marketing and targeted advertising. The FTC alleged that InMarket violated Section 5 of the FTC Act by failing to adequately inform consumers and obtain their consent before collecting and using their location data. In addition, InMarket retained geolocation data for five years, which increased the risk of data exposure. The settlement resolving the FTC’s claims will prohibit InMarket from selling or licensing precise location data. InMarket will also be barred from selling, licensing, transferring, or sharing any product or service that categorizes or targets consumers based on sensitive location data. Finally, InMarket will be required to take additional steps to protect consumers, including deleting or destroying all previously collected location data and any products produced from this data unless it obtains consumer consent or ensures the data has been de-identified or rendered non-sensitive.

New York Attorney General Settles with New York-Presbyterian Hospital 
The New York Attorney General announced a $300,000 settlement with New York-Presbyterian Hospital for disclosing the protected health information (“PHI”) of website visitors in violation of the Health Insurance Portability and Accountability Act (“HIPAA”). The hospital’s website, which allows visitors to book appointments, search for doctors, learn about the hospital’s services, and research information about symptoms and conditions, used third-party tracking technologies (e.g., pixels). Such technologies collected information, including IP addresses and the URLs of the web pages clicked on, which, in some cases, revealed health conditions. This information was shared with third-party website analytics and digital advertising service providers for marketing purposes. In June 2022, a journalist reported this issue, and in March 2023, the hospital formally reported that the incident had affected more than 54,000 people. As a result of the settlement, the hospital has agreed to change its policies, secure the deletion of PHI, and maintain enhanced privacy safeguards and controls.

New York Attorney General Reaches $1.2 Million Agreement with Health Care Provider For Data Security Lapses 
New York Attorney General Letitia James announced that her office entered into an agreement with Refuah Health Center, Inc. (“Refuah”) for failing to maintain appropriate controls to protect and limit access to sensitive data. In May 2021, Refuah experienced a ransomware attack where a cyber-attacker accessed the data of approximately 250,000 New Yorkers, including names, addresses, phone numbers, social security numbers, and other personal health-related information. The Office of the Attorney General concluded that Refuah failed to decommission inactive user accounts, rotate user account credentials, and restrict employees’ access to only resources and data necessary for their business functions. As a result of the agreement, Refuah agreed to invest $1.2 million to develop and maintain stronger information security programs to protect patient data. In addition, Refuah agreed to pay the state $450,000 in penalties and costs.

FTC Announces Claims Process for Consumers Affected by CafePress’s Data Breach 
In 2022, the FTC finalized a Consent Order with CafePress for a data breach CafePress suffered in 2019 that revealed more than 180,000 unencrypted Social Security numbers. As part of the Consent Order, CafePress agreed to pay the FTC $500,000, which the FTC will use to compensate individuals affected by the breach. The FTC is notifying the individuals eligible to receive payment via email and/or mail – CafePress was required under the Consent Order to “directly or indirectly provide sufficient customer information to enable the [FTC] to efficiently administer consumer redress to shopkeepers who did not receive payable commissions because they closed their account.” Affected individuals can also apply to receive a payment if they were “misled” by CafePress’s data security claims and had their Social Security Number exposed in the breach. Eligible individuals can also file a claim online.

INTERNATIONAL LAWS & REGULATION

EDPB Publishes One-Stop-Shop Case Digest on the Security of Processing and Data Breach Notification 
The European Data Protection Board (“EDPB”) published a one-stop-shop (“OSS”) case digest reviewing OSS decisions relating to the EU General Data Protection Regulation’s (“GDPR”) Article 32 (security of processing), Article 33 (notification of a personal data breach to the supervisory authority), and Article 34 (communication of a personal data breach to the data subject). The GDPR’s OSS mechanism requires cooperation between a lead supervisory authority and other concerned supervisory authorities. The case digest illustrates how supervisory authorities have worked together under the OSS to adjudicate cases relating to security and data breach notification and provides a description of and links to decisions that provide valuable compliance guidance for these requirements.

Quebec Privacy Commissioner Publishes Guidance for Businesses on Privacy Notices 
The Commission d’accès à l’information du Québec published the first in a planned series of guidance documents for businesses, intended to make it easier to understand and comply with new obligations under Quebec’s Law 25. The first installment covers writing privacy notices, providing guidance on what a privacy notice should contain and how to write one in simple and clear terms that are easily understood by data subjects. Law 25, which entered into force September 22, 2023, includes significant new privacy requirements for businesses processing the personal data of individuals in Quebec, including enhanced transparency requirements and requirements to conduct privacy impact assessments before transferring personal data outside of Quebec.

Ontario Privacy Commissioner Publishes Guidance on Penalties under the Personal Health Information Protection Act 
The Office of the Information and Privacy Commissioner of Ontario (“IPC”) published guidance on how it will issue and determine administrative monetary penalties for violations of Ontario’s Personal Health Information Protection Act (“PHIPA”). The PHIPA governs the collection, use, and disclosure of health information in the health sector. Penalties may be assessed up to a maximum of $50,000 for individuals and $500,000 for organizations. The guidance provides examples of past cases that may have been good candidates for the issuance of administrative monetary penalties, such as serious snooping into patient records by individuals working at a health system, contraventions for economic gain such as improperly accessing and disclosing personal health information for the purpose of selling products and services, and disregarding an individual’s right of access. The guidance also reviews the factors that PHIPA regulations require the IPC to consider in determining the amount of penalties.

Ana Tagvoryan, Jason C. Hirsch, Tianmei Ann Huang, Amanda M. Noonan, and Karen H. Shin also contributed to this publication.
