The BR Privacy & Security Download: August 2024
Thursday, August 8, 2024

STATE & LOCAL LAWS & REGULATION

CPPA Publishes Proposed Data Broker Law Regulations
The California Privacy Protection Agency (“CPPA”), the regulatory agency tasked with enforcing the California Consumer Privacy Act (“CCPA”), has begun the formal rulemaking process for the Delete Act. The Delete Act amended California’s data broker registration law to require data brokers to register with the CPPA, delete all personal information related to a consumer who has made a delete request under the CCPA, report specific information on data collection and consumer request metrics, and undergo independent compliance audits every three years. The CPPA published a Notice of Proposed Rulemaking and Proposed Regulations to implement the Delete Act. The Proposed Regulations set forth the administrative procedures for registering as a data broker and clarify the definitions of a data broker and minor. The CPPA will accept written comments on the Proposed Regulations by, and hold a virtual public hearing with respect to the Proposed Regulations on, August 20, 2024.

California Automated Decisionmaking Technology Regulations Don’t Progress to Formal Rulemaking
The California Privacy Protection Agency (“CPPA”) did not advance its draft automated decisionmaking technology (“ADMT”) regulations to the formal rulemaking process at the CPPA’s July meeting. The CPPA has been working on highly anticipated regulations on ADMT, cybersecurity audits, and risk assessments, all of which continue to be in the “pre-step” phase before commencement of formal rulemaking, according to the CPPA’s general counsel. The CPPA released updates to existing regulations and the proposed ADMT, cybersecurity audit, and risk assessment regulations, as well as an initial statement of reasons and preliminary economic analysis related to the proposed rulemaking. The CPPA is not expected to begin its formal rulemaking process for the draft regulations, including the draft regulations regarding ADMT, until September at the earliest. As a result, the proposed regulations are not expected to be finalized in 2024.

Google Announces End of Effort to Phase Out Cookies
Google announced that it has ended plans to phase out third-party cookies in its Chrome browser. Google stated that it will continue to work on alternatives in its Privacy Sandbox and shift to an updated approach to privacy-preserving user tracking that “lets people make an informed choice that applies across their web browsing.” Google has been working on its third-party cookie phase out plans for five years. In January 2024, the U.K. Competition and Markets Authority ordered Google to pause cookie deprecation efforts over competition concerns with proposed alternatives.

Delaware Launches Personal Data Privacy Portal
The Delaware Department of Justice’s Fraud and Consumer Protection Unit launched a new personal data privacy portal to help Delaware consumers and businesses prepare for the implementation of Delaware’s Personal Data Privacy Act (the “Act”), which goes into effect January 1, 2025. The consumer portal includes simple definitions and explanations about Delawareans’ rights over their personal data. Similarly, the business portal provides a flow chart to help entities (including non-profits) determine if they may be regulated under the Act, examples of personal data and sensitive personal data, and general guidance for any “Controller” or “Processor” subject to the Act. The parents and kids portal provides very limited information and directs users to check back “soon” for more information on protecting kids’ personal data. All portals include a “Helpful Resources” section with links on topics such as scams, consumer protection videos, telemarketing, and more.

Washington, D.C. AG Introduces Consumer Health Data Privacy Law
District of Columbia Attorney General Brian L. Schwalb announced the introduction of the Consumer Health Information Privacy Protection Act (“CHIPPA”), a bill to regulate the collection, sharing, use, and sale of consumer health data not covered by the federal Health Insurance Portability and Accountability Act (“HIPAA”), such as health data collected by tech companies and developers of fitness and wellness apps or online patient support groups. CHIPPA, if passed, would require non-governmental entities doing business in D.C. to create and publish a consumer health data privacy policy and adhere to consumers’ rights and restrictions over their health data. CHIPPA would further prohibit geofencing around locations where healthcare services are delivered. Violations of CHIPPA would be considered deceptive trade practices pursuant to D.C. Official Code § 28-3904, which provides consumers a private right of action.

New York Attorney General Releases Guides on Website Privacy for Consumers and Businesses
The New York Attorney General announced the release of two privacy guides on the Attorney General’s website. The guides are intended for a business audience and a consumer audience, respectively. The business guide is intended to help businesses identify common mistakes when deploying tracking technologies. The consumer guide offers tips consumers can use to protect privacy when browsing the web. Common mistakes cited in the business guide include failing to tag or miscategorizing tracking tools so that the tools are not properly managed by websites’ consent management tools, use of hard-coded tags that do not work with the businesses’ consent management tools, posting privacy notices that are confusing or misleading, and having privacy choice interfaces that are confusing. The business guide also includes recommendations for using privacy-related disclosures and controls. The New York Attorney General’s office stated that the guides were issued following a review of more than a dozen popular websites, collectively serving more than 75 million visitors per month, which found unwanted tracking.

LA Superior Court Closes as Result of Ransomware Attack
The Superior Court of Los Angeles closed all 36 courthouse locations across Los Angeles County and took down both internal (e.g., case management systems) and external (e.g., court websites and jury duty portals) systems following an unprecedented ransomware attack on July 19, 2024. The network was restored to resume basic functionality on July 23, 2024, and full operations on July 29, 2024. Officials have not released any information as to what data may have been breached, citing the “sensitivity of the investigation.” However, an early statement suggested that the preliminary investigation showed no evidence of court users’ data being compromised.


FEDERAL LAWS & REGULATION

U.S. Senators Urge FTC to Investigate Automakers’ Sharing of Driver Information with Data Brokers
U.S. Senators Ron Wyden, D-Ore., and Edward J. Markey, D-Mass., wrote a letter urging the Federal Trade Commission (“FTC”) to investigate automakers’ disclosure of data to their data broker partners. The Senators’ letter included details regarding the sharing of drivers’ data with data brokers obtained as a result of previous inquiries from the Senators directed at automakers. The privacy practices of automakers have been under increased scrutiny following press investigations into data-sharing practices. Senators Wyden and Markey had also released the results of an investigation into automakers’ disclosure of location data to law enforcement agencies in April.

House Committee Postpones Markup of the American Privacy Rights Act
The House Energy and Commerce Committee canceled a scheduled meeting to discuss and mark up the American Privacy Rights Act (“APRA”) with no indication of when it would be rescheduled. Two days before the scheduled meeting, the House formally introduced the APRA (H.R. 8818). Notable changes to the APRA as H.R. 8818 include the removal of the requirements to provide opt-outs and conduct impact assessments with respect to covered algorithms that pose a consequential risk of harm. H.R. 8818 also restricts the transfer of a minor’s data to third parties with limited exceptions. H.R. 8818 further clarifies that APRA preempts any conflicting state law providing protections for children or teens but not those offering greater protections.

Senate Committee’s AI Hearing Divided over Federal Data Privacy Law
The Senate Commerce, Science, and Transportation Committee met to discuss the need for a national framework governing how companies collect, use, and share consumers’ personal information in light of the recent developments in artificial intelligence. The committee’s chair, Maria Cantwell (D-Wash) asserted that the American Privacy Rights Act (“APRA”), a bipartisan bill proposing a national data privacy law, would offset the increasing power of corporations to collect personal information to train AI technologies. The APRA, however, has faced significant opposition from Senator Ted Cruz (R-Texas). Cruz opposes various aspects of the APRA, including its data minimization and private right of action provisions. As the Senate continues to debate the APRA provisions, it remains unclear when the APRA will pass through Congress and become law.

NIST Announces Release of Final Drafts of AI Guidance Documents
The U.S. Department of Commerce National Institute of Standards and Technology (“NIST”) announced the release of new, final guidance documents to help improve the safety, security, and trustworthiness of artificial intelligence (“AI”) systems. The releases include two guidance documents designed to help manage the risks of generative AI that serve as companion resources to NIST’s AI Risk Management Framework (AI RMF) and Secure Software Development Framework (SSDF). A third document proposes a plan for U.S. stakeholders to work with others around the globe on AI standards. The first finalized generative AI guidance document is the AI RMF Generative AI Profile (NIST AI 600-1), which is intended to help organizations identify unique risks posed by generative AI and proposes actions for generative AI risk management that best align with their goals and priorities. The second finalized generative AI publication, Secure Software Development Practices for Generative AI and Dual-Use Foundation Models (NIST Special Publication (SP) 800-218A), is designed to be used alongside the Secure Software Development Framework.

NIST Releases Small Enterprise Quick Start Guide for Cybersecurity
The National Institute of Standards and Technology (“NIST”) published a Small Enterprise Quick Start Guide designed to help small, under-resourced entities understand the value and core components of the NIST Risk Management Framework (“RMF”) and provide a starting point for designing and implementing an information security and privacy risk management program. The RMF is a comprehensive and repeatable seven-step process that helps organizations identify, manage, assess, and monitor the risks applicable to their business. The Quick Start Guide distills the 183-page RMF into an 11-page document more readily accessible to organizations with lower compliance budgets and capacities. Importantly, the Quick Start Guide does not replace the RMF. Instead, it stands to make the RMF more usable for smaller businesses.


U.S. ENFORCEMENT

Texas Finalizes Record $1.4 Billion Biometric Privacy Settlement
The Texas Attorney General announced that Meta will pay $1.4 billion as part of the settlement of the state’s first enforcement action under its Capture or Use of Biometric Identifier Act (“CUBI Act”). The Texas Attorney General sued Meta in 2022, alleging that Meta had unlawfully captured the biometric data of millions of Texans without obtaining their informed consent as required by the CUBI Act. Facebook began using the facial recognition technology at issue in 2010 to allow users to tag friends in photos and videos uploaded to the site. Meta’s Facebook used the facial recognition feature until November 2021, following a $650 million settlement of class claims alleging the technology violated Illinois’ biometric privacy law.

Majority of SEC Claims against SolarWinds Dismissed
The U.S. District Court for the Southern District of New York dismissed significant portions of the Securities and Exchange Commission’s (“SEC”) fraud and internal accounting controls case against SolarWinds Corp. (“SolarWinds”) resulting from a series of cyberattacks against the company in 2020. These attacks introduced vulnerabilities into SolarWinds’ software, which caused widespread data breaches among SolarWinds customers, including large swaths of both public and private sector entities. The SEC’s complaint included allegations of (1) insufficient disclosures following discovery of the initial cyber incident, (2) failure to implement appropriate internal accounting controls as required by the Exchange Act, and (3) fraudulent disclosures regarding SolarWinds’ cybersecurity program. The court found no authority “supporting a legal duty to update [SolarWinds’] risk disclosure” where it had not definitively linked two pre-existing data security incidents to reveal a more significant vulnerability prior to the incident becoming publicly known. Therefore, the court held that such allegations “impermissibly rely on hindsight and speculation.” Similarly, the court dismissed the SEC’s argument that SolarWinds’ cybersecurity deficiencies constituted a failure to “maintain a system of internal accounting controls” as defined in Section 13(b)(2)(B) of the Exchange Act.
In dismissing these claims as “ill-pled,” the court found that while an issuer’s “system of internal accounting controls” requires the issuer to accurately report, record, and reconcile financial transactions, it “cannot reasonably be interpreted to cover a company’s cybersecurity controls such as its password and VPN controls.” Finally, the court permitted the SEC to proceed with its fraud claims relating to SolarWinds’ statements about its cybersecurity program and practices, which SolarWinds and its Chief Information Security Officer knew to be “deeply flawed.” Click here to learn more about this recent ruling.

FTC Order Bans Anonymous Messaging App from Allowing Users under 18
The FTC has issued a proposed order against NGL Labs LLC (“NGL Labs”), which is the developer of the anonymous messaging app NGL, to resolve claims that the company violated Section 5 of the FTC Act, the Children’s Online Privacy Protection Act, and other federal and state statutes. The FTC claimed that NGL actively marketed its services to children and teens while falsely claiming that its AI content moderation program filtered out cyberbullying and other harmful messages. The FTC also claimed that NGL sent fake messages to trick users into signing up. Under the proposed order, NGL Labs and two of its founders will pay $5 million and be banned from offering the app to anyone under 18. NGL Labs will also be required to implement a neutral age gate that prevents new and current users from accessing the app if they are under 18 and obtain express consent from consumers prior to billing them for a negative option subscription.

FCC Announces Enforcement Action Requiring Implementation of Privacy and Data Protection Measures
The Enforcement Bureau of the Federal Communications Commission (“FCC”) entered into a Consent Decree to resolve an investigation into whether Sorenson Communications, LLC and CaptionCall, LLC violated FCC rules prohibiting the retention of call content beyond the duration of a call and the submission of inaccurate information to the Telecommunications Relay Service Fund Administrator. As part of the settlement, the parties will implement “novel privacy and data protection measures.” These measures include a compliance plan with independent process assessment, internal compliance reporting, internal incident management and response, and an updated vendor management program. They also include a TRS privacy and data protection program, mandating the appointment of a privacy officer, development and implementation of a data retention schedule and data inventory, call content retention procedures, and more. In addition to these requirements, the parties have agreed to drop a series of claims to over $13 million in call captioning compensation, spend at least $4 million on privacy and security enhancements, reimburse the TRS Fund $12 million, and pay a civil penalty of $5 million. This action reveals a continued emphasis on records retention and management, as well as compliance with privacy rules, across sectors.

FTC Seeks Information from 8 Companies Regarding Surveillance Pricing
The Federal Trade Commission (“FTC”) announced that it had issued orders to eight companies offering surveillance pricing products and services that incorporate data about consumers’ characteristics and behaviors. The FTC is seeking information about the potential impact these practices have on privacy, competition, and consumer protection. The orders are aimed at helping the FTC better understand how surveillance pricing is affecting consumers, especially when the pricing is based on surveillance of an individual’s personal characteristics and behavior. The orders seek information regarding the types of products and services being offered, how data is collected and input into the products, the types of customers being targeted, and the impacts of these programs on consumers and prices. This inquiry continues the FTC’s recent focus on commercial surveillance and data security, a primary focus of the FTC since its Advance Notice of Proposed Rulemaking on the subject released in August 2022.

FTC States Hashing Does Not Render Data Anonymous
The FTC reemphasized in its Office of Technology Blog that just because data “lacks clearly identifying information” does not mean that it is anonymous unless “it can never be associated back to a person.” Specifically, the FTC stated that “hashing” personal data—a process that involves taking a piece of data and using math to turn it into a number in a consistent way—does not anonymize data. While hashing “has a nice potential benefit” of obscuring personal data, the FTC stated that hashed data remains inherently identifiable because it can be re-associated with an individual. This issue of hashed or pseudonymized data, as well as the use of unique identifiers that may appear anonymous despite being used to track individual users across platforms and services, poses a significant risk to online businesses in both the regulatory and litigation contexts. Businesses must ensure that their technology departments are aware of the types of identifiable information being collected and disclosed through their websites and service offerings.
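The FTC’s point is that hashing is deterministic: the same input always produces the same output, so anyone who can enumerate plausible inputs (email addresses, phone numbers) can rebuild the mapping and re-identify users. A minimal sketch of that re-identification, using hypothetical email addresses and SHA-256 (the specific hash function and field names are illustrative assumptions, not drawn from the FTC blog post):

```python
import hashlib

def hash_identifier(value: str) -> str:
    """Hash an identifier the way a data pipeline might 'anonymize' it."""
    return hashlib.sha256(value.strip().lower().encode("utf-8")).hexdigest()

# A "hashed" dataset a business might consider anonymous.
hashed_records = {
    hash_identifier("alice@example.com"): {"pages_viewed": 12},
}

# An adversary with a list of candidate identifiers can hash each one
# and build a reverse-lookup table (a rainbow-table-style attack).
candidates = ["bob@example.com", "alice@example.com", "carol@example.com"]
reverse_lookup = {hash_identifier(c): c for c in candidates}

# The "anonymous" record is now re-linked to a named individual.
for hashed_id, activity in hashed_records.items():
    if hashed_id in reverse_lookup:
        print(reverse_lookup[hashed_id], activity)
```

Because the hash is consistent across datasets, the same hashed value also functions as a persistent cross-platform identifier even when it is never reversed, which is why the FTC treats such data as identifiable rather than anonymous.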

FTC, ICPEN and GPEN Announce Results of Dark Pattern Survey
The Federal Trade Commission (“FTC”) announced the results of a review undertaken by the International Consumer Protection and Enforcement Network (“ICPEN”) and Global Privacy Enforcement Network (“GPEN”) regarding the use of dark patterns. The ICPEN reviewed 642 websites and mobile apps that offered subscription services from companies worldwide. ICPEN’s review found that 76 percent of the websites and apps employed at least one possible dark pattern, and nearly 67 percent used multiple possible dark patterns. The dark patterns most often encountered during ICPEN’s review were “sneaking practices” (i.e., hiding or delaying the disclosure of information that might affect a consumer’s purchase decision) and interface interference techniques (e.g., obscuring important information or preselecting options that frame information to steer consumers toward making decisions more favorable for the business). GPEN’s review, in which the FTC also participated, found that the majority of the websites and apps it examined used at least one potential dark pattern.

HHS Office for Civil Rights Settles with Healthcare Provider over HIPAA Security Rule Failures
The U.S. Department of Health and Human Services’ Office for Civil Rights (“OCR”) has settled with Heritage Valley Health System (“Heritage Valley”) over potential violations of the Health Insurance Portability and Accountability Act (“HIPAA”) Security Rule. OCR alleged that Heritage Valley potentially violated the Security Rule by failing to (1) conduct a compliant risk analysis to determine risks and vulnerabilities to electronic protected health information (“EPHI”), (2) implement a contingency plan to respond to emergencies that damage systems containing EPHI, and (3) implement policies and procedures to allow only authorized users access to EPHI. Under the settlement agreement, Heritage Valley agreed to pay $950,000 and implement a corrective action plan, which includes conducting a thorough risk analysis to determine vulnerabilities of its EPHI, implementing a risk management plan, reviewing and revising its policies and procedures to comply with HIPAA, and training its workforce on HIPAA policies and procedures. OCR will monitor the corrective action plan for three years. 


U.S. LITIGATION

Supreme Court Strikes Down Chevron Deference
In Loper Bright Enterprises et al. v. Gina Raimondo, the U.S. Supreme Court overturned the test the Court had established in 1984 in Chevron v. Natural Resources Defense Council. Chevron provided a framework for courts reviewing federal agencies’ interpretations of the laws they administer, in part instructing courts to defer to the interpretations of federal laws by administrative agencies. In overturning the well-known precedent, Chief Justice John Roberts wrote for the court’s majority that “courts must exercise their independent judgment in deciding whether an agency has acted within its statutory authority, as the [Administrative Procedure Act] requires.” Although the ultimate effect of Loper Bright is unclear, it is possible that litigants will be emboldened to challenge federal agency policies because they feel that the removal of mandatory deference will increase their odds of success.

BIPA Case Settled for $2.4M
Cook County Circuit Judge Michael Mullen gave final approval to a $2.4 million settlement agreement to resolve claims in Jennifer Chatman v. Euromarket Designs Inc. dba Crate & Barrel (“Crate & Barrel”) involving 1,796 class-member employees of Crate & Barrel who were required to scan their finger, hand, or palm to clock in and out of work without first providing their written, informed consent. Named Plaintiff Jennifer Chatman claimed Crate & Barrel and its affiliate CB2 illegally collected employees’ biometric data without informing employees of the purpose of collection, failed to disclose the retention and deletion schedule over such data, and shared employees' data with its third-party payroll vendors without employees' authorization, in violation of the Illinois Biometric Information Privacy Act. Each class member is expected to receive about $860 after other costs, including $847,000 in attorneys’ fees, are distributed.

Class Action Alleging Mass Tracking of Internet Users Settles for $115M
Oracle America Inc. (“Oracle”) agreed to pay $115 million into a non-reversionary cash fund as part of its settlement in Katz-Lacabe et al v. Oracle America Inc., resolving claims that the software company collected and sold internet users’ detailed personal information (e.g., address, race, political views, and purchase history) without their knowledge or consent, in violation of California and federal privacy laws. The proposed class action alleged that, from August 19, 2018, to the date of the action’s final judgment, Oracle collected personal data from approximately 220 million internet users and built digital “dossiers” about those individuals through Oracle ID Graph and other marketing tools. Data and profiles were then sold to third-party marketers directly or indirectly through Oracle’s advertising and marketing products, which allowed third-party data brokers to “traffic in” such data. Plaintiffs’ intrusion upon seclusion claim and alleged violation of the Federal Wiretap Act were dismissed prior to settlement.

North Carolina Judge Approves Settlement in Ransomware Dispute
A U.S. District Court of North Carolina judge approved a $4 million settlement to resolve two class action claims against Eye Care Leaders Holdings LLC and its affiliates (“ECL”). In their complaint, a physician class and a patient class alleged that ECL failed to give truthful, timely notice to its ophthalmology practices and their patients about ransomware attacks that damaged electronic medical records and billing software for months. Under the settlement, the physician class will receive $1.4 million, and the patient class will receive $2.6 million. The settlement also provides for other nonmonetary benefits, including account credits to the physicians, ending collection efforts for unpaid invoices, the ability for physicians to terminate their contracts with ECL, and a release of patients’ claims against the physicians.


INTERNATIONAL LAWS & REGULATION

EU AI Act Enters into Force
The EU AI Act was published in the Official Journal of the EU on July 12, 2024, and entered into force on August 1, 2024. The EU AI Act’s entry into force starts the clock for compliance obligations under the EU AI Act, which vary depending upon the type of AI system. Obligations applicable to prohibited AI systems begin on February 2, 2025. Certain obligations related to general purpose AI models will apply beginning on August 2, 2025. Most obligations under the EU AI Act, including certain rules applicable to high-risk AI systems and systems subject to transparency requirements, will become applicable on August 2, 2026. Remaining rules applicable to high-risk AI systems will become applicable on August 2, 2027. To learn more about the EU AI Act, see our article summarizing key concepts here.

Brazilian Data Protection Authority Determines Social Media Company Use of Personal Data to Train AI Violates Data Protection Law
The Brazilian National Data Protection Authority (“ANPD”) issued a provisional order suspending the validity of Meta’s new privacy policy authorizing the use of personal data to train artificial intelligence systems. According to the ANPD’s announcement, the agency initiated an investigation as a result of evidence of violations of Brazil’s General Data Protection Law (“LGPD”). The ANPD stated in its announcement that Meta did not provide adequate notice and information to make data subjects aware of the possible consequences of the processing of their personal data for the development of generative AI models and that Meta’s legal basis for processing personal data for that purpose is inadequate under the LGPD. The ANPD also cited concerns regarding use of personal data from children, such as photos, videos, and posts, to train AI.

Pay or Consent Model Violates the EU Digital Markets Act
The European Commission has informed Meta of its preliminary findings that Meta’s “pay or consent” advertising model fails to comply with the EU’s Digital Markets Act (“DMA”). Under the DMA, gatekeepers (i.e., certain digital platforms designated by the European Commission, which include Meta) must seek users’ consent for combining their personal data between designated core platform services and other services. Gatekeepers cannot make use of their services conditional on users’ consent. In response to the DMA, Meta introduced a “pay or consent” model whereby EU users of Facebook and Instagram had to choose between: (i) an ads-free version for a monthly subscription fee or (ii) a free version with personalized ads. The European Commission has taken the preliminary view that Meta’s “pay or consent” model violates the DMA, as it forces users to consent to the combination of their personal data and fails to provide them with a less personalized but equivalent version of its social networks.

ECJ Holds Representative Organizations Can Bring Privacy Litigation
The European Court of Justice (“ECJ”) has held that representative organizations can bring privacy litigation on behalf of individuals. In Case Number C‑757/22, the ECJ held that Verbraucherzentrale Bundesverband eV (the Federal Union of Consumer Organizations and Associations) could sue Meta Platforms Ireland Ltd. in Germany on behalf of consumers because Meta failed to secure valid consent to collect users’ data in connection with its processing of users’ data in its App Center. Meta simply informed users that if they used a Meta app, they were agreeing to let Meta collect and publish certain personal data. The ECJ held that representative organizations can sue on behalf of data subjects where the controller fails to inform such data subjects of the purposes for the collection of personal data and the legal grounds for processing it, which is an infringement of the data subjects’ rights.

Australian Information Commissioner Commences Civil Penalty Action in Data Breach Case
The Office of the Australian Information Commissioner (“OAIC”) has instituted proceedings against the health insurer Medibank for reportedly failing to protect the personal health data of 9.7 million customers. This action is rooted in the massive 2022 data breach in which sensitive client records were exposed online. The OAIC alleges Medibank failed to adhere to the Australian Privacy Principles (“APPs”), specifically APP 11.1, which mandates secure protection of personal information. The potential repercussions for Medibank include significant financial penalties under the Privacy Act 1988, which could amount to up to AU$2.2 million for every instance of a privacy infringement.

Daniel R. Saeedi, Rachel L. Schaller, Ana Tagvoryan, Timothy W. Dickens, Gabrielle N. Ganze, Jason C. Hirsch, Tianmei Ann Huang, Amanda M. Noonan, and Karen H. Shin contributed to this article.
