The BR Privacy & Security Download: June 2023
Friday, June 9, 2023

Welcome to this month's issue of The BR Privacy & Security Download, the digital newsletter of Blank Rome’s Privacy, Security & Data Protection practice. We invite you to share this resource with your colleagues and visit Blank Rome’s Privacy, Security & Data Protection webpage for more information.


STATE & LOCAL LAWS & REGULATIONS

Texas Senate Passes Texas Data Privacy and Security Act

The Texas Legislature passed the Texas Data Privacy and Security Act (“TDPSA”) after reconciling the differences between the Senate’s and House of Representatives’ versions of the bill. The TDPSA broadly applies to entities that conduct business in Texas and process or sell personal data but exempts all “small businesses” as defined by the U.S. Small Business Administration. The TDPSA provides Texas consumers with rights similar to those under other state comprehensive privacy laws: the rights to access, delete, and correct personal data and to opt out of the sale of personal data, the processing of personal data for targeted advertising purposes, and profiling in furtherance of a decision that produces legal or similarly significant effects concerning the consumer. If enacted, the TDPSA would come into effect on March 1, 2024.

Florida Legislature Approves Digital Bill of Rights

Florida’s legislature passed S.B. 262 to establish the Florida Digital Bill of Rights (“FDBR”). Although the FDBR would establish rights and obligations for Florida residents similar to those found under existing state privacy laws, the FDBR has a significantly narrower scope of applicability, applying to businesses with over $1 billion in global gross revenue that either (i) derive at least 50 percent of such revenue from online advertisement sales, (ii) operate a consumer smart speaker and voice command service, or (iii) operate an app store or digital distribution platform with at least 250,000 different applications. The FDBR also adds biometric and geolocation information to the definition of personal information for purposes of data breach notification obligations. If signed by the governor, the FDBR would go into effect on July 1, 2024.

California Privacy Protection Agency Proposes Topics for CPRA Regulations

The California Privacy Protection Agency (“CPPA”), the agency responsible for rulemaking under and enforcement of the California Privacy Rights Act (“CPRA”), held a public meeting on May 15, 2023, to discuss potential future rulemaking topics. The topics include: (i) drafting regulations to address business-to-business data and employee data; (ii) revising and/or adding exceptions to the right to limit the use and/or disclosure of personal information (including for HR/employee data); (iii) requiring businesses to comply with requests to opt out of sale/sharing in a timeframe commensurate with how quickly they sell/share the personal data; (iv) requiring businesses to provide information about where the consumer can submit a complaint when denying consumer requests; and (v) drafting language to harmonize with the Colorado Privacy Act regulations. The CPPA also discussed the activities of and next steps for its CPRA Rules Subcommittee.

Illinois Legislature Passes Amendments to Right to Privacy in the Workplace Act

The Illinois Legislature has passed amendments to the state’s Right to Privacy in the Workplace Act (S.B. 1515). The amendments require employers to follow specified processes if they choose to take adverse employment action against an employee after receiving notice from any Employment Eligibility Verification System of a discrepancy relating to the employee’s name or social security number. For instance, if the employer receives notice of a discrepancy from any federal or state agency (e.g., the IRS or Social Security Administration), the employer must provide the employee with the following rights and protections: (i) the right to choose which work authorization documents to present to the employer during the verification or re-verification process; and (ii) the right to represent himself or herself, or to be represented by counsel, in any meetings, discussions, or proceedings with the employer.

New York Attorney General Proposes Bill to Create Cryptocurrency Regulations

New York Attorney General Letitia James proposed new legislation to impose stricter regulations on the cryptocurrency industry. If enacted, the Crypto Regulation, Protection, Transparency, and Oversight (“CRPTO”) Act would expand the oversight of crypto entities operating in New York State. The bill applies to the entire cryptocurrency marketplace by broadly defining what is considered a “digital asset.” The bill would require a wide array of crypto platforms to make their financial statements public, would prohibit industry conflicts of interest, and would impose protections for crypto investors. Platforms would also be required to reimburse customers who are victims of fraud, and the bill would permit the Attorney General to enforce the law by issuing subpoenas, imposing civil penalties, collecting restitution and damages, and shutting down fraudulent businesses.


FEDERAL LAWS & REGULATIONS

FTC Adopts Biometric Policy Statement

The Federal Trade Commission (“FTC”) issued a policy statement (“Statement”) raising significant concerns about consumer privacy, data security, and the potential for bias and discrimination associated with the increasing use of biometric information and related technologies. The Statement warns that companies making false, misleading, or unsubstantiated statements about the accuracy of biometric information technologies and/or practices may face enforcement action if the FTC determines such actions to be “deceptive” or “unfair” in violation of Section 5 of the FTC Act. In making its determination, the FTC will consider factors such as the foreseeability of harms, the promptness of actions taken, and the adoption of appropriate data practices, including employee training, use of available tools, and evaluation of third-party service providers. According to the FTC’s Bureau of Consumer Protection Director, the Statement “makes clear that companies must comply with the law regardless of the technology they are using.”

COPPA 2.0 Reintroduced

U.S. Senators Bill Cassidy (R-LA) and Ed Markey (D-MA) reintroduced the Children and Teens’ Online Privacy Protection Act 2.0 (“COPPA 2.0”), which attempts to update privacy protections for children under the Children’s Online Privacy Protection Act (“COPPA”) and to expand protections to teens. The bill would: (i) prohibit internet companies from collecting personal information of users aged thirteen to sixteen without their consent; (ii) ban targeted advertising to children and teens; (iii) revise COPPA’s “actual knowledge” standard to a “reasonably likely to be used” by children or teens standard; (iv) create an “Eraser Button” allowing parents and children to delete a child’s or teen’s personal information when technologically feasible; (v) establish a “Digital Marketing Bill of Rights for Teens” that limits the collection of personal information of teens; and (vi) establish a Youth Marketing and Privacy Division at the Federal Trade Commission.

FTC Proposes Amendments to the Health Breach Notification Rule

The Federal Trade Commission issued a Notice of Proposed Rulemaking (“NPRM”) to amend the Health Breach Notification Rule (“Rule”) and has requested public comment on the proposed changes. The proposed amendments would, among other things: (i) clarify that the Rule applies to health apps and other similar technologies; (ii) clarify that a “breach of security” under the Rule includes any unauthorized acquisition of identifiable health information that occurs as a result of a disclosure, and is not limited to cybersecurity intrusions or nefarious behavior; and (iii) expand the content that must be included in the breach notification to affected individuals (e.g., the notice must include information about the potential harm stemming from the breach and the names of any third parties that might have acquired unsecured personally identifiable health information).

Biden-Harris Administration Announces New Actions for AI Innovation

The Biden-Harris Administration announced new actions intended to promote responsible artificial intelligence (“AI”) innovation. These actions include: (i) the National Science Foundation providing $140 million in funding to launch seven new National AI Research Institutes; (ii) commitments from leading AI developers to participate in a public evaluation of AI systems, consistent with responsible disclosure principles, on an evaluation platform developed by Scale AI; and (iii) the Office of Management and Budget releasing draft guidance on the use of AI systems by the U.S. government for public comment. These actions build upon other efforts the Biden-Harris Administration has made toward responsible AI innovation, including the Blueprint for an AI Bill of Rights and related executive actions announced last fall, the Executive Order signed in February, the AI Risk Management Framework, and a roadmap for standing up a National AI Research Resource released earlier this year.

Senate Holds Hearing on AI Regulation

The U.S. Senate held a hearing to establish a base of knowledge about artificial intelligence (“AI”), the known and emerging risks of harm posed by AI technologies and the algorithms they use, and how to mitigate those harms while maximizing AI’s benefits through regulation. The subcommittee’s inquiry was directed toward Sam Altman, CEO of OpenAI (the company behind the popular generative AI tool ChatGPT), Christina Montgomery, Vice President of IBM, and Gary Marcus, Professor Emeritus at New York University. The hearing addressed topics including: (i) establishing key guiding principles; (ii) transparency through licensing and reporting obligations to a governmental oversight agency, consumer disclosures, and independent third-party audits; (iii) risks in employment and education, election safety, consumer privacy, and intellectual property rights; (iv) a new framework of liability standards for AI companies and end users that would be dissimilar to those established under Section 230 of the Communications Decency Act in connection with social media platforms; and (v) global competitiveness and cooperation.


U.S. LITIGATION

FTC Argues COPPA Does Not Preempt State Law Claims

The FTC filed an amicus brief in support of a ruling by the U.S. Court of Appeals for the Ninth Circuit (“Ninth Circuit”) that COPPA does not preempt state law claims that are consistent with COPPA. The brief was filed in the class action case Jones v. Google, in which parents, on behalf of their children, alleged that YouTube and certain YouTube channel owners violated state laws by using persistent identifiers to collect the personal information of children under thirteen and to track and serve targeted advertising to those children, without notifying their parents or obtaining their consent. The U.S. District Court for the Northern District of California initially held that COPPA preempted the state law claims, but the Ninth Circuit reversed that decision. Google asked the Ninth Circuit to review the ruling, prompting the court to ask the FTC to opine on the matter.

Attorneys General Sue VoIP Company over Robocalls

A lawsuit has been filed by fifty Attorneys General’s Offices against Michael D. Lansky, LLC, d/b/a Avid Telecom, and the LLC’s managers (collectively, “Avid Telecom”), alleging that, between 2019 and 2023, the Voice over Internet Protocol (“VoIP”) service provider initiated and/or facilitated the transmission of over 24.5 billion robocalls, including over 7.5 billion calls to telephone numbers registered on the FTC’s National Do Not Call Registry. The calls included Social Security, Medicare, auto warranty, Amazon, DirecTV, and credit card interest rate reduction scams, as well as calls that illegally spoofed numbers associated with various federal and state law enforcement agencies and established private sector entities. The lawsuit was filed as part of a bipartisan effort by the Anti-Robocall Multistate Litigation Task Force to address illegal robocalling and to ensure compliance with telemarketing and consumer protection laws.

Former Uber CSO Sentenced over Data Breach Cover-up

Former Chief Security Officer of Uber Technologies, Inc. (“Uber”) Joseph Sullivan was sentenced to three years of probation and ordered to pay a $50,000 fine for his role in the attempt to cover up a 2016 data breach involving the stolen records of approximately 57 million Uber users and drivers, including 600,000 driver’s license numbers. Sullivan joined Uber shortly after the Federal Trade Commission (“FTC”) launched an investigation into a separate 2014 data breach. Sullivan assisted with the FTC’s investigation and testified under oath about Uber’s cybersecurity program, including the vulnerability that led to both the 2014 and 2016 security incidents, just ten days before discovering the 2016 breach. Sullivan then attempted to hide the breach from the FTC, Uber’s new management, and the public, and arranged to pay off the hackers in exchange for a signed non-disclosure agreement that included false representations.

TikTok Sues Montana over Ban

TikTok, Inc. (“TikTok”) filed a lawsuit seeking declaratory and injunctive relief against the Attorney General of Montana following the passage of S.B. 419, which bans the app in the state. The complaint argues that the ban is unlawful and its enforcement should be enjoined because the ban: (i) violates TikTok’s (and the platform’s users’) rights to freedom of speech under the First Amendment; (ii) is preempted by federal law governing foreign affairs and national security; (iii) violates the Commerce Clause; and (iv) operates as an unconstitutional Bill of Attainder. The action highlights ongoing concerns over Chinese influence and the collection, use, security, and control of U.S. data by foreign actors.

Meta Pushing Back against Texas Attorney General’s Biometric Privacy Suit

Meta filed a motion for partial summary judgment in a lawsuit brought by Texas Attorney General Ken Paxton in February 2022. The attorney general accused Meta of violating the Deceptive Trade Practices Act (“DTPA”) and the state’s Capture or Use of Biometric Identifier Act (“CUBI”) through its former photo- and video-tagging feature, which used facial recognition technology to identify users. In its motion, Meta argued that the attorney general should not be permitted to pursue the claim on behalf of residents not signed up for Facebook or Instagram because the statute permits civil penalties only for wrongs committed against a business’s own consumers. Meta also argued that the suit would open the floodgates for Texas to seek penalties based on the state’s entire population, even if a misrepresentation affected only a small number of consumers.


U.S. ENFORCEMENT

FTC Premom App Proposed Order Prohibits Data Sharing

The Federal Trade Commission (“FTC”) filed a proposed order in connection with an enforcement action against Easy Healthcare Corporation (“Easy Healthcare”), which developed and operates the fertility-tracking Premom mobile application. The FTC alleged that Easy Healthcare engaged in deceptive and unfair practices in violation of Section 5 of the FTC Act and violated the Health Breach Notification Rule by unlawfully disclosing Premom users’ health data, including sensitive personal information and other unsecured health information, to third parties, deceiving users about Easy Healthcare’s data sharing practices, and failing to notify consumers of these unauthorized disclosures. The proposed order permanently prohibits the company from sharing users’ personal health data with third parties for advertising. The order also requires the company to obtain user consent for other sharing purposes, comply with other consumer data privacy and security requirements, and pay a $100,000 civil penalty.

Online Education Technology Provider Agrees to Settle COPPA Claims

Edmodo, a defunct online education provider, agreed to settle FTC claims that the business violated the Children’s Online Privacy Protection Act (“COPPA”) by illegally collecting the personal data of hundreds of thousands of children under the age of 13 without their parents’ consent. The $6 million settlement was immediately suspended due to Edmodo’s inability to pay. Under COPPA, schools can authorize the collection of children’s personal information on behalf of parents; however, a website operator must provide notice to the school of the operator’s collection, use, and disclosure practices. The FTC’s complaint alleges that Edmodo did not require schools and teachers to view its terms of service before creating an account and attempted to shift the responsibility of obtaining parental consent to schools and teachers. In addition, the company permitted children under the age of 13 to download a free mobile version of Edmodo, which collected device IDs, cookies, and IP addresses used for advertising.

Lender and Mortgage Servicer Fined by New York for Failure to Protect Customer Data

OneMain Financial Group LLC (“OneMain”), which specializes in nonprime lending, was fined $4.25 million by New York’s Department of Financial Services (“DFS”) for failing to adequately protect customer data. OneMain’s cybersecurity gaps violated DFS’s Cybersecurity Regulation, which requires banks, insurers, and other financial businesses to take steps to strengthen their digital defenses. According to DFS regulators, OneMain stored a list of key passwords in a shared folder named “PASSWORDS,” and many of its administrative accounts still used a default password provided by OneMain when users joined the network. OneMain also did not sufficiently manage the risk of exposing data stored with third-party service providers or control which employees had access privileges to its network.

HHS OCR Settles with MedEvolve Following a Breach of Unsecured PHI

The U.S. Department of Health and Human Services’ (“HHS”) Office for Civil Rights (“OCR”) announced a settlement with MedEvolve, Inc. (“MedEvolve”), which provides practice management, revenue cycle management, and practice analytics software services to healthcare entities, over potential violations of the Health Insurance Portability and Accountability Act (“HIPAA”). MedEvolve suffered a data breach affecting the protected health information (“PHI”) of 230,572 individuals, including patient names, billing addresses, telephone numbers, primary health insurers, doctors’ office account numbers, and, in some cases, Social Security numbers. The potential HIPAA violations include MedEvolve’s failure to conduct an analysis to determine risks and vulnerabilities to electronic PHI across the organization and its failure to enter into a business associate agreement with a subcontractor. MedEvolve has agreed to pay OCR $350,000 and implement a corrective action plan.

FTC Focused on Consumer Impacts from Business Use of AI

The Federal Trade Commission (“FTC”) published guidance on its business blog on the use of AI tools, warning that the FTC is intensely focused on how companies’ use of AI, including generative AI tools, impacts consumers. The post focused on the potential for generative AI tools to harm or manipulate consumers through deceptive or unfair practices under the FTC Act. For example, the FTC stated that companies should avoid using generative AI in ways that trick people into making harmful decisions, similar to the FTC’s prior guidance to avoid dark patterns that steer consumers to unwanted purchases or make it difficult to cancel services. The FTC also cited the insertion of ads within a service’s generative AI features as a potential problem area, warning that it should always be clear to consumers interacting with generative AI that an ad is an ad. The FTC concluded with a warning that companies should not ignore their obligation to responsibly oversee their use of AI, including by conducting risk assessments, training staff and contractors on the use of AI tools, and monitoring the actual use and impact of tools that are deployed.

Federal Court Dismisses FTC Claim against Kochava with Leave to Amend

An Idaho federal court dismissed the FTC’s claim against data broker and analytics provider Kochava Inc. (“Kochava”). The FTC had alleged that Kochava invades consumers’ privacy by selling to advertisers precise geolocation data tied to unique mobile ad IDs, revealing, for example, a user’s visits to sensitive locations such as reproductive health clinics and places of worship. The FTC alleged this constituted an unfair practice under the FTC Act because it enables third parties to make sensitive inferences about individuals that could lead to “stigma, discrimination, physical violence, and emotional distress,” and because the act of disclosing the sensitive data is an invasion of privacy in and of itself. The court disagreed that the FTC had met its burden to sufficiently allege consumer harm, stating that because the sensitive information could be ascertained only by inference and was otherwise generally accessible by lawful means, the severity of the alleged privacy injury was lessened. The court also stated that the FTC did not indicate how many device users suffered privacy intrusions, which is important to establishing how substantial the consumer injury may be. The ruling allows the FTC to return with an amended complaint alleging additional facts relating to consumer injury.

FTC Accuses Meta of Privacy Abuses and Moves to Prohibit Profiting from Data of Minors

The FTC has moved to expand its 2020 privacy order (“2020 Order”) against Meta, alleging that Meta failed to comply with the 2020 Order, misled parents about their ability to control with whom their children communicated through its Messenger Kids app, and misrepresented the access it provided some app developers to private user data. Under the proposed changes to the 2020 Order, Meta would be prohibited from profiting from data it collects from users under the age of eighteen, including through its virtual reality products, and from otherwise using such data for commercial gain after those users turn eighteen. Meta would also be subject to other expanded limitations, including being prohibited from launching new or modified products, services, or features without written confirmation from an independent assessor that its privacy program is fully compliant with the updated order, and being required to disclose and obtain users’ affirmative consent for any future uses of facial recognition technology.

INTERNATIONAL LAWS & REGULATIONS

MEPs Adopt Resolution Opposing EU-U.S. Data Privacy Framework

Members of the European Parliament (“MEPs”) adopted a resolution arguing that the European Commission should not grant the United States an adequacy decision for transfers of personal data under the proposed EU-U.S. Data Privacy Framework (“DPF”). The resolution states that the DPF is an improvement on previous frameworks but is still not adequate because it does not provide sufficient safeguards. For example, the MEPs state that the framework still allows for bulk collection of personal data in certain cases, does not make bulk data collection subject to independent prior authorization, and does not provide clear rules on data retention, among other alleged deficiencies. The MEPs argue that the framework must be improved to ensure that it will survive the legal challenges that are certain to be brought against it. The European Commission is in the process of adopting an adequacy decision for personal data transfers based on the DPF.

EU AI Act Continues to Move Forward

The European Parliament’s Civil Liberties and Internal Market committees jointly adopted the proposed text of the AI Act. The AI Act will now be considered by the entire European Parliament, with a vote tentatively scheduled for June 14, 2023. Once formally adopted by the European Parliament, the AI Act will enter the last phase of the legislative process: trilateral negotiations between the European Parliament, the EU Council, and the EU Commission. If adopted, as widely expected, the EU AI Act would be the first comprehensive AI law in the world. It would apply a risk-based approach to regulating AI applications, banning systems that create an unacceptable risk, imposing specific regulatory requirements on high-risk applications, and leaving other AI applications largely unregulated.

Canadian Privacy Commissioner Publishes Guidance on Workplace Privacy

The Office of the Privacy Commissioner of Canada (“OPC”) published new guidance on workplace privacy for employers subject to Canada’s federal privacy legislation. The guidance discusses key privacy considerations for the use of employee personal information in the workplace and best practices with respect to employee monitoring. The OPC states that employee monitoring must be limited to purposes that are specific, targeted, and appropriate in the circumstances, that the least privacy-invasive measures available should be used, and that transparency about employee monitoring is fundamental. The guidance provides that employers must make employees aware of the purpose, nature, extent, and reasons for monitoring, as well as its potential consequences for workers. The guidance also provides eight practical tips for employers when designing employee monitoring programs or otherwise collecting employee personal information, including conducting privacy impact assessments.

CJEU Holds Mere Infringement of GDPR Does Not Give Rise to Right of Compensation

The Court of Justice of the European Union (“CJEU”) held that not every infringement of the General Data Protection Regulation (“GDPR”) is sufficient to support a data subject’s claim for compensation. The CJEU issued its decision in the Österreichische Post case (C-300/21), in which the Austrian Post collected data on Austrian residents and used an algorithm to categorize individuals into target groups for sale to advertisers. The CJEU held that, in addition to an infringement of the GDPR, material or non-material damage resulting from that infringement and a causal link between the damage and the infringement are required to state a claim for compensation. The CJEU further held that the damage need not reach a certain threshold of seriousness for a claim to be viable, and that the law of each Member State must provide the criteria for determining the extent of compensation, such that “full and effective compensation for the damage suffered” is ensured.

Meta Fined €1.2 Billion by Ireland’s Data Protection Commission

Ireland’s Data Protection Commission (“DPC”) fined Meta €1.2 billion for violating the European Union’s General Data Protection Regulation (“GDPR”). This fine is the largest GDPR penalty to date. According to the DPC, Meta infringed the GDPR by transferring EU Facebook user data to the U.S. without the required safeguards in place. The DPC ordered Meta to bring its operations into compliance with the GDPR by stopping the transfer of personal data of Facebook users within five months. Meta stated it will appeal the decision and seek a stay with the courts.
