Welcome to this month’s issue of The BR Privacy & Security Download, the digital newsletter of Blank Rome’s Privacy, Security, & Data Protection practice.
RECENT HIGHLIGHT
Blank Rome partners Sharon R. Klein, Alex C. Nisenbaum, and associate Karen H. Shin authored this alert discussing California’s finalized cybersecurity regulations, their potential effects on the overall data privacy and security landscape, and the implications for businesses operating within the state.
STATE & LOCAL LAWS & REGULATIONS
Colorado Fails to Pass Amendments to Colorado AI Act: The Colorado Legislature failed to pass amendments to the Colorado AI Act during its special legislative session despite lobbying from technology companies concerned about burdens on small businesses developing or deploying high-risk artificial intelligence (“AI”) systems. The Colorado AI Act, originally set to take effect on February 1, 2026, defines a high-risk AI system as any AI system that makes, or is a substantial factor in making, a consequential decision concerning matters such as employment, financial or lending services, essential government services, healthcare services, housing, or legal services. With negotiations on amendments unsuccessful, the Colorado Legislature instead passed Senate Bill 4, which delays the Colorado AI Act’s effective date to June 30, 2026. This extension gives the Colorado Legislature more time to consider industry concerns and revise the law during its regular session starting in January.
CPPA Issues Proposed Modifications to DROP Regulations: The California Privacy Protection Agency (“CPPA”) approved modifications to the proposed regulations concerning the Delete Request and Opt-Out Platform (“DROP”) mandated by the Delete Act. The Delete Act requires the CPPA to establish an accessible deletion mechanism that allows consumers to request from registered data brokers the deletion of all non-exempt personal information related to the consumer through a single deletion request to the CPPA. Modifications to the DROP regulations include requiring: (i) the CPPA to verify that deletion requests to data brokers originate from actual California residents, reducing the risk of fraudulent requests; (ii) data brokers to follow updated data standardization requirements, ensuring they consistently compare their databases with information from the DROP system for compliance; and (iii) a 100 percent match across multiple identifiers before processing deletion requests to minimize the risk of erroneous data deletions.
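As an illustration of the 100 percent match requirement, the following minimal Python sketch treats a deletion request as an all-or-nothing comparison across every identifier supplied with the request; the field names and normalization step are hypothetical assumptions, not part of the regulatory text.

```python
# Illustrative only: hypothetical field names and normalization; the DROP
# regulations do not prescribe any particular implementation.

def normalize(value: str) -> str:
    """Apply consistent formatting before comparison (trim whitespace, casefold)."""
    return value.strip().casefold()

def is_full_match(request: dict[str, str], record: dict[str, str]) -> bool:
    """Return True only if every identifier in the request matches the record."""
    return all(
        field in record and normalize(record[field]) == normalize(value)
        for field, value in request.items()
    )

request = {"email": "pat@example.com", "phone": "555-0100"}
record = {"email": "Pat@Example.com", "phone": "555-0100", "name": "Pat"}

# Under the proposed rule, a data broker would delete only on a 100 percent match.
print("process deletion" if is_full_match(request, record) else "do not delete")
```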
Colorado Attorney General Issues Notice of Proposed Rulemaking for Minors’ Privacy under Colorado Privacy Act: The Colorado Attorney General has issued a Notice of Proposed Rulemaking to amend the rules implementing the Colorado Privacy Act (“CPA”). The proposed rules are intended to clarify the recent statutory amendments to the CPA, which will become effective October 1, 2025. The proposed rules require controllers that know or willfully disregard that a user is a minor (under 18 years old) to obtain valid consent before processing the minor’s personal data or enabling design features that could increase, sustain, or extend a minor’s use of an online service. The rules clarify that common use of a design feature does not automatically mean it is safe for minors, and that controllers should consult guidance from other jurisdictions when determining age knowledge standards. The amendments also expand requirements for data protection assessments to address heightened risks to minors and clarify compliance expectations for businesses processing minors’ data under the CPA.
Illinois Enacts Bill Banning Standalone AI Therapy: Illinois has enacted the Wellness and Oversight for Psychological Resources Act (“WOPRA”), becoming one of the first states to formally regulate the use of AI in therapy and psychotherapy services. The WOPRA prohibits individuals and entities from providing, advertising, or offering therapy or psychotherapy services to the public in Illinois, including through the use of internet-based AI, unless conducted by licensed professionals. AI systems, including mental health chatbots, cannot make independent therapeutic decisions, directly interact with clients in any form of therapeutic communication, generate therapeutic recommendations or treatment plans without professional review, or detect emotions or mental states. Licensed professionals may use AI only for administrative tasks (e.g., scheduling, billing) and supplementary support (e.g., documentation, data analysis) with written patient consent. Violations can incur civil penalties up to $10,000 per incident. The WOPRA excludes religious counseling, peer support, and self-help resources. The WOPRA became effective on August 1, 2025.
CPPA Releases Blog on Consumer Rights: The CPPA has published a three-part blog series titled “LOCKED!” (an acronym for limit, opt-out, correct, know, equal treatment, and delete). The blog series explains the rights afforded to California residents under the California Consumer Privacy Act (“CCPA”), as amended by the California Privacy Rights Act. The blog posts remind California residents to review a business’s privacy policy to find the links to exercise their rights and provide a link for California residents to submit a complaint to the CPPA if they feel their rights have been violated.
FEDERAL LAWS & REGULATIONS
HHS OCR Publishes New and Updated HIPAA Privacy Rule Guidance: The U.S. Department of Health and Human Services (“HHS”) Office for Civil Rights (“OCR”) has published new and updated guidance on certain aspects of the Health Insurance Portability and Accountability Act (“HIPAA”) Privacy Rule. HHS OCR published a new FAQ on permitted disclosures of protected health information (“PHI”) to value-based care arrangements, such as accountable care organizations. The new FAQ clarifies that a patient is not required to give their authorization before a covered healthcare provider can disclose PHI for the treatment activities of another healthcare provider, as long as both providers are treating the patient through a value-based care arrangement. HHS OCR also updated its FAQ on what PHI individuals have a right under HIPAA to access from their healthcare providers and health plans. The updated FAQ now explicitly includes consent forms for treatment in the list of health records that an individual has a right to access.
Federal Court Filing System Experiences Data Breach: The federal judiciary disclosed that it experienced a cyberattack on its case management system, PACER. The federal judiciary announced that while most documents on PACER are public, some filings contain confidential or proprietary information that has been targeted by hackers. The federal judiciary stated in an announcement that, in response to the cyberattack, it is taking additional steps to strengthen protections for sensitive case documents, including by implementing more rigorous procedures to restrict access to sensitive documents under carefully controlled and monitored circumstances. The announcement did not disclose details about the timing or frequency of the cyberattack.
U.S. LITIGATION
Jury Finds Meta Liable for CIPA Violations: A California federal jury found that Meta’s collection of sensitive health data via a software development kit implemented in the popular menstrual cycle tracking app, Flo, violated the California Invasion of Privacy Act (“CIPA”). This matter began in 2021 as a class action brought by Flo users against Flo Health, Google, and Meta, alleging that Flo shared user data with third-party advertisers via advertising tracking technologies implemented within the Flo app and website. Flo Health settled with plaintiffs the day before closing arguments, leaving Meta as the sole remaining defendant. Following the verdict, Meta filed a pair of post-trial motions seeking to decertify the class and overturn the verdict. Meta emphasized that its tools do not “record” conversations, as prohibited under CIPA, and that it simply provides code that advertisers can choose to add to their apps or websites. Critically, Meta emphasized that in order to implement these tools, apps and websites must commit not to send Meta any sensitive information, including health information.
Second Circuit Upholds Dismissal of VPPA Claim Holding Pixel Data not Identifiable to Ordinary Person: The Second Circuit Court of Appeals upheld the dismissal of a class action lawsuit against Flipps Media (“Flipps”), which was accused of violating the Video Privacy Protection Act (“VPPA”) by using the Meta Pixel to transmit users’ video viewing data to Meta. The Court ruled that the transmitted data—URLs and Facebook ID numbers embedded in code—could not be interpreted by an “ordinary person” to identify specific video-watching behavior. This ruling aligns with the Third and Ninth Circuits, which also apply the “ordinary person” standard, contrasting with the First Circuit’s broader “reasonable foreseeability” standard. The Court emphasized that VPPA liability hinges on what the disclosing party provides, not what a tech-savvy recipient might infer. The decision is a win for digital media companies, as it could make it easier to successfully dismiss VPPA claims early in litigation. The plaintiff’s request for an en banc review was denied on July 28, 2025, and a petition to the U.S. Supreme Court may follow.
D.C. Circuit Affirms FCC Fines: The D.C. Circuit Court of Appeals upheld a $92 million fine imposed by the Federal Communications Commission (“FCC”) on T-Mobile and Sprint for selling users’ sensitive location data to third-party aggregators without consent. The Court rejected the companies’ argument that the facts did not constitute a legal violation, emphasizing the egregiousness of their conduct and their capacity to pay substantial penalties. The companies had paid the fines and appealed directly to the D.C. Circuit, claiming they were denied a jury trial. However, the Court ruled that they voluntarily surrendered that right by not waiting for an enforcement suit from the FCC, which would have allowed for a trial. The fines, issued in April 2024, were part of a broader FCC enforcement action totaling nearly $200 million that also affected AT&T and Verizon. Verizon is appealing in the Second Circuit. AT&T successfully overturned its fine in the Fifth Circuit.
Federal Court Upholds FCC Data Breach Rule: The Sixth Circuit upheld the FCC’s 2024 expanded data breach notification rules for telecommunications carriers. The 2024 rules expanded the definition of breach to include inadvertent access, use, or disclosure of customer information; expanded the definition of covered information to include personally identifiable information rather than only customer proprietary network information; and required notification of law enforcement and the FCC in the event that a breach impacts and poses a risk of harm to more than 500 individuals. Industry groups challenged the rules, arguing that they were too close to a 2016 FCC order that was specifically rejected by Congress under the Congressional Review Act in 2017. The Sixth Circuit rejected this argument, finding that the 2016 order that Congress disapproved was far more expansive than the 2024 rules, imposing a broad array of privacy rules on broadband Internet access. Congress disapproved of those expansive rules as a whole, not as individual components. Further, the Court found that the minor differences between the reporting requirements in the 2016 and 2024 rules were sufficient to render the rules not substantially the same.
Federal Court Finds West Virginia’s “Daniel’s Law” Unconstitutional: The U.S. District Court for the Northern District of West Virginia ruled that West Virginia’s Daniel’s Law violates the First Amendment. The law, modeled after New Jersey’s Daniel’s Law, is intended to protect judges and law enforcement officers by prohibiting the disclosure of their home addresses and phone numbers without consent. The Court applied strict scrutiny, finding the law to be a content-based restriction on truthful, noncommercial speech. The Court held that the law failed the narrow tailoring requirement because it lacked a notice provision and a knowledge (scienter) requirement, which are essential to avoid chilling protected speech. The decision diverged from previous rulings in New Jersey, where courts upheld a similar law under a less stringent constitutional test. This ruling could have national implications, as many states have enacted similar laws since 2021 following the tragic murder of Daniel Anderl, son of Judge Esther Salas. The case is expected to be appealed to the Fourth Circuit and may influence pending litigation in the Third Circuit involving New Jersey’s version of the law.
California Anti-Deepfake Law Struck Down: A California federal judge struck down a California anti-deepfake law as violating the First Amendment. Assembly Bill 2655 was designed to combat AI-generated deepfakes and other digitally manipulated content in elections. It would have required large platforms to label or remove such content during political campaigns and provide a means for California residents to report materially deceptive content. A group of social media platforms, including X, and media outlets alleged that the law “authorizes the government to substitute its judgement for those of the platforms.” They further argued that the law violated constitutional free press protections by imposing burdens on some platforms that would not apply to other outlets. The state argued that the law was more limited in scope and narrowly tailored to address documented issues of impersonating candidates and deepfake images of political figures.
NetChoice Sues to Invalidate Colorado Law Requiring Social Media Warnings for Minors: Technology industry trade association NetChoice filed a lawsuit in Colorado federal court challenging Colorado H.B. 24-1136. The law, set to take effect on January 1, 2026, requires social media platforms to display warning messages to minors about the potential mental and physical health impacts of social media use. The law also mandates notifications to minor users every 30 minutes if a minor has spent over an hour on social media in a 24-hour period or is using it between 10 p.m. and 6 a.m. NetChoice argues that the law constitutes compelled government speech, violating the First Amendment, because it forces private companies to deliver a message on a controversial topic without consensus. The group claims this coerces platforms into making statements that could be used against them in litigation, regardless of whether they believe the statements are accurate. NetChoice also contends the law is unconstitutionally vague, both in its definition of “social media platform” and in the required content of the warnings.
Justices Uphold Mississippi Age Verification Law: The U.S. Supreme Court upheld a Fifth Circuit stay of a district court’s preliminary injunction obtained by NetChoice in its challenge to Mississippi’s HB 1126 (“the Act”). The Mississippi legislature passed HB 1126 in 2024 to protect minor children from exposure to harmful online content and regulate the collection and processing of children’s data. The Act applies to digital service providers that allow users to create profiles, socially interact, and post content. The law requires these providers to verify the age of users, obtain parental consent for minors, limit the collection of data relating to minors, and restrict the spread of harmful material to children. NetChoice argues that the law unconstitutionally restricts access to protected speech for social media users of all ages by forcing them to provide some level of personal information before accessing the platforms. It further argues that the parental-consent provisions are another unlawful restriction on access to protected speech for minor users. While there were no dissents from the ruling, Justice Brett Kavanaugh wrote a concurrence in which he opined that NetChoice will likely succeed in its challenge, but that it had failed to demonstrate that it was entitled to the emergency relief sought.
Dental Practice Management Company Settles CIPA Class Action: Aspen Dental has agreed to pay $18.7 million to settle a class action alleging it violated CIPA by using Google and Meta tracking pixels on its dental management website. Plaintiffs allege that these pixels collected and shared sensitive patient information entered on the website with third-party advertisers without patients’ consent, in violation of CIPA. Aspen Dental has not admitted any wrongdoing but has agreed to a cash payment to settle claims from certain individuals who booked appointments through its website between February 2022 and January 1, 2025.
Healthcare System Settles Class Action Alleging Wrongful Disclosure of Patient Information via Website Pixels: BJC HealthCare (“BJC”) has agreed to pay $5.5 million to settle a website tracking pixel class action. Plaintiffs allege that BJC implemented tracking pixels, including those from Meta and Google, on a pair of patient portal websites without plaintiffs’ knowledge or consent. Patients used these portals to upload sensitive health information and communicate with medical professionals. The Settlement Class includes all individuals who, between June 2017 and August 2022, used BJC’s MyChart patient portal.
U.S. ENFORCEMENT
CPPA Asks Court to Enforce Investigative Subpoena: The CPPA initiated a legal action to force Tractor Supply Co. to comply with an investigative subpoena seeking information about its compliance with the CCPA. The CPPA opened a probe into Tractor Supply Co. following a consumer complaint, which it escalated in January 2025 by issuing a request for interrogatories seeking basic facts about the company’s privacy practices. The company objected to the CPPA’s interrogatories as overbroad and logistically burdensome and refused to provide answers about its business relating to the period before January 1, 2023. A representative of the CPPA stated that it had spent months trying to resolve the dispute through meetings, phone calls, and letters, to no avail. This is the first time the CPPA has publicly disclosed an ongoing investigation or launched a judicial action to enforce an investigative request. This matter may provide valuable insight for companies in determining the scope of their CCPA compliance recordkeeping obligations.
FTC Settles AI-Scam Claims with E-Commerce Firms: The FTC announced that it had entered into a settlement with the owner of a business opportunity scheme that allegedly deceived consumers by guaranteeing income from operating online storefronts using AI-powered software. As part of the settlement, a network of e-commerce coaching firms owned by defendant Bratislav Rozenfeld will pay more than $15 million in penalties and will be permanently barred from promoting or selling any business opportunities. The case is part of the FTC’s Operation AI Comply, which is aimed at combating deceptive and harmful uses of AI in the marketplace.
Defense Contractor and Private Equity Firm Settle False Claims Act Action Arising from Voluntary Self-Disclosure of Cybersecurity Violations: The Department of Justice (“DOJ”) announced that defense contractor Aero Turbine Inc. (“Aero Turbine”) and Gallant Capital Partners LLC, a private equity firm, agreed to pay $1.75 million to settle allegations under the False Claims Act stemming from their failure to comply with cybersecurity requirements in a contract with the Department of the Air Force. Between January 2018 and February 2020, Aero Turbine allegedly did not implement key cybersecurity controls outlined in NIST SP 800-171, potentially exposing sensitive defense information. Additionally, in mid-2019, both companies allegedly shared sensitive files with a software firm in Egypt, whose personnel were not authorized to access such data under the contract terms. The DOJ acknowledged that Aero Turbine and Gallant took significant remedial steps: they self-disclosed the issues, cooperated with the investigation, and acted promptly to address the problems.
Bipartisan Coalition of State Attorneys General Calls on Social Media Company to Strengthen Location Privacy: A bipartisan coalition of 37 attorneys general wrote a letter to Instagram urging the company to revise its newly launched location-sharing feature, which displays users’ precise locations on a map. The coalition expressed serious concerns about public safety and data privacy, especially for children and survivors of domestic violence, warning that such features could be exploited by predators and stalkers. In the letter, the attorneys general urged Instagram to ensure that minors are not allowed to enable location-sharing; that adult users receive clear alerts explaining the feature, its risks, and how Instagram will use their location data; and that users are given simple, accessible controls to disable location sharing at any time. The letter was co-signed by attorneys general from states including California, Texas, New York, Florida, Illinois, and Virginia. The initiative reflects growing bipartisan concern over tech platform accountability and digital safety standards.
NYDFS Settles with Dental Insurance Provider Over Cybersecurity Violations: Healthplex, Inc. (“Healthplex”), a dental insurance management provider, has agreed to pay a $2 million penalty to settle with the New York Department of Financial Services (“NYDFS”) over alleged failures to comply with the NYDFS’s cybersecurity regulation. The settlement follows a 2021 phishing attack in which a customer service employee clicked on a malicious email, allowing threat actors to access sensitive consumer data stored in the employee’s email account. NYDFS’s investigation revealed that Healthplex lacked a data retention policy to limit stored emails, did not use multi-factor authentication (“MFA”) for email, and delayed its breach notification. Healthplex waited over four months to report the incident, far exceeding the regulation’s 72-hour requirement. As part of the settlement, Healthplex must hire an independent auditor to assess its MFA controls.
HHS Settles with Public Accounting and Consulting Firm over Alleged HIPAA Violations: HHS OCR announced a settlement with BST & Co. CPAs, LLP (“BST”), an accounting and consulting firm, over alleged violations of the HIPAA Security Rule following a 2019 ransomware attack. BST, acting as a HIPAA business associate, had access to protected health information from a covered entity customer. OCR’s investigation revealed that BST failed to conduct a thorough risk analysis to identify vulnerabilities in its systems that store electronic protected health information. As part of the settlement, BST agreed to pay $175,000 to OCR, implement a corrective action plan to be monitored for two years, conduct a comprehensive risk analysis, develop and maintain HIPAA-compliant policies and procedures, and enhance its HIPAA training program for all relevant staff. This marks OCR’s 15th ransomware enforcement action and the 10th under its Risk Analysis Initiative, focused on ensuring covered entities and business associates comply with their risk analysis and risk management obligations under the HIPAA Security Rule’s administrative safeguard requirements.
INTERNATIONAL LAWS & REGULATIONS
Privacy Commissioner of Canada Publishes Guidance on Biometrics for Public and Private Sector: The Office of the Privacy Commissioner of Canada (“OPC”) has issued updated guidance for both public and private sector organizations on the responsible use of biometric technologies, such as facial recognition and fingerprint scanning. This guidance follows a public consultation held between November 2023 and February 2024, which included input from academia, civil society, businesses, legal associations, and individuals. The guidance emphasizes the need for a clear and appropriate purpose when collecting, using, or disclosing biometric data. Organizations must assess privacy risks, ensure proportionality, and implement safeguards to protect biometric information. The guidance outlines consent requirements, stresses transparency, and calls for accuracy testing of biometric systems.
UK and Canadian AI Organizations Launch International Coalition to Safeguard AI Development: The UK’s AI Security Institute has launched the Alignment Project, a global initiative to advance research in AI alignment—ensuring AI systems behave predictably and in accordance with human values. The project brings together international partners including the Canadian AI Safety Institute, Amazon Web Services, Anthropic, Schmidt Sciences, and others, with support from civil society and academia. The project will fund cutting-edge research, provide up to £5 million in cloud computing credits, and offer venture capital to accelerate commercial solutions. The Alignment Project invites governments, philanthropists, and industry to contribute through funding, infrastructure, and research collaboration. Its goal is to remove barriers to AI adoption by building trust and ensuring systems remain transparent and responsive to human oversight.
New Zealand Privacy Commissioner Announces New Biometrics Rules: The New Zealand Privacy Commissioner has introduced a Biometric Processing Privacy Code (the “Code”) that will create specific privacy rules for businesses and organizations using biometric technologies such as facial recognition. The Code aims to balance innovation with the protection of sensitive personal data while ensuring that businesses and organizations using biometric systems do so safely, transparently, and proportionately. Key requirements of the Code include mandatory assessments of whether biometric use is effective and proportionate, implementation of safeguards to reduce privacy risks, and requirements to notify individuals when biometric data is being collected. The Code prohibits intrusive uses, such as predicting emotions or inferring protected characteristics like ethnicity or sex. The Code comes into force on November 3, 2025, with a grace period until August 3, 2026, for existing biometric systems to comply. It carries the same legal weight as the New Zealand Privacy Act Information Privacy Principles and replaces them for biometric-specific applications.
Australian Information Commissioner Files Civil Penalty Proceedings Against Telecommunications Provider: The Australian Information Commissioner (“AIC”) has filed civil penalty proceedings in the Federal Court against Singtel Optus Pty Limited and Optus Systems Pty Limited (collectively “Optus”) following a data breach disclosed in September 2022. The breach involved unauthorized access to the personal information of approximately 9.5 million Australians, some of which was later released on the dark web. The AIC alleges that between October 2019 and September 2022, Optus failed to take reasonable steps to protect personal data from misuse, interference, and unauthorized access, violating the Australian Privacy Act 1988. The breach exposed sensitive data including names, birth dates, addresses, contact details, and government-issued identifiers like passport and Medicare card numbers. The case could result in a civil penalty of up to $2.22 million per contravention, with one alleged contravention per affected individual. The Court will determine whether a civil penalty is ordered, and the amount of any penalty.
Daniel R. Saeedi, Rachel L. Schaller, Ana Tagvoryan, Gabrielle N. Ganze, P. Gavin Eastgate, Timothy W. Dickens, Jason C. Hirsch, Karen H. Shin, and Amanda M. Noonan contributed to this article.