STATE & LOCAL LAWS & REGULATIONS
Virginia Legislature Passes Bill Regulating High-Risk AI: The Virginia legislature passed HB 2094, the High-Risk Artificial Intelligence Developer and Deployer Act (the “Act”). Taking an approach similar to the Colorado AI Act passed in 2024 and California’s proposed regulations for automated decision-making technology, the Act defines “high-risk AI systems” as AI systems that make consequential decisions, meaning decisions that have a material legal or similarly significant effect on a consumer’s ability to obtain housing, healthcare services, financial services, employment, or education. The Act would require developers to use reasonable care to prevent algorithmic discrimination and to provide detailed documentation on an AI system’s purpose, limitations, and risk mitigation measures. Deployers of AI systems would be required to implement risk management policies, conduct impact assessments before deploying high-risk AI systems, disclose AI system use to consumers, and provide opportunities for correction and appeal. The bill is currently with Virginia Governor Glenn Youngkin, and it is unclear whether he will sign it.
Connecticut Introduces AI Bill: After an effort to pass AI legislation stalled last year in the Connecticut House of Representatives, another AI bill was introduced in the Connecticut Senate in February. SB-2 would establish regulations for the development, integration, and deployment of high-risk AI systems, defined as AI systems making consequential decisions affecting areas such as employment, education, and healthcare, with the aim of preventing algorithmic discrimination and promoting transparency and accountability. The bill includes requirements similar to those in the Connecticut AI bill considered in 2024: developers would be required to use reasonable care to prevent algorithmic discrimination and to provide documentation on an AI system’s purpose, limitations, and risk mitigation measures, while deployers of high-risk AI systems would be required to implement risk management policies, conduct impact assessments before deployment, disclose AI system use to consumers, and provide opportunities for appeal and correction.
New York Governor Signs Several Privacy Bills: New York Governor Kathy Hochul signed a series of bills expanding compliance obligations for social media platforms, debt collectors who use social media platforms, and dating applications. Senate Bill 895B—effective 180 days after becoming law—requires social media platforms operating in New York to post terms of service explaining how users may flag content they believe violates the platform’s terms. Senate Bill 5703B—effective immediately—prohibits the use of social media platforms for debt collection purposes. Senate Bill 2376B—effective 90 days after becoming law—expands New York’s identity theft protection law to cover the theft of medical and health insurance information. Finally, Senate Bill 1759B—effective 60 days after becoming law—requires online dating services to notify users who were contacted by members banned for using a false identity, providing specific information to help those users avoid being defrauded. Importantly, the New York Health Information Privacy Act, which would significantly expand the obligations of businesses that collect broadly defined “health information” through their websites, has not yet been signed.
California Reintroduces Bill Requiring Browser-Based Opt-Out Preference Signals: For the second year in a row, the California Legislature has introduced a bill that would require browsers and mobile operating systems to provide a setting enabling a consumer to send an opt-out preference signal to businesses with which the consumer interacts through the browser or mobile operating system. The California Consumer Privacy Act, as amended by the California Privacy Rights Act (“CCPA”), provides California residents with the ability to opt out of the sale or sharing of their personal data, including through an opt-out preference signal. AB 566 would amend the CCPA to require browsers and mobile operating systems to offer such a setting and to make the setting easy for a reasonable person to locate and configure. The bill further gives the California Privacy Protection Agency (“CPPA”), the agency charged with enforcing the CCPA, the authority to adopt regulations to implement and administer the bill. The CPPA has sponsored AB 566.
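For context, the best-known opt-out preference signal in use today is Global Privacy Control (“GPC”), which participating browsers transmit with each request as a “Sec-GPC: 1” header. As a minimal sketch, assuming a hypothetical Flask endpoint, the Python example below shows how a business’s web server might detect and honor such a signal; it is illustrative only and is not drawn from the text of AB 566 or the CCPA regulations.

```python
# Minimal sketch: detecting a browser opt-out preference signal such as
# Global Privacy Control ("GPC"), which is sent as a "Sec-GPC: 1" request
# header. The route and responses are hypothetical placeholders.
from flask import Flask, request

app = Flask(__name__)

@app.route("/")
def index():
    # A "Sec-GPC" value of "1" signals the user's opt-out of the sale or
    # sharing of their personal information.
    if request.headers.get("Sec-GPC") == "1":
        # Honor the signal, e.g., by skipping third-party ad trackers
        # and recording the opt-out for this consumer.
        return "Opt-out preference signal received and honored."
    return "No opt-out preference signal received."

if __name__ == "__main__":
    app.run()
```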
Virginia Senate Passes Amendments to Virginia Consumer Data Protection Act: Virginia’s Senate Bill 1023 (“SB 1023”) would amend the Virginia Consumer Data Protection Act by banning the sale of precise geolocation data, defined as data that can locate a person within 1,750 feet. Introduced by Democratic State Senator Russet Perry, the bill has garnered bipartisan support in the Virginia Senate, passing with a 35-5 vote on February 4, 2025. Perry stated that the type of data the bill would ban has been used to target people in domestic violence and stalking cases, as well as for scams.
Task Force Publishes Recommendations for Improvement of Colorado AI Act: The Colorado Artificial Intelligence Impact Task Force published its Report of Recommendations for Improvement of the Colorado AI Act. The Act, which was signed into law in May 2024, has faced significant pushback from a broad range of interest groups regarding ambiguity in its definitions, scope, and obligations. The Report is designed to help lawmakers identify and implement amendments to the Act prior to its February 1, 2026, effective date. Rather than making substantive recommendations, the Report categorizes topics of potential change by how likely they are to achieve consensus: four topics in which consensus “appears achievable with additional time,” four topics where “achieving consensus likely depends on whether and how to implement changes to multiple interconnected sections,” and seven topics facing “firm disagreement on approach where creativity will be needed.” These topics range from key definitions under the Act to the scope of its application and exemptions.
AI Legislation on Kids’ Privacy and Bias Introduced in California: California Assembly Member Bauer-Kahan introduced another California bill targeting artificial intelligence (“AI”). The Leading Ethical AI Development for Kids Act (“LEAD Act”) would establish the LEAD for Kids Standards Board in the Government Operations Agency. The Board would be required to adopt regulations governing, among other things, the criteria for conducting risk assessments for “covered products,” defined to include AI systems that are intended to be, or are highly likely to be, used by children. The Act would also require covered developers to conduct and submit risk assessments to the Board. Finally, the Act would authorize a private right of action allowing parents and guardians of children to recover actual damages resulting from violations of the law.
FEDERAL LAWS & REGULATIONS
House Committee Working Group Organized to Discuss Federal Privacy Law: Congressman Brett Guthrie, Chairman of the House Committee on Energy and Commerce (the “Committee”), and Congressman John Joyce, M.D., Vice Chairman of the Committee, announced the establishment of a working group to explore comprehensive data privacy legislation. The working group is made up entirely of Republican members, and its formation is the first action on comprehensive data privacy legislation in this new Congressional session.
Kids Off Social Media Act Advances to Senate Floor: The Senate Commerce Committee advanced the Kids Off Social Media Act. The Act would prohibit social media platforms from allowing children under 13 to create accounts, prohibit platforms from algorithmically recommending content to teens under 17, and require schools to limit social media use on their networks as a condition of receiving certain funding. The Act is facing significant pushback from digital rights groups, including the Electronic Frontier Foundation and the American Civil Liberties Union, which claim that the Act would violate the First Amendment.
Business Groups Oppose Proposed Updates to HIPAA Security Rule: As previously reported, the U.S. Department of Health and Human Services (“HHS”) Office for Civil Rights (“OCR”) issued a Notice of Proposed Rulemaking (“NPRM”) to amend the Health Insurance Portability and Accountability Act of 1996 (“HIPAA”) Security Rule to strengthen cybersecurity protections for electronic protected health information (“ePHI”). See Blank Rome’s Client Alert on the proposed rule. A coalition of business groups, including the College of Healthcare Information Management Executives, America’s Essential Hospitals, American Health Care Association, Association of American Medical Colleges, Federation of American Hospitals, Health Innovation Alliance, Medical Group Management Association, and National Center for Assisted Living, has written to President Trump and HHS Secretary Robert F. Kennedy, Jr. opposing the proposed rule. The groups argue that the proposed rule would impose substantial financial burdens on the healthcare sector, including on rural hospitals, diverting attention and funds away from other critical areas. They also argue that, by failing to address or incorporate it, the proposed rule contradicts Public Law 116-321, which explicitly requires HHS to consider a regulated entity’s adoption of recognized security practices when enforcing the HIPAA Security Rule.
National Artificial Intelligence Advisory Committee Adopts List of 10 AI Priorities: The National Artificial Intelligence Advisory Committee (“NAIAC”), established under the National Artificial Intelligence Initiative Act of 2020, approved a draft report for the Trump administration with 10 recommendations addressing AI policy issues. The recommendations cover AI in employment; AI awareness and literacy; and AI in education, science, health, government, and law enforcement, along with recommendations for empowering small businesses, strengthening AI governance, and supporting AI innovation in ways that benefit Americans.
CFPB Acting Director Instructs Agency Staff to Stop Work: Consumer Financial Protection Bureau (“CFPB”) Acting Director Russell Vought instructed agency staff to “stand down” and refrain from doing any work. The communication to CFPB employees followed an instruction to suspend regulatory activities and halt CFPB rulemaking. Vought also suspended the CFPB’s supervision and examination activities. The freeze would affect the CFPB’s rule on oversight of digital payment apps as well as its privacy rule creating a right of data portability for customers of financial institutions.
U.S. LITIGATION
First Washington My Health My Data Lawsuit Filed: Amazon is facing a class action lawsuit alleging violations of Washington’s My Health My Data Act (“MHMDA”), along with federal wiretap laws and state privacy laws. The suit, the first brought under the MHMDA’s private right of action, centers on Amazon’s software development kit (“SDK”) embedded in third-party mobile apps. The complaint alleges that Amazon collected users’ location data without their consent for targeted advertising, and that the SDK collected time-stamped location data, mobile advertising IDs, and other information that could reveal sensitive health details. According to the lawsuit, this data could expose insights into a user’s health status, such as visits to healthcare facilities or health behaviors, without users knowing that Amazon was obtaining and monetizing the data. The lawsuit seeks injunctive relief, damages, and disgorgement of profits related to the alleged unlawful behavior. The outcome could clarify how broadly courts interpret “consumer health data” under the MHMDA.
NetChoice Files Lawsuit to Challenge Maryland Age-Appropriate Design Code Act: NetChoice, a tech industry group, filed a complaint in federal court in Maryland challenging the Maryland Age-Appropriate Design Code Act as violating the First Amendment. The Act was signed into law in May 2024 and became effective in October 2024. It requires online services that are likely to be accessed by children under the age of 18 to provide enhanced safeguards for, and limit the collection of data from, minors. In its complaint, NetChoice alleges that the Act will not meaningfully improve online safety and will burden online platforms with the “impossible choice” of either proactively censoring categories of constitutionally protected speech or implementing privacy-invasive age verification systems that create serious cybersecurity risks. NetChoice has challenged similar laws across the country, including in California, where it has successfully delayed implementation of the similar California Age-Appropriate Design Code Act.
Kochava Settles Privacy Class Action; Unable to Dismiss FTC Lawsuit: Kochava Inc. (“Kochava”), a mobile app analytics provider and data broker, has settled class action lawsuits alleging that it collected and sold consumers’ precise geolocation data originating from mobile applications. The settlement requires Kochava to pay damages of up to $17,500 to the lead plaintiffs and attorneys’ fees of up to $1.5 million. Among other required changes to its privacy practices, Kochava must implement a feature aimed at blocking the sharing or use of raw location data associated with healthcare facilities, schools, jails, and other sensitive venues. Relatedly, U.S. District Judge B. Lynn Winmill of the District of Idaho denied Kochava’s motion to dismiss the lawsuit brought by the Federal Trade Commission (“FTC”) over Kochava’s alleged violations of Section 5 of the FTC Act. The FTC alleges that Kochava’s data practices are unfair and deceptive under Section 5 because Kochava sells sensitive personal information collected through mobile advertising IDs (“MAIDs”) to its customers, providing them a “360-degree perspective” on consumers’ behavior through subscriptions to its data feeds, without consumers’ knowledge or consent. In the order denying the motion to dismiss, Judge Winmill rejected Kochava’s argument that Section 5 of the FTC Act is limited to tangible injuries and wrote that the “FTC has plausibly pled that Kochava’s practices are unfair within the meaning of the FTC Act.”
Texas District Court Blocks Enforcement of Texas SCOPE Act: The U.S. District Court for the Western District of Texas (“Texas District Court”) granted a preliminary injunction blocking enforcement of Texas’ Securing Children Online through Parental Empowerment Act (“SCOPE Act”), which requires digital service providers to protect children under 18 from harmful content and data collection practices. In Students Engaged in Advancing Texas v. Paxton, plaintiffs sued the Texas Attorney General to block enforcement of the SCOPE Act, arguing the law is an unconstitutional restriction of free speech. The Texas District Court ruled that the SCOPE Act is a content-based statute subject to strict scrutiny, and that certain of its content monitoring-and-filtering, targeted advertising, and age-verification requirements failed strict scrutiny and should be facially invalidated. Accordingly, the Texas District Court issued a preliminary injunction halting enforcement of those provisions; the remaining provisions of the law stay in effect.
California Attorney General Agrees to Narrowing of State’s Social Media Law: The California Attorney General has agreed not to enforce certain parts of AB 587, now codified at Business & Professions Code sections 22675-22681, which set forth content moderation requirements for social media platforms (the “Social Media Law”). X Corp. (“X”) sued the California Attorney General, alleging that the Social Media Law unconstitutionally censors speech based on what the state deems objectionable. Although the U.S. District Court for the Eastern District of California (“California District Court”) initially denied X’s request for a preliminary injunction blocking enforcement of the Social Media Law, the Ninth Circuit overturned that decision, holding that certain provisions of the law regarding extreme content failed the strict-scrutiny test for content-based restrictions on speech and thus violated the First Amendment. X and the California Attorney General have asked the California District Court to enter a final judgment based on the Ninth Circuit decision, and the California Attorney General has also agreed to pay $345,576 in attorneys’ fees and costs.
U.S. ENFORCEMENT
Arkansas Attorney General Sues Automaker over Data Privacy Practices: Arkansas Attorney General Tim Griffin announced that his office filed a lawsuit against General Motors (“GM”) and its subsidiary OnStar for allegedly deceiving Arkansans and selling data collected through OnStar from more than 100,000 Arkansas drivers’ vehicles to third parties, who then sold the data to insurance companies that used the data to deny insurance coverage and increase rates. The lawsuit alleges that GM advertised OnStar as offering the benefits of better driving, safety, and operability of its vehicles, but violated the Arkansas Deceptive Trade Practices Act by misleading consumers about how driving data was used. The lawsuit was filed in the Circuit Court of Phillips County, Arkansas.
Healthcare Companies Settle FCA Claims over Cybersecurity Requirements: Health Net and its parent company, Centene Corp. (collectively, “Health Net”), have settled with the United States Department of Justice (“DOJ”) to resolve allegations that Health Net falsely certified compliance with cybersecurity requirements under a U.S. Department of Defense contract. Health Net had contracted with the Defense Health Agency of the U.S. Department of Defense (“DHA”) to provide managed healthcare support services for DHA’s TRICARE health benefits program. The DOJ alleged that Health Net failed to comply with its contractual obligations to implement and maintain certain federal cybersecurity and privacy controls, and that it violated the False Claims Act by falsely stating its compliance in related annual certifications to the DHA. The DOJ further alleged that Health Net ignored reports from internal and third-party auditors about cybersecurity risks on its systems and networks. Under the settlement, Health Net must pay the DOJ and DHA $11.25 million.
Eyewear Provider Fined $1.5M for HIPAA Violations: The U.S. Department of Health and Human Services (“HHS”) Office for Civil Rights (“OCR”) imposed a $1,500,000 civil money penalty against Warby Parker for violations of the Health Insurance Portability and Accountability Act of 1996 (“HIPAA”) Security Rule. The penalty stems from OCR’s investigation of a 2018 cyberattack in which customer accounts were accessed without authorization, affecting nearly 200,000 individuals. Between September 25, 2018, and November 30, 2018, third parties accessed customer accounts using usernames and passwords obtained from breaches of other websites, a method known as “credential stuffing.” The compromised data included names, addresses, email addresses, payment card information, and eyewear prescriptions. OCR found that Warby Parker failed to conduct an accurate risk analysis, implement sufficient security measures, and regularly review information system activity.
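Because credential stuffing replays username-and-password pairs stolen from other sites, common defenses include multifactor authentication, screening passwords against known-breach lists, and throttling repeated failed logins. As a generic sketch only, with hypothetical names and thresholds and no connection to Warby Parker’s systems or OCR’s findings, the Python example below illustrates the throttling approach.

```python
# Generic sketch of one credential-stuffing mitigation: lock out further
# login attempts for an account after too many failures in a time window.
# Thresholds and function names are hypothetical.
import time
from collections import defaultdict, deque

WINDOW_SECONDS = 300   # look-back window for counting failed attempts
MAX_FAILURES = 5       # failures allowed per account within the window

_failed_attempts: defaultdict[str, deque] = defaultdict(deque)

def record_failed_login(username: str) -> None:
    """Record the time of a failed login attempt for this account."""
    _failed_attempts[username].append(time.monotonic())

def is_locked_out(username: str) -> bool:
    """Return True if the account has exceeded the failure threshold."""
    attempts = _failed_attempts[username]
    cutoff = time.monotonic() - WINDOW_SECONDS
    while attempts and attempts[0] < cutoff:
        attempts.popleft()  # discard attempts older than the window
    return len(attempts) >= MAX_FAILURES
```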
CPPA Finalizes Sixth Data Broker Registration Enforcement Action: The California Privacy Protection Agency announced that it is seeking a $46,000 penalty against Jerico Pictures, Inc., d/b/a National Public Data, a Florida-based data broker, for allegedly failing to register and pay an annual fee as required by the California Delete Act. The Delete Act requires data brokers to register and pay an annual fee that funds the California Data Broker Registry. The action follows a 2024 data breach in which National Public Data reportedly exposed 2.9 billion records, including names and Social Security numbers. This is the sixth action taken by the CPPA against data brokers, with the first five actions resulting in settlements.
INTERNATIONAL LAWS & REGULATIONS
First EU AI Act Provisions Become Effective; Guidelines on Prohibited AI Adopted: The first provisions of the EU AI Act (the “Act”) came into force on February 2, 2025. The Act’s prohibitions on certain types of AI systems deemed to pose an unacceptable risk, along with its rules on AI literacy, are now applicable in the EU. Prohibited AI systems are those that present unacceptable risks to the fundamental rights and freedoms of individuals, including, among other uses: social scoring for public and private purposes; exploitation of vulnerable individuals through subliminal techniques; biometric categorization of natural persons based on biometric data to deduce or infer their race, political opinions, trade union membership, religious or philosophical beliefs, or sexual orientation; and emotion recognition in the workplace and educational institutions, unless for medical or safety reasons. The new AI literacy obligations require organizations to put in place robust AI training programs to ensure a sufficient level of AI literacy among their staff and other persons working with AI systems. Certain obligations related to general-purpose AI models will become effective August 2, 2025; most other obligations under the Act will become effective August 2, 2026.
UK Introduces AI Cyber Code of Practice: The UK government has introduced a voluntary Code of Practice to address cybersecurity risks in AI systems, with the aim of establishing a global standard via the European Telecommunications Standards Institute (“ETSI”). The code responds to security risks unique to AI, such as data poisoning and prompt injection, and sets out baseline security requirements for stakeholders across the AI supply chain, emphasizing secure design, development, deployment, maintenance, and end-of-life. Intended as an addendum to the Software Code of Practice, it provides guidelines for developers, system operators, data custodians, end-users, and affected entities involved in AI systems. Its principles include raising awareness of AI security threats, designing AI systems for security, evaluating and managing risks, and enabling human responsibility for AI systems. The code also emphasizes the importance of documenting data, models, and prompts, as well as conducting appropriate testing and evaluation.
CJEU Issues Opinion on Pseudonymized Data: The Court of Justice of the European Union (“CJEU”) issued a decision in an appeal by the European Data Protection Supervisor (“EDPS”) against a General Court judgment that had annulled the EDPS’s decision regarding the processing of personal data by the Single Resolution Board (“SRB”) during the insolvency resolution of Banco Popular Español SA. The case concerned whether data transmitted by the SRB to Deloitte constituted personal data. The data consisted of comments from parties interested in the proceedings that had been pseudonymized by assigning each a random alphanumeric code, then aggregated and filtered so that individual comments could not be distinguished within specific commentary themes; Deloitte had access to neither the codes nor the original database. The court held that the data was personal data in the hands of the SRB, but ruled that the EDPS erred in determining that the pseudonymized data was personal data as to Deloitte without analyzing whether it was reasonably possible for Deloitte to identify individuals from the data. As a takeaway, the CJEU left open the possibility that pseudonymized data could be organized and protected in such a way as to remove any reasonable possibility of re-identification by a particular party, in which case the data would not constitute personal data under the GDPR as to that party.
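To make the mechanics concrete, the Python sketch below illustrates the general pattern at issue: the discloser replaces identities with random codes and retains the code-to-identity mapping, so the data remains personal data in its own hands, while the recipient receives only the coded comments. All names and data here are hypothetical and are not drawn from the case.

```python
# Minimal sketch of pseudonymization by random code assignment. The
# discloser keeps code_map (the re-identification key); the recipient
# receives only the pseudonymized comments. All data here is invented.
import secrets

comments = {
    "alice@example.com": "Objection to the valuation methodology.",
    "bob@example.com": "Request for additional creditor disclosures.",
}

code_map: dict[str, str] = {}       # retained by the discloser only
pseudonymized: dict[str, str] = {}  # shared with the recipient

for identity, comment in comments.items():
    code = secrets.token_hex(8)     # random alphanumeric code
    code_map[code] = identity       # re-identification key stays behind
    pseudonymized[code] = comment

# Whether `pseudonymized` is still "personal data" as to the recipient
# turns on whether re-identification is reasonably possible for that
# recipient, e.g., whether it could ever obtain code_map or otherwise
# link the coded comments back to individuals.
print(pseudonymized)
```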
European Commission Withdraws AI Liability Directive from Consideration; European Parliament Committee Votes to Press On: The European Commission announced that it plans to withdraw the proposed EU AI Liability Directive, draft legislation for addressing harms caused by artificial intelligence. The decision was announced in the Commission’s 2025 Work Program, which states that there is no foreseeable agreement on the legislation; the proposal has not yet been officially withdrawn. Despite the announcement, members of the European Parliament’s Internal Market and Consumer Protection Committee voted to keep working on liability rules for artificial intelligence products. It remains to be seen whether the European Parliament and the EU Council can make continued progress in negotiating the proposal in the coming year.
Additional Authors: Daniel R. Saeedi, Rachel L. Schaller, Gabrielle N. Ganze, Ana Tagvoryan, P. Gavin Eastgate, Timothy W. Dickens, Jason C. Hirsch, Adam J. Landy, Amanda M. Noonan and Karen H. Shin.