The BR Privacy & Security Download – January 2026
Thursday, January 8, 2026

STATE & LOCAL LAWS & REGULATIONS

New York Governor Signs AI Transparency and Safety Law: New York Governor Kathy Hochul signed the Responsible AI Safety and Education Act (“RAISE Act”) into law. The RAISE Act tracks closely with elements of California’s Transparency in Frontier Artificial Intelligence Act, signed into law in September 2025. The RAISE Act requires large artificial intelligence (“AI”) developers to create and publish information about their safety protocols and to report incidents to New York State within 72 hours of determining that an incident occurred. It also creates an oversight office within the New York Department of Financial Services that will assess large frontier developers and enable greater transparency. The law covers companies with more than $500 million in revenue and takes effect on January 1, 2027. The Attorney General can bring civil actions against large frontier developers for failing to submit required reporting or for making false statements under the RAISE Act. Penalties are up to $1 million for the first violation and up to $3 million for subsequent violations.

California Attorney General Settles with Mobile Gaming App Maker for Alleged CCPA Violations: The California Attorney General has settled with Jam City, Inc. (“Jam City”), a mobile gaming app company, over the company’s alleged violations of the California Consumer Privacy Act (“CCPA”). The Attorney General alleged that Jam City collects personal information, such as device identifiers, IP addresses, and users’ interactions with a game, and discloses that information to third parties for advertising and analytics purposes, including targeted advertising, but failed to offer consumers methods to opt out of the sale or sharing of their personal information across 21 gaming apps and its website, in violation of the CCPA. Under the settlement, Jam City must pay $1.4 million in civil penalties, provide in-app methods for consumers to opt out of the sale or sharing of their data, and refrain from selling or sharing the personal information of consumers at least 13 and less than 16 years old without their affirmative, opt-in consent.

CPPA Publishes Alert About DROP Deletion Requests: The California Privacy Protection Agency (“CPPA”) has published an alert about the launch of the Delete Request and Opt-out Platform (“DROP”) under the California Delete Act. The Delete Act, which was signed into law in October 2023, directed the CPPA to establish DROP. DROP is a privacy tool that allows California residents to send a single request to every registered data broker in California to delete the personal information they hold about the resident. Starting January 1, 2026, California residents are able to access DROP via https://privacy.ca.gov/ and submit a deletion request. Registered data brokers must start processing such requests on August 1, 2026, and process such requests every 45 days.

State Attorneys General Write Letter Pushing for AI Safeguards: 42 State Attorneys General have issued a letter to large technology companies like Google, Meta, and Microsoft about the rise in sycophantic and delusional outputs from the use of generative AI software. The letter highlights that generative AI software (e.g., chatbots) has been involved in at least six deaths in the United States, as well as other incidents of domestic violence, poisoning, and hospitalizations for psychosis. The letter urges such companies to adopt additional safeguards, specifically to protect children, such as: (i) maintaining policies and procedures concerning sycophantic and delusional outputs and training all persons who provide reinforcement learning from human feedback for the generative AI models; (ii) providing clear, conspicuous, and permanently viewable warnings about unintended and potentially harmful outputs that may be generated by the generative AI; (iii) prohibiting unlawful outputs for child-related accounts; and (iv) subjecting models to independent third-party audits reviewable by state and federal regulators.

Indiana Attorney General Publishes Data Consumer Bill of Rights: The Indiana Attorney General has published a Data Consumer Bill of Rights, which summarizes the rights Indiana consumers have under the Indiana Consumer Data Protection Act (“ICDPA”) and responses to frequently asked questions. Under the ICDPA, Indiana consumers have the rights to know, correct, and delete personal data and opt out of the processing of personal data for targeted advertising, sale, and profiling. Indiana consumers also have the rights to appeal a denial of a request and not be discriminated against for exercising any of the aforementioned rights. The ICDPA went into effect January 1, 2026, and applies to entities that do business in Indiana or target Indiana residents and either control or process the personal data of 100,000 or more Indiana residents or control or process the personal data of at least 25,000 Indiana residents and derive more than 50 percent of their gross revenue from the sale of personal data.

CPPA Issues Data Broker Registration Enforcement Advisory: The CPPA has released an enforcement advisory regarding compliance with California’s data broker registration law and the Delete Act. The CPPA released the enforcement advisory after observing that “certain data brokers ‘hide the ball’ from consumers by (1) doing business under multiple trade names, or DBAs, or operating multiple websites without listing those trade names and websites on their registration, or (2) pointing to a parent or affiliated entity’s registration.” The enforcement advisory makes clear that each distinct legal entity operating as a data broker must register separately as a data broker and establish its own DROP account. Businesses that operated as a data broker in the prior year must also, as part of registration, confirm the accuracy of the information that the business provided in its DROP account and provide the data broker’s name, trade names (e.g., DBAs), if applicable, and any website addresses where it provides services.   


FEDERAL LAWS & REGULATIONS

Trump Administration Issues Executive Order to Establish National AI Policy Framework: President Trump issued an Executive Order establishing a federal policy to curb what it characterizes as “excessive” and “burdensome” state AI regulation, aiming to create a “minimally burdensome national standard” and reduce a patchwork of rules that the administration argues impede innovation and U.S. competitiveness. The Executive Order directs the U.S. Attorney General to form an AI litigation task force to challenge state AI laws deemed unconstitutional or preempted, and instructs the Department of Commerce to evaluate state AI laws within 90 days to identify targets for challenge. The Federal Communications Commission (“FCC”) is ordered to consider adopting federal AI reporting and disclosure standards that would preempt conflicting state laws, while eligibility for certain broadband-related funds may be conditioned on states’ AI laws. The Executive Order also specifies narrow areas where it seeks to avoid preemption, such as children’s online safety and state procurement. The Executive Order follows unsuccessful efforts in Congress to enact a moratorium or funding-penalty mechanism to deter state AI laws. In response, Senator Edward Markey introduced the “States’ Right to Regulate AI Act” to block the Executive Order’s implementation, labeling the order “lawless” and urging Congress to assert its authority. Separately, 23 state attorneys general wrote a letter urging the FCC to “stand down,” asserting the agency lacks authority to preempt broad state AI oversight and criticizing the notice of inquiry as vague and beyond the FCC’s legal remit. The AGs emphasized states’ interests in addressing deepfakes, scams, and consumer protection that are not telecommunications issues. 

FTC Sets Aside 2024 Order Banning AI-Powered Writing Service: The Federal Trade Commission (“FTC”) took the extraordinary step of issuing an order to reopen and set aside a 2024 final consent order involving Rytr LLC (“Rytr”). The FTC determined after review that the complaint failed to satisfy the legal requirements of the FTC Act and that the order unduly burdens AI innovation in violation of the Trump Administration’s Artificial Intelligence Executive Order and America’s AI Action Plan. The 2024 final consent order against Rytr settled allegations that the company’s AI-enabled writing assistance service allowed subscribers to generate false and deceptive online reviews in violation of the FTC Act. The final consent order, among other conditions, banned Rytr from providing any AI-enabled service generating consumer or customer reviews or testimonials. In its press release announcing the move, the FTC stated that it “will continue to hold accountable actors that use AI to violate the law or deceive consumers about the capabilities of their generative AI.”

House Committee Advances Children’s Privacy Bills: The U.S. House Energy and Commerce Committee advanced 18 bills aimed at strengthening online safety for children. Among the bills are COPPA 2.0 and the Kids Online Safety Act (“KOSA”). COPPA 2.0 seeks to update the Children’s Online Privacy Protection Act by expanding protections to teens up to age 16, requiring stricter consent mechanisms, and limiting targeted advertising. KOSA focuses on platform accountability, mandating that social media and online services implement safeguards to reduce harmful content exposure and provide tools for parental oversight. Both KOSA and COPPA 2.0 required party-line roll-call votes to advance, indicating that they may face hurdles in the Senate even if they pass the House of Representatives. Other bills address transparency in algorithms, mental health impacts, and data minimization practices for minors. The legislative package reflects growing concern over the influence of digital platforms on youth well-being and privacy. If enacted, these measures would impose significant compliance obligations on tech companies, including enhanced reporting, risk assessments, and user control features, while giving regulators stronger enforcement authority.

CISA Releases Mobile Communications Best Practices for PRC Espionage Targets: The Cybersecurity & Infrastructure Security Agency (“CISA”) released guidance covering security best practices for mobile device communications for “highly targeted” individuals, such as those in senior government, military, and political positions who are likely to possess information of interest to Chinese government-backed threat actors. The guidance includes recommendations such as using only end-to-end encrypted communications, enabling Fast Identity Online (“FIDO”) authentication, migrating away from SMS-based multi-factor authentication, using a password manager, and regularly updating software, among other things. The guidance was prepared in response to identified cyber espionage activity by the People’s Republic of China.


U.S. LITIGATION

Challenge to Maryland Age-Appropriate Design Act Survives Motion to Dismiss: The U.S. District Court for the District of Maryland denied Maryland’s motion to dismiss NetChoice’s challenge to the state’s 2024 Age-Appropriate Design Code Act (the “Act”), while keeping the law in effect during litigation. The Court held that NetChoice plausibly alleged that the Act’s “best interests of children” and “reasonably likely to be accessed by children” standards, as well as restrictions on data, profiling, and “dark patterns,” burden protected editorial curation. The Court also found the Act’s requirement to conduct data privacy impact assessments plausibly compels speech. The Court deferred deciding the proper level of scrutiny. NetChoice has successfully mounted First Amendment challenges to similar laws in other states.

Split 11th Circuit Allows Enforcement of Florida Social Media Ban for Children: The Eleventh Circuit granted Florida’s motion to stay a district court’s preliminary injunction and allow enforcement of a Florida law restricting minors’ access to social media platforms that employ specified “addictive features,” pending appeal. The Court concluded Florida made a strong showing of likely success on the merits of the First Amendment challenge. The panel deemed the Florida law content neutral because it regulates platform features and form—not topics or viewpoints—and therefore intermediate scrutiny applies. Under that standard, the Court found Florida’s interest in protecting minors substantial and the law sufficiently tailored given its age gradations, focus on platforms with high minor engagement, and feature-based triggers rather than categorical bans. In balancing the equities, the Court found state enforcement interests and child protection outweigh alleged compliance burdens, supporting the stay. Judge Rosenbaum dissented, finding the Act plainly unconstitutional even under intermediate scrutiny.

Court Temporarily Blocks Arkansas Social Media Safety Law: A United States District Court enjoined Arkansas from enforcing a law imposing liability on social media platforms for violating a state law prohibiting the use of designs, algorithms, or features that cause users to purchase controlled substances, develop eating disorders, attempt suicide, or develop social media addiction. The law is being challenged by NetChoice. The Court granted NetChoice’s motion for a preliminary injunction, finding the Arkansas law likely unconstitutional on First Amendment and vagueness grounds. The Court held that the Act’s content-based prohibitions are presumptively unconstitutional and not narrowly tailored, burdening both platforms’ editorial judgment and users’ rights to speak and receive speech, while being simultaneously overinclusive and underinclusive. 

Louisiana Social Media Law Ruled Unconstitutional: The Middle District of Louisiana granted NetChoice’s motion for summary judgment and permanently enjoined enforcement of Louisiana’s Secure Online Child Interaction and Age Limitation Act (the “Act”) as applied to 10 identified member platforms, concluding the law violates the First Amendment. The Court found the Act’s coverage definition content based, targeting platforms that “allow users to interact socially” while exempting others, and therefore subjected all provisions to strict scrutiny. The Court held that the Act did not meet that exacting standard. The Court found that the Act impermissibly burdens minors’ and adults’ access to protected speech, is simultaneously over‑ and under‑inclusive, and unlawfully restricts platforms’ editorial judgment. 


U.S. ENFORCEMENT

Texas Attorney General Sues TV Companies over Personal Data Collection; Secures Injunction Against Chinese Television Company: Texas Attorney General Ken Paxton filed suit against five television manufacturers alleging the companies spied on Texans by secretly recording what consumers watch in their own homes. The suits were filed against Sony, Samsung, and LG, as well as Hisense and TCL Technology Group Corporation (“TCL”), both of which are based in China. The Attorney General stated the companies’ “Chinese ties pose serious concerns about consumer data harvesting and are exacerbated by China’s National Security Law, which gives its government the capability to get its hands on U.S. consumer data.” The Attorney General also secured a temporary restraining order against Hisense that prevents the company from collecting, using, selling, sharing, disclosing, or transferring data about Texans collected through automatic content recognition (“ACR”) technology. The lawsuits allege that ACR captures audio and visual data of what viewers watch on television “in hundredths of milliseconds, to build a fingerprint of the content and then matches that fingerprint with a database of known content.” The lawsuits accuse the companies of misrepresenting that the ACR features are designed to provide viewers with a tailored viewing experience.

CPPA Fines Data Broker for Failing to Register Under California Delete Act: The CPPA announced it issued a decision requiring ROR Partners LLC (“ROR”), a Nevada-based marketing firm catering to fitness and wellness brands, to pay $56,600 in fines and past-due fees for failing to register as a data broker in violation of California’s Delete Act. The CPPA brought the case as part of its Data Broker Enforcement Strike Force, announced a few weeks ago. According to the decision, ROR used “billions of data points” to build detailed consumer profiles and custom audience lists that its clients could use for targeted advertising. Its activities resulted in a “rich repository” of demographic, socioeconomic, and behavioral data about more than 262 million Americans. ROR then used the data to draw inferences about consumers and their expected behaviors and sell that information to health clubs for targeted advertising. The company engaged in these activities in 2024 without registering in the California Data Broker Registry.

FCC Settles with Cable Communications Provider over Vendor Breach: The Federal Communications Commission (“FCC”) entered into a Consent Decree with Comcast Cable Communications (“Comcast”) to settle allegations that Comcast violated the Cable Communications Policy Act of 1984 (“Cable Act”) in connection with a vendor breach that compromised the personally identifiable information of Comcast Cable Subscribers. As part of the consent decree, Comcast will pay a $1.5 million voluntary contribution and implement a compliance plan that includes, among other things, certain vendor oversight practices related to customer privacy and information protection.

Hawaii Attorney General Sues Social Media Company for Harming Children: The Hawaii Attorney General announced a lawsuit against ByteDance Inc., the parent company of TikTok, alleging that the social media giant knowingly designed an addictive platform that harms users, particularly children, while misleading the public about the risks. The complaint alleges that TikTok employs what its own employees have described as “coercive design tactics,” which are especially harmful to children. These features are allegedly engineered to influence users’ neurobiology, especially dopamine production, in ways similar to tactics used in the gambling industry, compelling users to spend as much time as possible on the platform. The complaint also alleges that TikTok continues to maintain inadequate age verification and child protection systems.

Florida Attorney General Issues Subpoena to Chinese Consumer Electronic Goods Company over Handling of U.S. Data: Florida Attorney General James Uthmeier announced the issuance of an investigative subpoena to TP-Link Systems Inc. as part of an ongoing consumer protection investigation into the company’s cybersecurity practices, supply-chain infrastructure, and handling of U.S. consumer data. U.S. Department of Commerce officials previously proposed a ban on TP-Link Systems, citing concerns that its products pose a risk because the U.S.-based company handles sensitive American data and, in officials’ view, remains subject to the jurisdiction or influence of the Chinese government.

Arizona Attorney General Sues Chinese Online Shopping Platform for Privacy Violations and Misleading Consumers: Arizona Attorney General Kris Mayes announced the filing of a lawsuit against Temu, the Chinese online shopping platform, for violations of the Arizona Consumer Fraud Act, including unlawful data collection, violations of customers’ privacy, and counterfeiting. The complaint alleges that the Temu app is designed to harvest sensitive user data without users’ knowledge or consent and to evade detection. According to the lawsuit, Temu collects an alarming amount of sensitive user data and personally identifiable information that goes far beyond what is necessary for a typical online shopping app: the app allegedly secretly infiltrates users’ devices to access and harvest sensitive information, including the user’s precise physical location, the phone’s microphone and camera, and the user’s private activity on other apps installed on the phone, all without their knowledge or consent. Attorney General Mayes also alleges Temu has engaged in deceptive and unfair trade practices when selling products and in the resolution of customer complaints.

FTC Settles Complaint with EdTech Provider Relating to Data Breach: The FTC finalized an order resolving a complaint in which the FTC alleged that Illuminate Education, Inc. (“Illuminate”) claimed to protect the privacy and security of the data it maintains but failed to deploy reasonable security measures to protect student data stored in cloud-based databases. Illuminate suffered a major data breach in late 2021. The FTC alleged that a hacker used the credentials of a former employee, who had departed Illuminate three and a half years earlier, to breach Illuminate’s databases stored on a third-party cloud provider. The hacker gained access to personal data of 10.1 million students, including their e-mail and mailing addresses, dates of birth, student records, and health-related information. Under the order, Illuminate is prohibited from misrepresenting its data privacy and security practices and will be required to take steps to bolster its security practices.

FTC Settles with Cryptocurrency Platform over Lax Security: The FTC announced that it had issued a proposed order to settle FTC allegations that cryptocurrency company Illusory Systems Inc. d/b/a Nomad (“Nomad”) failed to implement adequate security measures, leading to a breach in which hackers stole $186 million from Nomad customers. The FTC alleged that Nomad prominently touted its security in its advertising, claiming that it offered “security-first” services. However, the FTC alleged that the company failed to live up to these promises by failing to use secure coding practices, implement processes for receiving and addressing vulnerability reports and responding to security incidents, or utilize widely known technologies that might have helped mitigate consumer losses. The proposed order requires Nomad to implement an information security program to address the alleged security failures, obtain biennial assessments of that program by an independent third party, and return recovered money to affected consumers.

Daniel R. Saeedi, Rachel L. Schaller, Ana Tagvoryan, Gabrielle N. Ganze, P. Gavin Eastgate, Timothy W. Dickens, Karen H. Shin, Amanda M. Noonan, and Sierra N. Lactaoen contributed to this article
