As reported previously, the California Privacy Protection Agency (“CPPA”) closed the public comment period for its proposed cybersecurity audit, risk assessment and automated decision-making technology (“ADMT”) regulations (the “Proposed Regulations”) in late February. In advance of the CPPA’s April 4 meeting, the CPPA released a new draft of the Proposed Regulations, which proposed relatively minor substantive changes but pushed back the dates on which certain obligations would become effective. The Agency’s Board met on April 4, 2025, to discuss the new proposals and the comments received, as well as the potential for some very different alternatives, especially related to ADMT. Members of the CPPA Board debated the staff’s approach and ultimately sent the staff back to narrow the scope of the Proposed Regulations, clarify what is in and out of scope with more examples, and further consider how to reduce the costs and burdens on businesses. While it is unclear exactly what staff will come back with, the alternatives discussed provide some hints as to what a more constrained approach may look like.
Likely revisions focus on six items discussed:
- Definition of “automated decision-making technology” (ADMT)
- Definition of “significant decision”
- “Behavioral advertising” threshold
- “Work or educational profiling” and “public profiling” threshold
- “Training” thresholds
- Risk Assessment Submissions
Definition of “Automated Decision-making Technology” (ADMT)
The first discussion item included three proposed alternatives to the current ADMT definition. All the alternatives narrow the definition from that of the current Proposed Regulations, some significantly:
Alternative 1: Would still cover use to assist or replace human decision making, but would provide more description on what processes apply, and add a material consumer impact requirement.
Alternative 2: Would limit the definition to where the processing substantially replaces human decision making.
Alternative 3: Would limit the definition to where the processing replaces human decision making for the purpose of “making a solely automated significant decision about a consumer.”
The Board did not reach a consensus as to how to narrow the definition of ADMT, but expressed concern with the current broad scope of the ADMT definition, and a desire to see an alternative from staff that assuaged these concerns.
Definition of “Significant Decision”
The heart of the ADMT and Profiling provisions regulates processing that can result in a “significant decision,” defined as access to, or provision or denial of, certain listed types of goods and services. Board Member Alistair Mactaggart raised concerns that the phrase “access to” was overly broad and could include a wide array of information services, including maps apps and other items used to route or otherwise direct a consumer to a covered service. He provided an example wherein a consumer uses a maps app to route them to an emergency room or a bank. The staff’s presentation included replacing “access to” with “selection of a consumer for,” or deleting it altogether.
Other Board Members, including Drew Liebert, raised concerns that in the employment context “allocation or assignment of work,” as a form of significant decision, could include actions like selecting a specific delivery driver based on proximity. Staff’s proposed alternatives included deleting this type of decision, as well as others including insurance and criminal justice and narrowing the scope of “essential goods or services.”
The Board directed staff to return with more examples of use cases to demonstrate what is and is not within the scope of a significant decision and how various potential definition changes could affect those examples.
Behavioral Advertising
Proposed changes to this section of the draft regulations on “extensive profiling” stand to significantly alter the scope of the Proposed Regulations, which would expand the current concept of Cross-Context Behavioral Advertising to include first-party behavioral data-driven ad targeting. The Board spent less time discussing this issue and ultimately seemed to direct the staff to implement the proposed change, which deletes the behavioral advertising use case from the requirements for risk assessments and ADMT completely.
Work or Educational Profiling and Public Profiling
Similar to the significant decision issue, the Board was concerned that, as written, the scope of the Proposed Regulations might encompass use cases that do not fall under the spirit of the regulations. Board Member Mactaggart, specifically, raised concerns that this section of the regulation would change the character of the law from a privacy law to an employment law. The staff did not present any specific alternatives to the Proposed Regulations as to these types of “extensive profiling.” The Board seemed to reach consensus on requesting that staff provide additional information, including use cases, that might help inform the scope of the regulations.
AI and ADMT Training
The staff-suggested potential changes regarding AI and ADMT training thresholds took two forms. One would narrow the scope of the rule by limiting the requirements to situations where the business knows or should know that the technology will be used for the currently restricted purposes, as opposed to the current capability-of-use standard, while the other would delete the training thresholds completely. The Board engaged in considerable discussion, including regarding whether the language could be changed to require risk assessments only from entities that definitively used ADMT (based on a new, narrower definition). This stemmed from concerns similar to those underlying the other issues: that, as written, the regulations would potentially apply to entities that were not really engaged in risky privacy behaviors. However, staff explained that for pre-use risk assessments to remain an element of the regulations, there must be some way to include potential uses.
The Board directed staff to follow the second recommendation, which would remove artificial intelligence from the training threshold’s applicability. Staff was also directed to revise the requirements to apply only to businesses that are actively using or are planning to use ADMT.
Risk Assessment Submissions
The Board’s discussion of the risk assessments went beyond the staff’s issue slide regarding the summary submission process. Specifically, the Board contemplated changes that would totally revamp the required elements of risk assessments. Primarily motivated by concerns about the cost to businesses, members of the Board asked staff whether the regulations could better reflect other jurisdictions’ risk assessment frameworks (e.g., Colorado’s). Staff was directed to determine the feasibility of mirroring the risk assessment language of other jurisdictions, especially Colorado, so that businesses conducting risk assessments need not tailor them to each state and incur significant costs in the process.
Legal Challenge Concerns
Board Member Mactaggart also raised concerns about the legality of some of the Agency’s proposed regulations, including constitutional concerns such as First Amendment rights with respect to risk assessments, and whether the cybersecurity audit requirements exceed the Agency’s statutory authority. Privacy World’s Alan Friel and Glenn Brown (in their personal capacities) have previously addressed the First Amendment concerns raised by risk assessments. Board Member Mactaggart requested that Agency staff provide a report to the Board regarding these litigation risks. Other Board members expressed concern regarding the confidentiality of any such analysis. No firm plan for staff was reached in this regard.
Next Steps
A timeline was not set for developing revised Proposed Regulations and otherwise addressing Board concerns, but the potential for considering staff responses at a July Board meeting was discussed. It is unclear how extensive the changes will need to be to get a majority of the Board to vote a version of the Proposed Regulations forward. However, if the scope of changes is consistent with the direction at least some on the Board seem to be giving staff, a new 45-day public comment period would seem likely, even if a shorter 15-day period were applied to other proposed edits. It would seem that the CPPA has a long way to go and will need to construct narrower rules that are more aligned with those of other U.S. states. We will continue to monitor developments in this rulemaking process and other Agency actions.
Samuel Marticke contributed to this article.