The California Privacy Protection Agency’s (CPPA) highly anticipated regulations on automated decision-making technology and risk assessment requirements are likely far from final. The CPPA met at the beginning of the month but did not reach consensus on what the final regulations should look like.
The CPPA’s vote was expected to be procedural, but the final review needed to begin formal rulemaking will now not start until the summer. The CPPA’s General Counsel, Phil Laird, stated that the rulemaking process may not be completed until sometime in 2025.
The CPPA will continue developing the final rules governing how developers of automated decision-making technology (ADMT), which includes artificial intelligence (AI), and businesses using such technology can obtain and use personal information. The rules are also expected to detail how to collect opt-outs and when risk assessments must be conducted. Risk assessments would be required when training ADMT or AI models that will be used for significant decisions, profiling, generating deepfakes, or establishing identity.
Further, personal information of minors would be classified as sensitive personal information under the California Consumer Privacy Act/California Privacy Rights Act, and “systematic observation” (consistent tracking through Bluetooth, Wi-Fi, drones, or livestream technologies that can collect physical or biometric data) would qualify as “extensive profiling” when used in a work or educational setting.
So, where do we stand on these potential requirements? Without a unanimous vote from the CPPA on the proposed regulations, the agency will take another two months to rework the rules and bring all members into alignment.