The State of California, under the leadership of Governor Gavin Newsom, has taken the lead among its sister states in mobilizing resources to investigate the risks of generative artificial intelligence (GenAI) tools and to develop policies addressing them.
Following in the footsteps of Colorado, this week the Governor signed into law an amendment to the California Consumer Privacy Act that includes neural data among the protected data covered by the law. The law applies to any device that can record or alter nervous system activity, including implants and wearables. The amendment protects neural data collected through neurotechnologies and treats it the same as other sensitive data collected by companies, including fingerprints, iris scans, and other biometric information.
The bill was supported by the Neurorights Foundation, which stated that the law sends a “clear signal to the fast-growing neurotechnology industry” to protect people’s mental privacy. This means that private companies collecting brain data must provide notice of collection to consumers, give consumers the opportunity to limit disclosure to third parties, and allow consumers to request deletion of their data.
The amendment provides privacy guardrails for neurotechnologies when other laws, such as HIPAA, may not apply, protecting the data from unauthorized collection, use, and disclosure.
In addition to signing the neural data amendment into law, Governor Newsom announced that he has signed 17 bills “covering the deployment and regulation of GenAI technology…cracking down on deepfakes, requiring AI watermarking, protecting children and workers, and combating AI-generated misinformation.” He has convened experts in the field to study the threats of GenAI, develop “workable guardrails for deploying GenAI,” and “explore approaches to use GenAI technology in the workplace.”
The initiatives in California are designed to “protect Californians from fast-moving and transformative GenAI technology.” We have closely watched California’s efforts to tackle data privacy and security threats over many years, as well as its responses to them. California is usually at the forefront of these issues, and other states usually follow its lead (e.g., data breach notification, the California Online Privacy Protection Act, and the California Consumer Privacy Act). Watching California’s progress in responding to the risks of GenAI is probably a good predictor of how other states will respond.

It would be preferable for Congress to take the lead on this issue, but as we have seen in the past, the hope of a national law in the face of fast-moving technology and its risks has never materialized. Because Congress is too slow to move, states are stepping in to protect their consumers, and we are poised to have a patchwork system of regulation for GenAI technology. This is not sound public policy for companies or consumers. Let’s hope Congress can get ahead of the curve, but for now, based on our long experience watching the development of data privacy and security laws, we will continue to watch California’s efforts.