The Oregon Attorney General’s Office, part of the state’s Department of Justice, issued guidance late last year on how state laws apply to the ways businesses use AI. The guidance may be two months old, but its cautions are still timely: it seeks to give companies direction on when their uses of AI may be regulated by existing state laws.
As outlined in the guidance, the Oregon laws that may apply to a company’s use of AI include a variety of consumer protection statutes: the state’s “comprehensive” privacy law (the Consumer Privacy Act), its Unlawful Trade Practices Act, the Equality Act, and its data security law (the Consumer Information Protection Act). Some key takeaways from the guidance:
- Notice: The guidance reminds companies that they could be viewed as violating Oregon’s “comprehensive” privacy law if they do not disclose how they use personal information with their AI tools. The AG may also view it as a violation of Oregon’s Unlawful Trade Practices Act if a company fails to disclose a potential “material defect” in an AI tool, for example, placing a third-party virtual assistant on its website when the tool is known to give incorrect information.
- Choice: The guidance reminds companies that under Oregon’s privacy law, consent is required before processing sensitive information, which may occur when such information is entered into AI tools. The same law also requires giving consumers the ability to (a) withdraw consent (where consent was required to process the information) and (b) opt out of AI profiling used to make significant decisions. Companies will need to keep these requirements in mind when, inter alia, inputting personal information into AI tools.
- Transparency: The guidance outlines potential AI uses that might violate the state’s Unlawful Trade Practices Act. These include failing to make clear that a person is interacting with an AI tool, misleading individuals about an AI tool’s capabilities or about how the company will use AI-generated content, and using AI-generated voices for robocalls without accurately disclosing the caller’s identity.
- Bias: The guidance states that using AI in a way that discriminates based on race, gender or other protected characteristics would violate Oregon’s Equality Act.
- Security: The guidance reminds companies of their obligations under the state’s data security law. If an AI tool incorporates personal information, or a business uses personal information in connection with such a tool, the business will need to keep that law’s requirements in mind, including the obligation to maintain “reasonable safeguards” to protect personal information.