STATE & LOCAL LAWS & REGULATIONS
CPPA Releases Revised Draft Cybersecurity Audit Regulations
The California Privacy Protection Agency (“CPPA”) released a revised draft of the regulations governing cybersecurity audits under the California Privacy Rights Act (“CPRA”). The revised draft no longer requires businesses with a certain undefined annual gross revenue or number of employees to complete a cybersecurity audit. Instead, audit requirements apply to businesses that have a certain (as yet undefined) annual gross revenue and meet one of three (as yet undefined) thresholds based on the amount of personal information, sensitive information, or children’s information that the business processes annually. The revised draft also requires cybersecurity audits to assess and document any risks from cybersecurity threats that have materially affected, or are reasonably likely to materially affect, consumers. The CPPA has not yet started the formal rulemaking process, and these revised regulations are solely meant to facilitate discussions among the CPPA board and the public.
CPPA Releases Revised Draft Automated Decision-Making Technology Regulations
The CPPA published draft regulations governing automated decision-making technology (“ADMT”) under the CPRA. The draft regulations propose a broad definition for ADMT that includes any system, software, or process that processes personal information and uses computation as whole or part of a system to make or execute a decision or facilitate human decision-making. The draft regulations provide consumers with the right to access information about a business’s use of ADMT and to opt out of uses of ADMT for decisions that produce legal or similarly significant effects or that involve profiling: (1) in their capacity as an employee, student, job applicant, or independent contractor; or (2) in a publicly accessible place. The draft regulations also require businesses to provide notice of their use of ADMT, the purposes for using the ADMT, and the consumer’s rights to access or opt out. As with the revised draft cybersecurity audit regulations, the CPPA has not yet begun the formal rulemaking process.
Colorado Attorney General Publishes Universal Opt-Out Shortlist
The Colorado Attorney General has published the Universal Opt-Out Shortlist. Under the Colorado Privacy Act (“CPA”), Colorado consumers have the right to opt out of the sale of personal data and the processing of personal data for purposes of targeted advertising. Beginning July 1, 2024, organizations subject to the CPA must allow for the exercise of the opt-out right through a Universal Opt-Out Mechanism (“UOOM”). The CPA’s implementing regulations require the Colorado Attorney General to maintain a public list of UOOMs that have been recognized to meet the standards set forth in the regulations. The Colorado Department of Law accepted applications from potential UOOMs from October 5, 2023, to November 6, 2023, and has narrowed them down to three potential UOOM applications: (1) OptOutCode; (2) Global Privacy Control; and (3) Opt-Out Machine. Public comments on the UOOMs will be accepted until 11:59 p.m. MST on December 11, 2023.
New York Amends Its Cybersecurity Regulation Related to Financial Institutions
The New York State Department of Financial Services (“NYDFS”) recently amended its Cybersecurity Regulation to protect against growing cyber threats to covered financial institutions. The new amendments strengthen the framework of the law and will require NYDFS-covered entities to adhere to various requirements, including (1) additional controls to prevent initial unauthorized access to information systems, (2) more regular risk and vulnerability assessments, (3) reporting ransom payments to the NYDFS within 24 hours of payment, and (4) an updated direction for companies to invest in annual training and cybersecurity awareness programs that anticipate social engineering attacks. Regulated entities will have until April 29, 2024, to fully comply with the amended Regulation.
NYDFS Fines Title Insurer for Breach of Personal Data
The New York State Department of Financial Services (“NYDFS”) announced that First American Title Insurance Company (“First American”) will pay one million dollars for violating NYDFS cybersecurity regulations. First American is the second-largest title insurance company in the United States and stores personal and financial data on a proprietary, consumer-facing app called EaglePro. EaglePro allows parties to render images of title searches and other real estate documents. In a consent order published on November 27, 2023, NYDFS stated that First American had failed to ensure full and complete implementation of its cybersecurity policies and procedures prior to a May 2019 data breach. This breach involved a vulnerability in EaglePro that allowed any individual with an access link to the app to also access documentation of individuals unrelated to their own transactions. NYDFS ultimately found that 885 million documents were exposed to the public.
California Bar Adopts AI Guidelines
The California State Bar approved the Practical Guidance for the Use of Generative Artificial Intelligence in the Practice of Law (the “Guidance”), a living document of guiding principles intended to assist attorneys in navigating their professional and ethical obligations when using generative artificial intelligence (“AI”) and AI tools. Among other things, the Guidance advises against inputting client information into AI tools without sufficient confidentiality and security protections and recommends explaining all fees and costs associated with the use of generative AI under a fee arrangement. Additional research and recommendations are expected regarding the use of AI to expand access to justice while protecting clients and the public; supervision of non-human, non-lawyer assistance using autonomous decision-making; whether the duty of competency should specifically require competency in generative AI; and whether (and in what contexts) lawyers should disclose the use of generative AI to clients.
FEDERAL LAWS & REGULATIONS
Bipartisan AI Legislation Introduced in Senate
The Artificial Intelligence Research, Innovation, and Accountability Act, S.B. 3312, (the “Act”) was introduced in the Senate to establish a framework to bolster innovation while addressing transparency, accountability, and security concerns and potential harms for the highest-risk applications of artificial intelligence (“AI”). The Act is supported by all members of the Senate Committee on Commerce, Science, and Transportation, which holds jurisdiction over agencies such as the National Institute of Standards and Technology (“NIST”). The Act proposes new transparency (e.g., reports and consumer disclosures) and certification requirements for AI systems that pose a significant risk to constitutional rights or safety affecting: (i) individuals’ access to housing, employment, credit, education, healthcare, or insurance (“high-impact”) or (ii) real-time or ex post facto biometric data collection, critical or space-based infrastructure, or criminal justice. If the Act is enacted, violators could be subject to civil penalties of up to $300,000 or twice the value of the transaction at issue.
FCC Proposes Expanded Data Breach Reporting Requirements
The Federal Communications Commission (“FCC”) published a draft Report and Order (the “Draft Order”) to modify its data breach notification rules in response to threats that have evolved since the FCC first adopted these rules, and to ensure that telecommunications carriers and telecommunications relay services (“TRS”) adequately protect against improper use and disclosure of consumer data. The Draft Order seeks to: (i) expand the scope of the breach notification rules to cover all personally identifiable information held by carriers and TRS about their customers; (ii) expand the definition of “breach” to include inadvertent access, use, or disclosure of consumer information; (iii) require breach notification reporting to governmental authorities within seven business days for breaches affecting 500 or more customers and annual reports for breaches affecting fewer than 500 customers; (iv) eliminate the customer breach notification requirement where such breach would not reasonably result in harm to the consumer; and (v) eliminate the mandatory waiting period to notify customers of a breach of covered data and require, instead, notice within 30 days of reasonably determining a breach occurred.
FTC Complaint against Kochava Unsealed
The Federal Trade Commission’s (“FTC”) motion to unseal its amended complaint against Kochava Inc., a mobile app analytics provider and data broker (“Kochava”), revealed details of Kochava’s alleged illegal data collection, selling, and sharing practices. The FTC claims that Kochava illegally uses mobile advertising IDs (“MAIDs”) to collect a “staggering amount” of consumers’ sensitive, personally identifying information, including, but not limited to, gender, ethnicity, income, marital status, app usage and interests, and precise geolocation data. The FTC further alleges that Kochava sells its customers a “360-degree perspective” on consumers’ behavior through subscriptions to its data feeds, without the consumers’ knowledge or consent. The federal court specifically found plausible the FTC’s assertion that Kochava’s geolocation data, although anonymous on its face, is linked by MAIDs and could enable third parties to identify specific individuals associated with those data sets and, therefore, reveal sensitive information such as visits to places of religious worship, mental health facilities, and reproductive healthcare clinics, down to the exact room visited.
U.S. ENFORCEMENT
HHS OCR Settles with Doctor’s Management Services for Ransomware Attack
The U.S. Department of Health and Human Services (“HHS”) Office for Civil Rights (“OCR”) has settled with Doctor’s Management Services related to alleged violations of the Health Insurance Portability and Accountability Act’s (“HIPAA’s”) Privacy and Security Rules for a ransomware attack that affected the electronic protected health information of 260,695 individuals. Under the settlement agreement, Doctor’s Management Services must pay $100,000 to HHS OCR and comply with a three-year corrective action plan that requires Doctor’s Management Services to: (1) identify the potential risks and vulnerabilities to the confidentiality, integrity, and availability of its electronic protected health information; (2) update its enterprise-wide risk management plan to address and mitigate any security risks and vulnerabilities identified; (3) review and revise, if necessary, its written policies and procedures to comply with the HIPAA Privacy and Security Rules; and (4) provide workforce training on HIPAA policies and procedures.
HHS OCR Settles with Saint Joseph’s Medical Center for Disclosure of PHI
HHS OCR has settled with Saint Joseph’s Medical Center for potential violations of HIPAA’s Privacy Rule for the impermissible disclosure of COVID-19 patients’ protected health information to a national media outlet. OCR investigated Saint Joseph’s Medical Center after the Associated Press published an article about the medical center’s response to the COVID-19 public health emergency, which included photographs and information about the facility’s patients. Under the settlement agreement, Saint Joseph’s Medical Center must pay $80,000 to HHS OCR and implement a two-year corrective action plan, in which Saint Joseph’s Medical Center must develop and implement written policies and procedures that comply with the HIPAA Privacy Rule and train its workforce on the revised policies and procedures.
SEC Files Lawsuit against SolarWinds and CISO
The Securities and Exchange Commission (“SEC”) filed a complaint against SolarWinds Corp. (“SolarWinds”) and its then-vice president of security and architecture (effectively, its chief information security officer), Timothy G. Brown (“Brown”), for allegedly defrauding SolarWinds’ investors and customers through misrepresentations about its cybersecurity practices, risks, and vulnerabilities. The complaint alleges that, at least between SolarWinds’ October 2018 initial public offering and its December 2020 announcement that it was the target of a nearly two-year-long cyberattack, SolarWinds and Brown, in contrast to internal communications, overstated SolarWinds’ cybersecurity practices and misrepresented or omitted specific known deficiencies in presentations to investors and disclosure filings to the SEC. The complaint seeks injunctive relief, disgorgement with pre-judgment interest, civil penalties, and a permanent officer and director bar against Brown.
FTC Orders Prison Communications Providers to Notify Consumers of Future Data Breaches
The Federal Trade Commission (“FTC”) will require prison communication company Global Tel*Link Corp. and two of its subsidiaries (collectively, “Tel*Link”) to notify consumers of any future data breaches. In its complaint, the FTC stated that Tel*Link failed to implement adequate security safeguards to protect its users’ personal information. The complaint states that Tel*Link and a third-party vendor copied a large volume of sensitive unencrypted data into the cloud but failed to adequately protect this data, which included full names of users, dates of birth, phone numbers, passwords, and Social Security numbers. In addition, the FTC found that Tel*Link waited nine months to notify affected customers, and only contacted a fraction of the affected users. The FTC’s proposed order contains numerous requirements for Tel*Link, including (1) implementing a comprehensive data security program, (2) notifying users within 30 days of future data breaches or security incidents triggering regulatory reporting requirements, and (3) notifying the FTC within 10 days of reporting a security incident to any local, state, or federal authorities.
INTERNATIONAL LAWS & REGULATIONS
G7 Leaders Agree on AI Guiding Principles and Voluntary Code of Conduct for AI Developers
G7 leaders announced they reached agreement on International Guiding Principles for Organizations Developing Advanced Artificial Intelligence Systems and a voluntary Code of Conduct for AI Developers. The 11 guiding principles are intended to help organizations promote the safety and trustworthiness of artificial intelligence (“AI”) technology. The principles include publicly reporting AI system capabilities and limitations to promote transparency and accountability; taking appropriate steps to identify, evaluate, and mitigate risks across the lifecycle of AI systems; engaging in appropriate information sharing regarding AI system incidents; and investing in cybersecurity. The code of conduct builds on the guiding principles by providing additional detailed guidance on responsible development, deployment, and use of AI systems. G7 leaders have called on organizations to publicly commit to the code of conduct.
29 Countries Agree to Bletchley Declaration on Risks and Opportunities for Artificial Intelligence
Twenty-nine countries, including the United States, China, the United Kingdom, France, and Brazil, signed the Bletchley Declaration setting forth a shared understanding of the opportunities and risks posed by AI. The declaration lays out an agenda for future cooperation focused on identifying AI safety risks of shared concern, building a shared scientific and evidence-based understanding of those risks, and developing risk-based policies across participating countries to ensure safety in light of such risks. The signatories committed to maintaining a dialogue on these issues to support international cooperation. Korea will co-host a mini virtual summit on AI within the next six months, after which France will host an in-person summit in the fall of 2024.
Proposed Amendments to Canada Artificial Intelligence Legislation Submitted
The Canadian Minister of Innovation, Science and Industry submitted proposed amendments to Bill C-27. Bill C-27 would overhaul Canada’s federal privacy legislation and also enact the Artificial Intelligence and Data Act (“AIDA”), which is intended to govern the deployment of AI technologies in Canada. The AIDA would impose significant obligations on organizations that create and use AI systems, including assessment and record-keeping, risk mitigation, transparency requirements, and reporting obligations relating to “high-impact” systems. Violations of the AIDA could result in penalties ranging from the greater of 10 million Canadian dollars or 3 percent of an entity’s global revenues to the greater of 25 million Canadian dollars or 5 percent of an entity’s global revenues. The proposed amendments are intended to provide additional clarity to the AIDA by defining classes of systems that are considered “high impact” and specifying obligations for general-purpose AI systems, among other things. The amendments identify seven “high-impact” areas: employment, whether or not to provide a service to an individual, biometric information, content moderation, healthcare and emergency services, legal decisions, and law enforcement.
Tianmei Ann Huang, Amanda M. Noonan, and Jason C. Hirsch contributed to this article.