On May 3rd, Lina Khan, the Chair of the Federal Trade Commission, made clear that the FTC is well equipped to handle the issues brought to the fore by the A.I. sector, “including collusion, monopolization, mergers, price discrimination and unfair methods of competition.”1 The increasing use of A.I. has also raised substantial cybersecurity issues.
Interest in artificial intelligence systems, “which can act in ways unexpected to even their own creators,”2 has recently skyrocketed. Issues that had existed for decades were largely ignored until a few months ago, when, seemingly all at once, A.I. began to face the same legal wrangling that other emerging technologies have confronted for years.
It started with a data breach. On Friday, March 24, OpenAI, the maker of ChatGPT, announced that several days earlier, during a nine-hour window, it had been possible for some users “to see other user’s first and last name, email address, payment address, the last four digits (only) of a credit card number, and credit card expiration date.”3 More than a million users were impacted.
Then, within days, an open letter titled Pause Giant AI Experiments, calling for a six-month pause, was circulated (more than 27,000 technologists have signed it),4 a complaint was filed with the Federal Trade Commission alleging that ChatGPT was “a risk to privacy and public safety,”5 and Italy’s data protection authority banned ChatGPT over data privacy concerns.6
As noted in our February article addressing A.I. law and policy, risk management guidance for A.I. has arrived.7 On March 20, 2023, the National Institute of Standards and Technology (NIST) updated a framework designed to address A.I. risks, the Artificial Intelligence Risk Management Framework (AI RMF). NIST frameworks have a relatively short but significant history.
In 2013, an executive order was issued requiring government and private-sector organizations to collaborate on how to “maintain a cyber environment that encourages efficiency, innovation, and economic prosperity while promoting safety, security, business confidentiality, privacy, and civil liberties.”8
In 2014, NIST published the Cybersecurity Framework (CSF). The CSF has become so widely accepted that Ohio, Connecticut, and Utah have enacted safe harbor statutes providing that, if an organization’s written information security program “reasonably conforms” to the CSF, the organization has an affirmative defense to civil tort claims stemming from a data breach.9
In 2020, NIST published a companion to the CSF for data privacy, the Privacy Framework (PF). The PF is designed to help organizations keep pace with technology advancements and new uses for data while improving risk management. Recently, Tennessee passed a comprehensive data privacy law that requires companies to have a written privacy program that conforms to the PF.10
A New NIST Framework for AI Is Here
The AI RMF, like the CSF and PF, is intended to be flexible and “adapt to the A.I. landscape as technologies continue to develop” and “augment existing risk practices which should align with applicable laws, regulations, and norms.” The AI RMF lists “security breaches” as one of the main potential harms for organizations.
The AI RMF also addresses the security concerns raised by “adversarial” A.I., the practice of intentionally manipulating A.I. systems or their inputs to mislead or attack them, and states that A.I. systems can be considered secure only if they maintain confidentiality, integrity, and availability through protection mechanisms.11
But for all the effort that has gone into making the AI RMF easy to understand, it is not easy to apply just yet. It is abstract, like its central subject, and is expected to evolve and change substantially over time. For this reason, NIST has provided these companion resources:
- NIST AI RMF Playbook
- AI RMF Explainer Video
- AI RMF Roadmap
- AI RMF Crosswalk, and
- Various Perspectives12
Collectively, these resources are intended to help “individuals, organizations, and society” better manage A.I. risks. Therein lies the rub: the AI RMF’s breadth reflects an attempt to address all risks for everyone. But another worthy companion exists: NIST’s Interagency Report 8286D, released last November, which focuses on cybersecurity risk management.13
The Interagency Report encourages public- and private-sector cybersecurity professionals at all levels to use business impact analysis (BIA), and it emphasizes the importance of continually analyzing and understanding emerging risks to the enterprise resources that enable enterprise objectives.
The Interagency Report describes how organizations can address risk factors that could have a material adverse effect on their financial position and ability to operate by doing such things as cataloguing and categorizing assets and following a risk analysis process. At its core, the report encourages continual efforts to understand potential business impacts.
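To make that process concrete, the sketch below builds a small asset catalog and ranks entries by a simple impact-times-likelihood score. It is only an illustration: the asset names, categories, and 1-to-5 scales are our own assumptions, and the Interagency Report does not prescribe any particular schema or scoring formula.

```python
# A minimal sketch of the asset cataloguing and risk analysis the
# Interagency Report encourages. The assets, categories, and 1-5 scales
# are illustrative assumptions, not values taken from IR 8286D.
from dataclasses import dataclass

@dataclass
class Asset:
    name: str
    category: str         # e.g., "data", "system", "service"
    business_impact: int  # 1 (negligible) .. 5 (severe) if compromised
    likelihood: int       # 1 (rare) .. 5 (frequent) chance of compromise

    @property
    def risk_score(self) -> int:
        # A simple impact x likelihood heuristic for prioritization.
        return self.business_impact * self.likelihood

catalog = [
    Asset("customer PII database", "data", business_impact=5, likelihood=3),
    Asset("fraud-scoring ML model", "system", business_impact=4, likelihood=4),
    Asset("public marketing site", "service", business_impact=2, likelihood=4),
]

# Rank assets so remediation effort follows potential business impact.
for asset in sorted(catalog, key=lambda a: a.risk_score, reverse=True):
    print(f"{asset.risk_score:>2}  {asset.name} ({asset.category})")
```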
That brings us back to adversarial A.I., of which Jonathan Care provided vivid examples in his article last month, “Fight AI with AI”:
"[A]n autonomous vehicle that has been manipulated could cause a serious accident, or a facial recognition system that has been attacked could misidentify individuals and lead to false arrests. These attacks can come from a variety of sources, including malicious actors, and could be used to spread disinformation, conduct cyberattacks, or commit other types of crimes."14
Indeed, organizations such as Capital One are already using A.I. to curb fraud.15 Eventually, these capabilities will evolve into mandatory legal duties just as they have for such things as multi-factor authentication, cybersecurity incident response planning, and vendor and patch management programs.16
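To ground the fraud example, the sketch below flags anomalous transactions with scikit-learn’s IsolationForest. The synthetic data and the two features used are illustrative assumptions on our part and do not reflect Capital One’s (or any institution’s) actual models.

```python
# A generic sketch of A.I.-assisted fraud screening: train an anomaly
# detector on ordinary transactions, then flag ones that deviate.
# The data and features here are synthetic, illustrative assumptions.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Features per transaction: [amount in dollars, hour of day].
normal = np.column_stack([rng.normal(60, 20, 500), rng.normal(14, 3, 500)])
suspect = np.array([[4200.0, 3.0], [3800.0, 4.0]])  # large, late-night charges

model = IsolationForest(contamination=0.01, random_state=0).fit(normal)

# predict() returns 1 for inliers and -1 for suspected anomalies.
print(model.predict(suspect))    # the outliers should come back as -1
print(model.predict(normal[:3])) # typical charges mostly come back as 1
```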
NIST Frameworks Are Favored
NIST frameworks incorporate industry standards that regulators and courts look to when establishing legal duties. The FTC has previously stated that, in certain instances, organizations that had monitored security and communicated about cyberattacks based on guidance found in the CSF may not have violated consumers’ rights.17
After her confirmation in 2021, Ms. Khan declared that the FTC must update its approach to keep pace with new learning technologies and “technological shifts.” In February, March, and April, the FTC issued warnings against overhyping A.I.’s capabilities18 and reminded companies to maintain core principles of fairness.19
For cybersecurity risk, the FTC has repeatedly encouraged organizations to evaluate and adjust their security programs “in light of any changes to operations or business arrangements,” including new or more efficient technological or operational methods that will have an impact. NIST’s AI RMF and Interagency Report should be considered for these purposes.
[10] https://www.polsinelli.com/elizabeth-liz-harding/publications/tennessee-information-privacy-act
[11] Id. at 15.