AI may be both the most “powerful capability of our time” and the “most powerful weapon of our time.”
That’s according to Jen Easterly, Director of the Cybersecurity and Infrastructure Security Agency, when interviewed at Vanderbilt University in Nashville, Tennessee two months ago.1 Since then, many would agree that AI regulation is needed, hence the global tidal wave of proposed laws and regulations focused on safety, reliability and other risks.
Despite this environment, last Friday’s announcement by the White House of voluntary commitments from seven leading AI companies2 to manage the risks posed by AI was nonetheless surprising. The announcement prominently included eight broad commitments with three types of key cybersecurity and data privacy commitments that have proven elusive earlyon with other emerging technologies:
-
Under “Ensuring Products are Safe Before Introducing Them to the Public” the AI companies agree to “commit to internal and external security testing of their AI systems before their release.”
-
Under “Building Systems that Put Security First” the AI companies agree to “commit to investing in cybersecurity and insider threat safeguards to protect proprietary and unreleased model weights.”
-
Under “Earning the Public’s Trust” the AI companies agree to “commit to prioritizing research on the societal risks that AI systems can pose, including … protecting privacy."3
Although these commitments support the new National Cybersecurity Strategy released on March 2nd by the White House to “Shape Market Forces to Drive Security and Resilience,"4 it remains to be seen whether they will be enforced by holding the stewards of data accountable, and/or shifting liability for insecure software products and services.5
Meanwhile, the Federal Trade Commission continues to be an early mover in shaping standards for generative AI. Two weeks ago, the FTC served OpenAI with a Civil Investigative Demand seeking detailed responses to almost two hundred data requests, about a third of which related to OpenAI’s cybersecurity and data privacy practices.6
The data requests reveal new cybersecurity issues the FTC has identified relating to generative AI. For example, Data Request No. 39 asks to “[l]ist all instances of known and actual attempted “prompt injection” attacks” and then footnotes this to mean “any unauthorized attempt to bypass or manipulate … using prompts that ignore previous instructions.”
The data requests also reveal new data privacy issues. Data Request No. 45, for example, asks OpenAI to describe in detail data privacy practices relating to how and why all Personal Information data is collected, used, analyzed, stored or transferred. Data Request Nos. 46-49 ask OpenAI to describe in detail its data retention, deletion and deidentification practices.
The FTC’s Chair, Lina Khan, has also previously noted that “[p]olicing data privacy and security is now a mainstay of the FTC’s work” and “we must update our approach to keep pace with new learning technologies and technological shifts."7 It’s still early days, but with last week’s voluntary commitments and the CID the week before, approaches are updating rapidly.
[1] https://cyberscoop.com/easterly-warning-weapons-artificial-intelligence-chatgpt/
[2] Amazon, Anthropic, Google, Inflection, Meta, Microsoft, and OpenAI.
[5] https://www.whitehouse.gov/wp-content/uploads/2023/03/National-Cybersecurity-Strategy-2023.pdf at 19-20.
[6] https://www.washingtonpost.com/documents/67a7081c-c770-4f05-a39e-9d02117e50e8.pdf?itid=lk_inline_manual_4 at 12-17.