ChatGPT, an AI chatbot developed by OpenAI, recently passed a graduate-level quantum computing exam – unsurprisingly, employers are keen to leverage its simple conversational interface and broad knowledge across many domains to boost productivity. And ChatGPT isn't the only such tool in town – others, such as Google Bard and Microsoft Bing (also powered by OpenAI technology), promise similar results. Many tasks carried out by knowledge workers appear to be within the scope of these AI tools – drafting an email announcing a team reorganization, writing copy for a top-secret new product, and even generating complex code. Is this a good idea? It may be, but it is important to understand the potential risks, which are wide-ranging and may include the loss of intellectual property.
Consider this: Would you select a third-party consultant without considering their background and training? Would you be comfortable sending confidential business information to that consultant without an NDA? Would you assume liability for copyright infringement or plagiarism within the consultant's work product without protections in place? Would you allow the consultant to use your confidential business information in their work for another client? These are the types of questions employers should ask themselves when considering using an AI tool like ChatGPT.
AI tools may use your data to further train the tool
AI tools like ChatGPT are natural language processing tools, meaning that they can understand and mimic human language. They usually operate as chatbots: users provide prompts, and the tool responds in kind. These "conversations" may then be used to further train the AI tool – that is, the tool learns from the conversations and improves itself. What distinguishes these AI tools from prior chatbots is their scale (the number of trainable parameters), their adaptability across many types of knowledge, their optimization (by humans, through a fine-tuning process), and their ease of use.
Most of these AI tools are easily accessible and free to use. For example, ChatGPT, which is accessed on OpenAI's website, allows anyone to create an account and use it for free. Some of these AI tools also allow API access and/or offer paid subscriptions, which may affect how user data is used. Users may be subject to different terms of use depending on the type of access they have. OpenAI, for example, treats user data obtained from ChatGPT differently from user data obtained through its API. Under OpenAI's API data usage policy, end users' data are not used to train its models by default. Because this default does not apply to ChatGPT, OpenAI may use data that end users input into ChatGPT to train its models.
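To make the distinction concrete, below is a minimal sketch of what API access looks like, using OpenAI's Python client library (the model name is illustrative, and the exact interface may vary by library version). The key point is that a request sent this way falls under OpenAI's API data usage policy – not used for training by default – while the same question typed into the ChatGPT website falls under the consumer terms.

```python
# pip install openai -- a sketch assuming the v1.x Python client; details may vary by version
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

# Per OpenAI's API data usage policy, data submitted through the API is not
# used to train its models by default -- unlike input typed into ChatGPT.
response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # illustrative model name
    messages=[{"role": "user", "content": "Summarize this quarter's goals."}],
)
print(response.choices[0].message.content)
```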
AI tools may have restrictions on the type of use and require disclosure of that use
Some AI tools may provide that their use be limited to certain activities and require disclosure if AI-generated content is used publicly. Employers should be careful not to violate those terms of use. For example, OpenAI's current usage policy prohibits the use of ChatGPT for certain activities, such as providing financial or medical advice, and for political campaigning and lobbying. OpenAI's terms of use also prohibit users from representing that output from ChatGPT was human-generated when it is not. Thus, the use of ChatGPT-generated output in a public-facing document may require employers to disclose that they are using ChatGPT.
Data input into AI tools may risk violating privacy laws
As detailed above, many AI tools use inputs and outputs to further train their models. Employers should therefore carefully monitor the type of data being input into ChatGPT. Employees may submit sensitive business information or data protected under privacy laws such as the California Consumer Privacy Act (CCPA) and/or the General Data Protection Regulation (GDPR), and the AI tool may then use that data to further train its models. For example, OpenAI provides that: "Data submitted through non-API consumer services ChatGPT or DALL·E may be used to improve our models." And because AI tools may use data to further train their models, there is a possibility that one user's input data could be output verbatim to another user.
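One practical safeguard is to screen prompts for sensitive content before they leave the company's systems. The Python sketch below is a hypothetical illustration of that idea – the `SENSITIVE_TERMS` list, the regex patterns, and the `screen_prompt` function are all assumptions for demonstration, not features of any AI tool – and a production system would need far more robust detection.

```python
import re

# Hypothetical examples only -- a real deployment would maintain a vetted,
# regularly updated list of confidential project names and terms.
SENSITIVE_TERMS = ["Project Falcon", "Q3 acquisition target"]

# Simple regex patterns for a few common categories of personal data.
PII_PATTERNS = {
    "email address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "US Social Security number": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "phone number": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
}

def screen_prompt(prompt: str) -> list[str]:
    """Return reasons a prompt should be blocked; an empty list means it passes."""
    findings = []
    for term in SENSITIVE_TERMS:
        if term.lower() in prompt.lower():
            findings.append(f"contains confidential term: {term!r}")
    for label, pattern in PII_PATTERNS.items():
        if pattern.search(prompt):
            findings.append(f"appears to contain a {label}")
    return findings

issues = screen_prompt("Draft an email to j.doe@example.com about Project Falcon")
if issues:
    print("Prompt blocked:", "; ".join(issues))  # do not send to the AI tool
```

A screen like this cannot catch every disclosure, of course – it is a backstop to the written policies discussed below, not a substitute for them.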
Data input into AI tools may risk loss of intellectual property protection
The disclosure of business information to an AI tool may render that information ineligible for intellectual property protection, such as trade secret or patent protection. These risks are present even if the AI tool does not use the user data as part of its training.
Trade secrets are protected by virtue of their secrecy. Thus, it is critical to take reasonable steps to protect them from disclosure. This generally includes measures such as clearly labeling trade secrets, training employees on confidentiality policies, and limiting access to secret information. Inputting a trade secret into an AI tool is akin to disclosing it to a third party – and such disclosures generally should not be made without a confidentiality agreement in place. Because there may be no confidentiality guarantee for data input into an AI tool, courts may conclude that use of the tool resulted in public disclosure of the information and loss of trade secret protection. Having policies in place that prohibit the disclosure of sensitive business information to AI tools such as ChatGPT may help businesses protect their trade secrets.
Similar concerns arise with respect to patents. Although inventions are ultimately protected through their disclosure, the timing and manner of that disclosure play a critical role. As explained in a previous blog, a party's prior public disclosure of an invention may cause an inadvertent surrender of patent rights. Off-the-cuff public disclosure may lead to frantic, last-minute patent drafting and filing efforts to avoid such surrender – efforts that may be unsuccessful. The disclosure of an invention to an AI tool may be considered one of these off-the-cuff public disclosures. Because the business is unaware that a public disclosure has occurred, it may not file the required patent application by the correct date (generally within one year of the public disclosure) to protect its invention.
AI tools may be susceptible to risks commonly associated with information stored in the cloud
Because AI tools likely retain user data, that data could be susceptible to security risks such as a data breach caused by bugs or malicious actors. Just last month, OpenAI CEO Sam Altman tweeted that ChatGPT had "a significant issue . . . due to a bug in an open source library." As a result, a small percentage of users were able to see the titles of other users' conversation histories.
Data output from AI tools may be subject to IP protections
Employers may open themselves up to risk by using content generated by an AI tool. As mentioned above, these AI tools are trained on large datasets. Some of that information may already be protected by intellectual property laws or may contain personally identifiable information. Thus, the output generated in response to a prompt may contain portions of trademarked, copyrighted, or otherwise protected material. In addition, some of the output generated for one user may be the same as, or similar to, output generated for another user. Employers will face difficulty determining whether generated content contains already-protected material, meaning they may be liable to the actual IP owner for infringement. As an example, OpenAI was recently sued over Codex, an AI tool akin to ChatGPT but specifically for coding, for alleged copyright violations based on output of code identical to sample code in a computer programming textbook.
Data output from AI tools may not be protectable
The protectability of an AI tool's generated material is questionable. The U.S. Copyright Office issued guidance providing that AI-generated material cannot be copyrighted. The Office further explained, however, that to the extent a human modified that material, those modified aspects could be protected. Similarly, with respect to patents, the Federal Circuit ruled last year in Thaler v. Vidal that AI systems are ineligible to be "inventors" because they are not human, and the Supreme Court recently denied Thaler's petition for review of the Federal Circuit's decision.
There is no guarantee that AI tools will present accurate information
There may also be accuracy concerns with AI tools. For example, OpenAI's terms of use state that, in some situations, the output may not accurately reflect real people, places, or facts. Not only that, an AI tool may generate extremely plausible falsehoods. After all, the tool is only as good as the dataset it is trained on, and there may be very little visibility into that dataset. While the free version of ChatGPT is based on GPT-3.5, which was trained on a known dataset, OpenAI has provided few details regarding the dataset on which GPT-4 was trained.
Use of AI tools may subject businesses to other liabilities
Employers should closely scrutinize the terms of use of any AI tool. Use of ChatGPT, for example, means that the user may be required to defend and/or indemnify OpenAI against any claims, losses, and expenses (including attorneys' fees) arising from or relating to the use of ChatGPT. Use of ChatGPT also means that the user agrees to mandatory arbitration and class action waiver provisions. Furthermore, these terms of use may be changed unilaterally, at any time, by the AI tool's creator.
Best practices for implementing AI tools
Other than an outright ban, employers can mitigate the risks of using these AI tools by implementing clear policies outlining allowed and prohibited uses. Prior to any use, employers should scrutinize the AI tool's terms of use, data usage policies, and/or privacy policies, to the extent they exist. Employers should thoroughly investigate the capabilities of any AI tool using data that is not business-sensitive and is free of personally identifiable information. Employers should opt for versions of AI tools that do not, by default, use input and output data for further training, or that offer options to turn off that data usage. Finally, employers should always require that any output generated by an AI tool be vetted for accuracy and potential infringement concerns. Each employer's risk profile depends on its specific use cases – seeking individualized advice will help minimize these risks.