M&A Transactions: Diligencing AI Issues with Target Companies
Monday, July 29, 2024

Is your M&A target a company that develops or uses artificial intelligence (“AI”) tools? AI, and generative AI technologies specifically, are powerful business tools but present novel legal issues in the context of M&A transactions. It is increasingly important to identify and understand the unique legal risks associated with the use of AI technologies, tailor your diligence to investigate them and include AI-specific reps and warranties in your deal documents.

To effectively do this, it is important to have someone well-versed in AI technology and the associated legal issues on the deal team. Many subtle issues, if not properly understood and addressed, can lead to liability and/or loss of business value. This article addresses the expansion of due diligence, beyond standard tech diligence, to include the analysis of AI tools developed or used by the target.

Understanding the Legal Issues with AI

This article will cover some of the key AI-specific legal issues to consider in M&A, but the issues in each transaction will be unique depending on the target company’s involvement with AI. For example, a company’s involvement may include one or more of the following: i) collecting data to license to others to train AI; ii) training AI models; iii) developing AI applications; iv) using third-party AI applications; v) fine-tuning third-party AI models; and vi) using AI content generated by third parties for the company, among others. Additionally, each AI tool is unique and can present unique legal issues. Moreover, for any given AI tool, different versions (e.g., individual/free and enterprise/paid versions) may exist. Often these tools handle inputs/outputs differently and come with different legal terms. In some cases, the way that a company uses an AI tool may raise different legal considerations. On top of that, even defining AI tools is difficult. Some tools’ primary purpose is to leverage AI to perform some function, but AI is also being built into many general-purpose tools (e.g., browsers, Word and other document creation programs). While this is not a comprehensive list of the different aspects of AI that can create unique legal issues in a transaction, it provides some sense of what needs to be considered to understand and assess the AI-specific issues that may be relevant.

AI-specific Diligence

Once you understand the target company’s involvement with AI, it is important to consider the unique legal issues and the diligence needed beyond the standard diligence questions. Here are some of the topics to cover and why they are important. Again, this is not a comprehensive list, but it provides some useful examples.

  1. Seek disclosure of all AI-related products and services used or developed by the company. AI can be used for a wide variety of functions and applications. It is important to identify each of the relevant AI technologies used by the target, including the specific version of the technology and the terms that govern use of that technology. Many AI tools have free and paid versions. In general, the free version has much riskier terms, some of which are addressed below (e.g., confidentiality of inputs and indemnities). It is also important to ensure you obtain information on general tools that include AI technology. If the target is developing AI tools, it is important to understand whether the development was done legally and employed “responsible AI” practices. This is necessary to develop a comprehensive due diligence plan and assess the legal implications of the target’s AI involvement.
  2. Does the company have an AI governance committee and an AI policy? Companies that do not have AI governance and policies are more likely to have employees using AI in ways that create legal risk and potential loss of assets, and it is more likely that the company is not managing AI-related legal issues.
  3. Determine what AI-specific vendor diligence the company undertakes. Most companies conduct general technology diligence before adopting a new technology. However, there are several AI-specific vendor diligence questions that companies need to ask, and many companies have not yet updated their vendor diligence checklists to address AI. One reason this is important is that the law imposes duties and liability on both developers and deployers of AI technology. For example, the EEOC held a company liable for its use of an AI recruiting tool that discriminated based on applicants’ age, even though the company did not develop that technology. The Colorado AI consumer protection law specifically imposes obligations on developers and deployers of AI. This means that if the company is using an AI tool, it needs to ensure that such use will not create liability for the company. Some of the topics that are being overlooked include whether the AI tool was trained on content that the vendor legally possessed and had a right to use for AI training, whether the vendor used responsible AI development techniques to avoid biased and discriminatory output, and whether the tool makes sensitive decisions (e.g., consumer finance, housing, or employment decisions) without human oversight, among others. Much of the information needed to assess these issues resides with the vendor and cannot be determined by just testing an AI tool. Other vendor issues include how the tools handle inputs and outputs. Are they confidential, or does the tool reuse them, risking loss of confidentiality? Has the company used any AI tools that do not treat user inputs as confidential? If so, and employees have entered confidential information, that information may no longer be confidential. This is not theoretical; it has happened in some high-profile matters.
  4. If the company is training AI models, has it legally obtained the content used to train them, and does it have the legal right to use that content for AI training? Diligence should ensure the company legally obtained the content used and that the company has complied with any licenses under which the content was obtained. Additionally, even if the company legally possesses the content, it is important to determine whether it has the legal right to use that content for the purpose of training AI. Some companies have collected user data for years, but the privacy policy under which it was collected did not permit the use of that data for training AI. Training AI on content for which the company does not have the legal right can lead to “algorithmic disgorgement,” which requires a company to destroy the content, the AI models, and any algorithms that were created based thereon. This remedy has been enforced by the FTC against multiple entities. Some companies, aware that they may not have the right to train on previously collected data if the prior terms do not address this, simply change their terms to permit it. This too may present legal problems. For more on this, see FTC Warns About Changing Terms of Service or Privacy Policy to Train AI on Previously Collected Data.

These are just some examples of how AI-related data issues can be tricky and why standard diligence may not cover them. If part of the deal value is a company’s AI models (and the significant investment made to create them), this is a critical issue to assess to avoid potential loss of value.

  5. If the company relies on copyright to protect its assets (e.g., music, images, or other media), does it use generative AI to create these assets? If so, this may be a problem because the output of generative AI is typically not subject to copyright protection, as it is not deemed human-authored. This means that there may be no copyright protection for these materials. Moreover, if the company uses AI-generated work as part of a larger work and seeks a copyright registration, has it disclosed and disclaimed the AI-generated portion? If not, the copyright registration may be invalid. For more, see Copyright Office Guidance on AI.
  6. Has the company granted any indemnities to AI tool providers? Standard diligence will seek identification of any indemnities that the company has granted, but it is prudent to ask this specific question because some companies may not be aware that they have granted such an indemnity. Many AI tools have a free, individual version and a paid, enterprise version. Many employees use the free version as part of their job (if the company does not have a policy prohibiting such use) and typically do not read the terms of service. However, the terms for many free versions require the user to indemnify the tool provider if the output infringes. Some AI tool providers offer an indemnity to users, but only for certain versions of the tool (e.g., the enterprise version). In some cases, the indemnity only applies if certain preconditions are met by the company. It is important to conduct diligence to determine whether the company is being indemnified and whether it understands and has met the preconditions to obtain the indemnity. For more, see Microsoft to Indemnify Users of Copilot AI Software – Leveraging Indemnity to Help Manage Generative AI Legal Risk.
  7. Does the company use any AI code generators? If so, has it updated its open source policy to manage the open source legal risks associated with such use? These tools leverage AI to assist developers by using AI models to auto-complete or suggest code based on developer inputs or tests. These tools are typically trained on open source software, which is free to use but comes with license conditions with which the company must comply. If the output of an AI code generator is used in software the company is developing, the company may need to ensure compliance with those license obligations. Most open source licenses permit the user to copy, modify, and redistribute the code. However, the conditions vary by license and can range from simple compliance obligations (e.g., maintaining copyright notices) to more onerous, substantive requirements. The more substantive provisions can require that any software that includes or is derived from the open source software be licensed under the open source license and that the source code for that software be made freely available, permitting others to copy, modify, and redistribute the software for free. For companies that develop software to license for a fee, this can be a huge problem and can cause loss of return on the money invested to develop that software. If the software and the ability to license it for a fee are important business considerations for the deal, it is critical to conduct thorough diligence on this issue.
  8. Does the company exclusively own the content that it generated via AI tools? Some AI tools’ terms of use do not grant users exclusive ownership of the tool’s output. In some cases, the terms require that the user grant the tool provider a license to use any output for its own purposes, including to further train AI models. In some terms of use, the tool provider makes clear that the tool may generate the same output for another user. For at least these reasons, the company may not exclusively own the content it is using. In deals where that content is valuable, this may be an important issue; in other cases, it may not. But it is important to assess these issues.
  9. Address regulatory compliance for certain types of tools and uses of tools. Depending on what an AI tool does and how the company uses it, there may be specific regulatory requirements with which the company must comply. It is important to understand the range of legal issues that can arise based on these factors, and diligence should be customized to determine whether and how the company complies. For example, there are specific laws addressing the use of AI tools for employment, consumer finance, or housing decisions. In some cases, there are conditions and limitations on using an AI tool for employment decisions, and diligence should determine whether the company has met the conditions and operates within the limitations. Some new laws require certain disclosures when using an AI chatbot. If the company is using a chatbot, it is important to understand what laws apply and whether the company has complied with them. Additionally, certain AI tools output medical or legal information. It is important to ensure the outputs do not cross the line and give medical or legal advice, which can constitute the unauthorized practice of medicine or law. Other regulatory issues may be relevant based on the function of the tool or the company’s use.

This is one of the reasons it is important to have someone knowledgeable about AI law on the deal team. Many of these laws are new, and not all lawyers are up to date on some of the more nuanced AI-specific laws.

Conclusion

These are just some of the reasons that AI-specific diligence is needed and some of the topics that should be covered. It is important to include someone knowledgeable about AI law and technology on the deal team to ensure the relevant diligence issues can be identified and addressed based on the specifics of each deal. It is also important to be able to evaluate the responses, understand their significance, determine the need for additional follow-up, and draft and negotiate appropriate representations and warranties. If the firm handling the transaction does not have deep AI experience, consider engaging special AI counsel for these issues.
