As artificial intelligence and machine learning (collectively “AI”) models continue to develop, become more sophisticated, and generate significant media coverage, companies are increasingly considering the integration of AI applications and tools into their routine operations. These powerful language models offer a range of possibilities, from streamlining customer service to automating content generation.
Generative AI Defined
Generative AI applications, many of which are built on large language models, are a subset of AI that focuses on creating new content or information. Unlike traditional AI models that rely on pre-existing data or rules, generative AI models can create new, unique, or novel output by learning patterns and structures from the vast amounts of data on which they are trained. Generative AI applications are designed to understand a prompt or other input received from the user and generate a human-like response. Generative AI models have the potential to produce creative and contextually relevant content, ranging from conversational responses to articles, fictional stories, poetry, artwork and music. The power of generative AI lies in its ability to generate new and meaningful content that mimics human-like conversation, making it a promising tool for various business functions and operations.
Generative AI models are exposed to extensive pools of training data to teach them language patterns and structures. This allows generative AI models to produce responses and content that appear as if they were created by a person rather than by a computer.
Developers have also begun building AI applications and platforms for specific purposes or directed at specific markets. These include AI applications developed specifically to perform human resources or legal functions or to serve as online customer service chat agents. While those applications may raise related issues and concerns, this article addresses general-purpose generative AI applications only and does not address discrete AI applications developed for specific purposes.
Potential Issues and Concerns
While generative AI applications provide many opportunities, they also create a number of issues, risks and concerns. These concerns must be carefully analyzed and addressed when a company considers utilizing generative AI applications to perform regular or routine business functions. By understanding and proactively addressing these concerns, companies can harness the benefits of AI while navigating potential pitfalls, ensuring successful and responsible implementation while minimizing their risk.
Accuracy
It is important to acknowledge that the responses generated by generative AI applications, while often helpful and informative, may be misleading or even entirely wrong. Although generative AI applications are trained on vast amounts of data, they do not possess genuine cognitive understanding. Further, while some generative AI applications are continuously trained on new data, others are not exposed to new training data after their original training and do not offer current or real-time knowledge. As a result, responses or content produced by generative AI applications may not always reflect the most current information and may include incorrect or outdated facts.
AI applications can produce reasonable and plausible-sounding replies to user prompts that are completely wrong or entirely fabricated, often described as hallucinations, confabulations or delusions. While this may be acceptable if the application is used for a creative task such as writing a fictional story or poetry, it is problematic if the task is to draft a factually correct article, white paper, marketing piece, report or other content that must be accurate. The most dangerous aspect of hallucinations, however, is that they appear to be real, accurate, and appropriate, so it may often be difficult to distinguish an accurate AI-generated response from one that is a hallucination.
Potential Copyright Issues
Generative AI applications are trained on vast amounts of data, which often include copyrighted material obtained from a variety of sources. Frequently, that training data is obtained simply by scraping the internet and a large number of websites for content. The result is that content produced by generative AI applications may incorporate copyrighted materials or materials that infringe upon intellectual property rights. Generative AI applications that produce visual content such as artwork or simulated photos, for instance, may include enough of an actual photo or existing piece of art to infringe the copyright associated with that work.
Several authors recently filed lawsuits against two developers of generative AI applications, alleging copyright violations based on the inclusion of their books in the training data for those applications. It is unclear how those lawsuits will be resolved or what impact they will have on AI copyright issues, but this will remain an area of concern until they are.
The user of content produced by a generative AI application is responsible for complying with copyright laws regardless of where the content originated. Any company that intends to use content produced by a generative AI application, especially content to be published or used commercially in any manner, should carefully review that content to confirm that it is unique and original and that it does not contain any material that violates another party’s intellectual property rights.
Further, under U.S. law, the U.S. Copyright Office only recognizes copyright and provides protection for works created or produced by human beings. According to recent guidance provided by the U.S. Copyright Office, when AI primarily determines the expressive elements of a work, the generated material is not the product of human authorship and therefore not copyrightable. As a result, any content that is primarily generated by a generative AI application or other AI tool may not be eligible for copyright protection without significant additional human contribution.
Ownership of Content
Ownership concerns regarding content produced by generative AI applications arise due to the unique nature of that content. While users input the prompts and guide the conversation, ownership of the content generated by generative AI applications can be a complex matter that is subject to the terms of service and specific license agreements between the user and the developer of the generative AI application. The terms of service and related agreements govern users’ rights in the content produced by a generative AI application, as well as any ownership rights a user may relinquish when inputting prompts or other information into the generative AI system.
Bias
Bias in the content produced by generative AI applications is a significant consideration when using AI language models. Generative AI applications are trained on vast amounts of data, which may include various types of biases present in the training data. These biases may include actual discriminatory or inflammatory content, societal prejudices, or underrepresentation of certain groups or perspectives. Because generative AI applications do not possess consciousness or cognitive understanding, they are unable to independently recognize bias, be sensitive to it, or understand that their responses to prompts may include bias. Consequently, the responses produced by generative AI applications can reflect and perpetuate the biases that existed within their training data. Generative AI application developers have made efforts to mitigate bias during the training process, but eliminating all biases is a complex challenge that is beyond the capability of current AI models and algorithms.
Generative AI application users should be aware of this risk and critically evaluate the content produced by generative AI applications for potential bias and other related issues.
Managing the Use of AI as Part of Routine Business Operations
A company that wants to begin using generative AI applications to perform routine business functions should start by establishing robust corporate controls, policies, and oversight mechanisms. By proactively addressing these issues, companies can harness the potential efficiencies offered by generative AI applications while mitigating potential risks and safeguarding their reputation. There are a number of steps that organizations should consider proactively taking prior to deploying or using generative AI applications to perform regular or routine business functions in order to assure the safe and efficient use of the technology.
Company Governance
When considering the integration of generative AI applications to perform business functions, the first step every company should consider is establishing written governance over who within the organization is permitted to use generative AI applications, for what purposes, and with what permissions or authorizations. Written governance will enable companies to retain control over and manage the use of generative AI applications within their organizations. Conversely, a company that fails to establish a formal governance policy risks having employees use generative AI applications without oversight, without training and support, without understanding the applications’ limitations, and without appreciating the risks to the organization discussed above.
Written Policies
In addition to formal company governance, companies seeking to deploy generative AI applications should establish, and provide to employees, written policies, guidelines and best practices related to the use of generative AI applications. These policies should set guidelines and expectations for employees and stakeholders regarding the use of generative AI applications and address key issues of concern such as data privacy, security, ethical considerations, and compliance with applicable laws and regulations.
Clear guidelines should be established for the appropriate use of generative AI applications, specifying the types of tasks for which they can be used and any limitations that need to be considered. The policies should also address bias mitigation, ensuring that employees are aware of the potential for bias in AI-generated content and requiring review specifically for the purpose of mitigating any such bias.
Regular Review and Feedback
Companies deploying generative AI applications should also establish internal processes for the ongoing monitoring and evaluation of the performance of the generative AI application. These reviews should identify areas of concern and opportunities for improvement and ensure that the use of generative AI applications aligns with the company’s overall goals and values. Regular feedback opportunities with employees who use generative AI applications can provide insights for optimization and allow for continuous refinement of the system’s usage within the business operations.
Companies deploying generative AI should also regularly review written policies, guidelines and best practices to ensure that they are adapting to and keeping up with changing circumstances and to address any emerging ethical, legal, or operational concerns associated with the use of generative AI applications.
Internal Expertise
Companies should consider identifying a specific employee within the organization who is tasked with developing and maintaining expertise in generative AI applications. This employee’s role is to serve as an internal resource for the rest of the company, monitor the ongoing development of generative AI applications, and advocate for responsible and effective implementation of the technology. Having an internal expert in the deployment and use of generative AI applications will help ensure that employees receive proper guidance and training on using generative AI applications effectively, responsibly, and legally.
Legal Guidance
As discussed above, AI technologies such as generative AI applications can implicate a number of legal considerations and concerns. Consulting with legal experts experienced in AI-related issues helps ensure that the company, its leadership, and its board understand the applicable laws and regulations pertaining to data privacy, intellectual property, consumer protection, and other relevant areas implicated by generative AI applications, and that they have established mechanisms to address them.
Legal counsel can help company leadership assess and mitigate any potential risks, liabilities, and legal obligations associated with the use of generative AI applications. Counsel can provide guidance on drafting and implementing appropriate terms of service, privacy policies, and user agreements that specifically address the use of generative AI applications. Additionally, legal counsel can assist in addressing copyright concerns, fair use considerations, and any potential issues related to content ownership and licensing. By obtaining legal guidance, companies can proactively mitigate risks, protect their interests, and ensure compliance with the evolving legal landscape surrounding the use of generative AI applications.