Last week, a New York Times article discussed ChatGPT and AI’s “democratization of disinformation,” along with their potentially disruptive effects on upcoming political contests. Asking a chatbot powered by generative AI to produce a fundraising email is not the main concern, according to the article. Leveraging that technology to create and disseminate disinformation and deepfakes is. Some of the tactics described in the article, though intended to further political goals, are unsettling well beyond politics, including in the workplace.
“Now any amateur with a laptop can manufacture the kinds of convincing sounds and images that were once the domain of the most sophisticated digital players. This democratization of disinformation is blurring the boundaries between fact and fake…”
Voice-cloning tools could be used, for example, to create convincing audio clips of political figures. One clip might convey a message consistent with the campaign’s platform, albeit never uttered by the candidate. Another might cast the candidate in a bad light by suggesting the candidate engaged in illicit behavior or expressed ideas damaging to the campaign, such as using racially charged language. Either way, such clips would mislead the electorate. The same is true of AI-generated images and videos.
“And as synthetic media gets more believable, the question becomes: What happens when people can no longer trust their own eyes and ears?”
It’s not hard to see how these same technologies, which are increasingly accessible to almost anyone and relatively easy to use, can create significant disruption and legal risk in workplaces across the country. Instead of creating a false narrative about a political figure, a worker disappointed in his annual review might generate and covertly disseminate a compromising “video” of his supervisor. Failing to investigate a convincing deepfake could have substantial and unintended consequences. Of course, this kind of misinformation can also be directed at executives or at the company as a whole.
Damaging disinformation and deepfakes are not the only risks posed by generative AI technologies. To better understand the kinds of risks an organization might face, a good first step is assessing how workers are using ChatGPT and similar generative AI tools. If those workers are like the millions of other people using ChatGPT, their activities might include performing research, drafting communications such as the fundraising email in the NYT article discussed above, writing code, and other tasks. Workers in different industries with different responsibilities will likely approach the technology with different needs and identify a range of creative use cases.
Greater awareness of how generative AI is being used in an organization can help with policy development, but some policies are likely to make sense for most, if not all, applications of this technology.
Other workplace policies generally apply. A good example is harassment and nondiscrimination policies. As with an employee’s activity on social media, an employee’s use of ChatGPT is not shielded from existing policies on discrimination or harassment of others; those policies should continue to apply.
Follow the application’s terms and understand its limitations. Using online resources for company business in violation of those resources’ terms of use could create legal exposure for the organization. Employees also should understand the capabilities and limitations of the tools they are using. For instance, while ChatGPT may seem omniscient, it is not, and it may not be up to date: OpenAI notes that “ChatGPT’s training data cuts off in 2021.” Knowing this kind of information can spare the organization (and the employee) some embarrassment.
Avoid impermissible sharing of data. ChatGPT is just that: a chat, or conversation, with OpenAI, and one that OpenAI employees can view. As OpenAI explains:
Who can view my conversations?
As part of our commitment to safe and responsible AI, we review conversations to improve our systems and to ensure the content complies with our policies and safety requirements.
Employees should avoid sharing personal information, as well as confidential information about the company or its customers, without understanding the obligations that may apply. For example, the organization may have contractual obligations to its customers prohibiting the sharing of their confidential information with third parties. Similar obligations could arise from website privacy policies or other statements through which the organization has represented how it would share certain categories of information.
Establish a review process to avoid improper uses. Information generated through AI-powered tools and platforms may not be what it seems. It may be inaccurate, incomplete, or biased, or it may infringe another’s intellectual property rights. The organization may want to review certain content obtained through the tool or platform before it is used, to avoid providing subpar service to customers or facing an infringement lawsuit.
There is a lot to think about when considering the impacts of ChatGPT and other generative AI technologies. That includes carefully wading through political blather during the imminent election season. It also includes thinking about how to minimize the risks these technologies pose in the workplace. Part of that can be accomplished through policy, but there are other steps to consider as well, such as employee training and monitoring utilization.